Configuration

Customize Flowbaby's behavior through VS Code settings.

Automatic Detection

Flowbaby automatically detects a Python interpreter for its bridge (see Flowbaby.pythonPath below) and keeps its own dependencies in a dedicated virtual environment at .flowbaby/venv. This location prevents conflicts with project virtual environments (e.g., the Python Jedi language server overwriting Flowbaby's dependencies).
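
If automatic detection picks up the wrong interpreter, you can point Flowbaby at a specific Python explicitly via the Flowbaby.pythonPath setting described below. A minimal settings.json sketch; the bin/python suffix assumes a Unix-like venv layout (on Windows the interpreter lives under Scripts\python.exe), and the workspace path is a placeholder:

```jsonc
{
  // Leave as "python3" (the default) to keep automatic detection.
  // The path below assumes the Unix layout of the managed venv described above;
  // replace /path/to/workspace with your actual workspace location.
  "Flowbaby.pythonPath": "/path/to/workspace/.flowbaby/venv/bin/python"
}
```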

Unsupported Contexts

The following contexts are not validated or supported in this release:

  • Remote Development: VS Code Remote-SSH, WSL, Dev Containers
  • Multi-root Workspaces: Workspaces with multiple folder roots
  • Multi-user Workstations: Memories are stored per workspace, not per user

Support for these contexts may be added in future releases.

Settings

Access settings via File → Preferences → Settings → Extensions → Flowbaby:

  • Flowbaby.enabled: Enable or disable automatic memory capture and retrieval. Default: true
  • Flowbaby.maxContextResults: Maximum number of memory results to inject into Copilot context. Default: 3
  • Flowbaby.maxContextTokens: Maximum token budget for retrieved context; higher values may increase latency and memory usage. Default: 32000
  • Flowbaby.searchTopK: Maximum number of candidates to request from the Flowbaby search engine before ranking (normalized to be ≥ maxContextResults, hard-capped at 100). Default: 10
  • Flowbaby.ranking.halfLifeDays: Recency half-life (in days) used for ranking; memories older than this window have their relevance score halved. Default: 7
  • flowbaby.notifications.showIngestionSuccess: Show toast notifications when memory ingestion completes successfully (errors are always shown). Default: true
  • Flowbaby.pythonPath: Python interpreter to use for the Flowbaby bridge; leave as python3 to auto-detect the workspace .venv, or set an explicit path. Default: python3
  • Flowbaby.logLevel: Logging verbosity level: error, warn, info, or debug. Default: info
  • Flowbaby.debugLogging: Enable a dedicated debug output channel with detailed bridge and extension diagnostics. Default: false
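
These settings can also be edited directly in settings.json. A sketch using the keys above; every non-default value here is purely illustrative:

```jsonc
{
  "Flowbaby.enabled": true,
  "Flowbaby.maxContextResults": 5,      // default: 3
  "Flowbaby.maxContextTokens": 16000,   // smaller budget can reduce latency
  "Flowbaby.searchTopK": 20,            // kept >= maxContextResults, capped at 100
  "Flowbaby.ranking.halfLifeDays": 14,  // relevance halves every 14 days
  "flowbaby.notifications.showIngestionSuccess": false,
  "Flowbaby.logLevel": "debug",
  "Flowbaby.debugLogging": true
}
```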

LLM Configuration

Configure your LLM provider via File → Preferences → Settings → Extensions → Flowbaby:

  • Flowbaby.llm.provider: LLM provider (e.g., openai, anthropic, azure, ollama, custom). Default: openai
  • Flowbaby.llm.model: LLM model name (e.g., gpt-4o-mini, claude-3-haiku-20240307, llama2). Default: gpt-4o-mini
  • Flowbaby.llm.endpoint: Custom LLM endpoint URL for Azure, Ollama, or self-hosted deployments; leave empty for provider defaults. Default: (empty)
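
For Azure or self-hosted deployments, Flowbaby.llm.endpoint carries the base URL. A minimal sketch; the resource name below is a hypothetical placeholder, not a real deployment:

```jsonc
{
  "Flowbaby.llm.provider": "azure",
  "Flowbaby.llm.model": "gpt-4o-mini",
  // Hypothetical Azure OpenAI resource URL; replace with your deployment's endpoint.
  "Flowbaby.llm.endpoint": "https://my-resource.openai.azure.com"
}
```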

Provider Examples:

  • OpenAI (default): Leave provider as openai, model as gpt-4o-mini
  • Anthropic Claude: Set provider to anthropic, model to claude-3-opus-20240229
  • Local Ollama: Set provider to ollama, model to your model name, endpoint to http://localhost:11434 (see the settings.json snippet below)
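
As a settings.json snippet, the local Ollama example above looks like this (llama2 stands in for whatever model you have pulled locally):

```jsonc
{
  "Flowbaby.llm.provider": "ollama",
  "Flowbaby.llm.model": "llama2",                    // any locally pulled model
  "Flowbaby.llm.endpoint": "http://localhost:11434"  // default Ollama port
}
```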

Additional Examples:

  • To disable memory temporarily, set Flowbaby.enabled to false in settings, as shown below
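
In settings.json, that is:

```jsonc
{
  // Set back to true (the default) to resume memory capture and retrieval.
  "Flowbaby.enabled": false
}
```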