Collabase Brain
Brain is the master switch for all AI features across the platform. When Brain is disabled, the AI command in the editor is hidden and no AI requests are sent to any provider. Toggle Collabase Brain on or off in Settings → AI. All other AI settings take effect only when Brain is enabled.
Supported AI providers
Collabase works with both local (self-hosted) and cloud providers. Choose the option that fits your privacy and infrastructure requirements.
Ollama
Run models locally on your server. No data leaves your infrastructure. Ideal for air-gapped or privacy-sensitive deployments.
OpenAI
Use GPT-4o, GPT-4o-mini, or any model available through the OpenAI API.
Google Gemini
Use Gemini 2.0 Flash and other Gemini models via the Google AI API.
Azure AI Foundry
Enterprise Azure OpenAI models deployed in your Azure subscription. Supports custom endpoint URLs.
Groq
Fast cloud inference. Groq offers a generous free tier and low-latency responses.
Select Disabled to turn off AI processing entirely without disabling Collabase Brain globally. This is useful when you want to keep Brain enabled in principle but pause the provider connection temporarily.
Configuring a provider
Navigate to Settings → AI and select your provider from the grid.
Ollama (local)
Ollama runs models on your own server. Install Ollama separately and pull the models you want to use before configuring it here.
| Field | Description | Default |
|---|---|---|
| Ollama Base URL | The URL where your Ollama server is running. | http://localhost:11434 |
| Model | The name of the Ollama model to use. | llama3.2 |
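Collabase talks to Ollama over its HTTP API, so sending a request yourself is a quick way to confirm that the Base URL and model name are correct. The sketch below only builds the request (the helper name is illustrative, not part of Collabase); actually sending it requires a running Ollama server with the model pulled.

```python
import json

def build_ollama_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for Ollama's /api/generate endpoint.

    Illustrative helper only; Collabase performs this internally once the
    Base URL and Model fields are saved.
    """
    url = f"{base_url.rstrip('/')}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return url, body

url, body = build_ollama_request("http://localhost:11434", "llama3.2", "Say hello")
print(url)  # http://localhost:11434/api/generate
```

If the request succeeds from the machine running Collabase, Test Connection should succeed with the same values.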
OpenAI
| Field | Description | Example |
|---|---|---|
| API Key | Your OpenAI API key. Stored encrypted, never displayed after saving. | sk-... |
| Model | The OpenAI model name to use. | gpt-4o-mini |
Google Gemini
| Field | Description | Example |
|---|---|---|
| API Key | Your Google AI Studio API key. | AIza... |
| Model | The Gemini model name to use. | gemini-2.0-flash |
Azure AI Foundry
| Field | Description | Example |
|---|---|---|
| Azure Endpoint URL | The base URL of your Azure AI Foundry deployment. | https://resource.services.ai.azure.com/openai |
| API Key | Your Azure API key. | ... |
| Model | The deployed model name as configured in Azure. | gpt-4o |
Groq
| Field | Description | Example |
|---|---|---|
| API Key | Your Groq API key from console.groq.com. | gsk_... |
| Model | The Groq model name. | llama-3.3-70b-versatile |
Saving and testing
After filling in your provider’s fields, click Test Connection to verify that the credentials and model configuration are correct. Then click Save to apply the settings.
The AI command in the editor
Once Brain is enabled and a provider is configured, users can trigger AI actions anywhere in the editor using the /ai slash command.
Type /ai on any line to open the AI action menu. Available actions include:
- Summarise — Compress the current page or selection into a concise summary.
- Generate — Write content from a prompt.
- Translate — Translate selected text to a target language.
- Improve writing — Refine grammar, clarity, and tone.
- Fix spelling & grammar — Correct errors in the selection.
- Make shorter / Make longer — Adjust the length of the selected text.
- Continue writing — Extend the text from the current cursor position.
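Each action ultimately resolves to a prompt sent to the configured provider. As a purely hypothetical illustration (these templates are not Collabase's actual internal prompts), an action-to-prompt mapping might look like:

```python
# Hypothetical prompt templates -- illustrative only, not Collabase's
# actual internal prompts.
AI_ACTIONS = {
    "summarise": "Summarise the following text concisely:\n\n{text}",
    "translate": "Translate the following text into {language}:\n\n{text}",
    "improve": "Improve the grammar, clarity, and tone of:\n\n{text}",
    "make_shorter": "Rewrite the following text more briefly:\n\n{text}",
    "continue": "Continue writing from where this text ends:\n\n{text}",
}

def render_action(action: str, text: str, **params) -> str:
    """Fill an action's template with the user's selection."""
    return AI_ACTIONS[action].format(text=text, **params)

print(render_action("translate", "Bonjour", language="English"))
```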
RAG and semantic search
Collabase supports Retrieval-Augmented Generation (RAG) to ground AI responses in your actual content. When RAG is enabled, page content is chunked into 512-token segments and indexed as vector embeddings. The Brain chat feature uses these embeddings to retrieve relevant context before generating a response. Enable RAG / Semantic Search in Settings → AI and choose an embedding source:
- Same as AI provider: Collabase reuses the provider and model configured above for generating embeddings. This is the simplest option and works well when your AI provider supports embedding models.
- Custom provider: Configure a separate provider and model dedicated to embeddings.
Embeddings are generated asynchronously when pages are saved. Newly created or updated pages become available for semantic search after the indexing job completes. Indexing typically takes a few seconds per page.
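The pipeline described above can be sketched as follows. Token counts are approximated with whitespace splitting and the embedding function is a stand-in (a real deployment gets embeddings from the configured provider); only the 512-token chunk size comes from the description above.

```python
from math import sqrt

CHUNK_TOKENS = 512  # chunk size described above

def chunk_page(text: str, size: int = CHUNK_TOKENS):
    """Split page text into fixed-size chunks.

    Whitespace tokens stand in for real tokenizer tokens here.
    """
    tokens = text.split()
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), size)]

def embed(chunk: str):
    """Stand-in embedding: a character histogram. A real deployment would
    call the configured provider's embedding model instead."""
    vec = [0.0] * 26
    for ch in chunk.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks, top_k: int = 3):
    """Rank indexed chunks by similarity to the query embedding."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

The top-ranked chunks returned by `retrieve` correspond to the context that Brain chat prepends before the provider generates a response.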
