Enable built-in AI, connect OpenAI, Claude, or self-hosted Ollama, and deliver insights across your tasks and knowledge base.
Works with OpenAI, Claude, or self-hosted Ollama

Toggle AI on in workspace settings, connect your provider, and use agents across tasks and knowledge. No separate setup, no extra platforms to learn.
Click your avatar, choose Settings, then select AI under the Workspace section.
Choose OpenAI (GPT-5 Mini), Claude (Claude 4), or a self-hosted Ollama server running local models.
Paste your API key or Ollama server URL, then use Test connection to unlock AI features.
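If you go the Ollama route, you can sanity-check the server URL before pasting it into settings. A minimal sketch, assuming a hypothetical host `ollama.internal` on your network and Ollama's standard `/api/tags` model-listing endpoint:

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default port


def ollama_base_url(host: str, scheme: str = "https", port: int = OLLAMA_PORT) -> str:
    """Build the base URL you would paste into the provider settings."""
    return f"{scheme}://{host}:{port}"


def list_models(base_url: str, timeout: float = 5.0) -> list[str]:
    """Hit /api/tags to confirm the server answers and see which models it has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


if __name__ == "__main__":
    url = ollama_base_url("ollama.internal")  # hypothetical host, not a Neuphlo default
    print(url)
    # list_models(url) returns the installed model names if the server is reachable
```

If `list_models` returns without an error, the same URL should pass Neuphlo's Test connection check.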
Bring your own OpenAI or Claude key, or keep everything on your network with Ollama. Test connections before rollout so your team can trust results from day one.
Turn on AI features from workspace settings
OpenAI GPT-5 Mini, Claude 4, or self-hosted Ollama
Keys stay in Neuphlo's backend; Ollama keeps data on your network
Test connections to unlock AI across your workspace
Use GPT-5 Mini through OpenAI with your own API key. Great for fast, general-purpose assistance.
Bring your Claude 4 key for thoughtful responses and longer context windows.
Run local models on your own Ollama server over HTTPS or VPN. Keep data on your network for maximum privacy.
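To make the local-first option concrete, here is a hedged sketch of sending one prompt to a self-hosted Ollama server via its standard `/api/generate` endpoint. The host and model name are illustrative assumptions, not Neuphlo internals:

```python
import json
import urllib.request


def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate; stream=False yields one JSON reply."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(base_url: str, model: str, prompt: str, timeout: float = 60.0) -> str:
    """Send a single prompt to a self-hosted Ollama server and return its text."""
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["response"]


# Usage (assumes a reachable server and a pulled model, e.g. llama3):
# generate("https://ollama.internal:11434", "llama3", "Summarize this week's tasks")
```

Because the request never leaves your network, this is the path to pick when data residency matters more than model variety.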
See what changed in your workspace with AI-powered summaries. Insights call out important events and suggest what to do next so teams stay aligned.


Ask natural-language questions and get concise answers sourced from your articles and help center. Keep your documentation current, and the agent stays reliable for everyday questions.
API keys are stored securely and never leave Neuphlo's backend. Use Ollama when you need traffic to stay on your own network, and test every connection before enabling AI across the workspace.
Secure storage
Keys never leave Neuphlo's backend and can be rotated anytime.
Local-first option
Point to an Ollama host over HTTPS or VPN to keep data in your environment.
Confidence checks
Use the Test connection button to validate credentials before rollout.
Turn on AI, pick your provider, and ship faster with summaries and answers that stay in context.