Navigate to: Admin Panel → Configuration → LLM to access the AI configuration page. From here,
you can select which providers and models you want to use in Onyx.
Cloud Providers
Cloud providers are either foundation model developers (e.g. OpenAI, Anthropic) or cloud platforms (e.g. AWS, Google, Azure) that make AI models accessible via hosted API endpoints.

Some AI models are only available directly from the model creator (e.g. OpenAI’s GPT models). Others are distributed through managed cloud services such as AWS Bedrock, Google Vertex AI, or Azure OpenAI. These platforms provide enterprise-grade access with integration into their broader cloud ecosystems. Onyx natively supports both types.
Oftentimes, managed cloud services apply the same Terms of Service and Data Processing Agreements to AI inference as to their other services.
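For illustration only (a sketch, not Onyx's internal code): whichever cloud provider you pick, hosted access boils down to sending authenticated requests to the provider's API endpoint. Here is a minimal example using the openai Python client against OpenAI's hosted endpoint; the model name and environment variable are placeholders for whatever your provider exposes.

```python
# Minimal sketch: calling a hosted AI model via its provider's API endpoint.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any chat model your provider hosts
    messages=[{"role": "user", "content": "Summarize Onyx in one sentence."}],
)
print(response.choices[0].message.content)
```

Managed platforms such as AWS Bedrock, Google Vertex AI, and Azure OpenAI wrap this same request/response pattern in their own SDKs and authentication schemes.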
Self-Hosted Providers
Self-hosted providers are tools and frameworks that let you run AI models on your own infrastructure,
rather than relying on an external cloud service.
Self-hosted models are open-weight and available for free download, such as Meta’s Llama models, DeepSeek, and Qwen.

Onyx supports any OpenAI-compatible gateway or inference server, such as Ollama.
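As a rough sketch of what OpenAI compatibility means in practice: Ollama exposes an OpenAI-compatible endpoint at `/v1`, so the same client from the example above can target a local server just by swapping the base URL. The model name below is an example and must already be pulled locally (e.g. via `ollama pull llama3.1`).

```python
# Sketch: pointing an OpenAI-compatible client at a local Ollama server.
# Ollama serves an OpenAI-compatible API at http://localhost:11434/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",  # required by the client library but ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3.1",  # example: any model pulled into Ollama locally
    messages=[{"role": "user", "content": "Summarize Onyx in one sentence."}],
)
print(response.choices[0].message.content)
```

Any other gateway or inference server that speaks the same API (for example, a vLLM or LiteLLM deployment) works the same way: point Onyx at its base URL and model name.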