Configure Onyx to use models served through Bifrost. Onyx connects to your Bifrost gateway by fetching the models exposed at its /v1/models endpoint.
Bifrost can expose models from multiple vendors behind a single endpoint.

Set Up Your Bifrost Gateway
Make sure your Bifrost deployment is reachable from the Onyx server. You will need your Bifrost API Base URL. If your Bifrost deployment requires authentication, also generate a Bifrost API Key.
Navigate to Language Models
Access the Admin Panel from your user profile icon, then navigate to Configuration → Language Models.
Configure Bifrost
Select Bifrost from the available providers. Give your provider a Display Name. Enter the API Base URL for your Bifrost gateway. If your gateway requires authentication, enter your API Key. Click Fetch Available Models to load the language models currently exposed by Bifrost.
Review the Imported Models
Onyx will import the model IDs returned by Bifrost and use the display names returned by the gateway when available. This is useful when your Bifrost gateway exposes models from multiple vendors, such as Anthropic, OpenAI, or others.
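To make the import step concrete, here is a sketch of extracting model IDs from an OpenAI-style /v1/models response. The payload below is illustrative sample data, not actual Bifrost output, and the vendor-prefixed IDs are assumptions:

```python
import json

# Illustrative OpenAI-style /v1/models payload; real Bifrost output may differ.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "anthropic/claude-3-5-sonnet", "object": "model"},
    {"id": "openai/gpt-4o", "object": "model"}
  ]
}
""")

# Onyx imports the "id" of each entry returned by the gateway.
model_ids = [model["id"] for model in sample_response["data"]]
print(model_ids)  # ['anthropic/claude-3-5-sonnet', 'openai/gpt-4o']
```

Because all vendors sit behind the one gateway, models from different providers arrive in a single list like this.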
Configure Default and Fast Models
The Default Model is selected automatically for new custom Agents and Chat sessions. Designating a Fast Model is optional.
The Fast Model is used behind the scenes for quick operations such as evaluating the message type, generating alternative queries (query expansion), and naming the chat session.
Choose Visible Models
In the Advanced Options, you will see a list of all models available from this provider.
You may choose which models are visible to your users in Onyx. Setting visible models is useful when a provider publishes multiple models, or multiple versions of the same model.