Overview of the Generative AI functionality in Onyx
Onyx supports a wide range of LLM hosting services, as well as local and custom model setups, including:

- OpenAI
- Anthropic
- Azure OpenAI
- HuggingFace
- Replicate
- AWS Bedrock
- Cohere
- and many others

Note: Support for most of these LLMs is provided through the LiteLLM library, and they are configured accordingly (see the following sections for some examples).
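To give a feel for why one library can cover so many providers: LiteLLM-style interfaces accept the same OpenAI-style chat payload regardless of the backend. The sketch below is illustrative only; the model names and provider prefixes are assumptions, not Onyx configuration values.

```python
# Illustrative sketch: one chat-completion payload shape works across
# providers when routed through a LiteLLM-style interface.
# Model names below are examples, not required values.

def build_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat payload that LiteLLM-compatible
    backends accept, regardless of the underlying provider."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same payload shape works whether the model is "gpt-4o",
# "anthropic/claude-3-5-sonnet-20240620", or "azure/<your-deployment>".
req = build_request("gpt-4o", "Summarize the retrieved documents.")
```

With the `litellm` package installed and the relevant API key set, a call like `litellm.completion(**req)` would dispatch the request to the matching provider.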
The Large Language Models interpret the contents of the most relevant documents retrieved via Search, extract the useful knowledge from them, and generate the AI Answer.
Our default recommendation is to use gpt-4o from OpenAI or Claude 3.5 Sonnet from Anthropic. These are among the most powerful and highest quality models available. Azure OpenAI, Claude through Bedrock, or self-hosted Llama 3.1 70B / 405B are also highly recommended.
To set up various LLMs, head to the LLM page on the Admin Panel. A fun thing about Onyx is that you can set up multiple LLM providers at the same time! This allows you to use different models for different assistants and play to each LLM’s strengths.
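As a way to picture the multi-provider setup, per-assistant routing can be thought of as a simple mapping from assistant to model. The assistant names and model identifiers below are hypothetical examples, not Onyx internals.

```python
# Illustrative sketch: routing different assistants to different models
# when multiple LLM providers are configured at once.
# All names here are examples, not Onyx configuration keys.
ASSISTANT_MODELS = {
    "code-helper": "gpt-4o",                                        # example: strong at code
    "long-doc-summarizer": "anthropic/claude-3-5-sonnet-20240620",  # example: long context
    "default": "gpt-4o",
}

def model_for(assistant: str) -> str:
    """Return the configured model for an assistant, falling back
    to the default model for unknown assistants."""
    return ASSISTANT_MODELS.get(assistant, ASSISTANT_MODELS["default"])
```

In Onyx itself this mapping is managed through the Admin Panel rather than in code; the sketch only shows the idea of playing to each model's strengths.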
See the next sections for some examples on how to configure different providers.
As always, don’t hesitate to reach out to the Onyx team if you have any questions or issues!