Gen AI Configs
Custom Model Server
Configure Onyx to use a Custom Model Server via requests
Onyx can also make requests to an arbitrary model server via REST. Optionally, an access token can be passed in. To customize the request format and the handling of the response, it may be necessary to update and rebuild the Onyx containers.
Extending Onyx to be compatible with your custom model server
There is a very minimal interface to implement, which can support any arbitrary LLM model server. Simply update the code here and rebuild.
The default implementation is compatible with the blog demo shown below.
Onyx with self-hosted Llama-2-13B-chat-GGML using a custom FastAPI server
- See the Medium blog post.
- This demo uses Google Colab to access a free GPU, but this setup is not suitable for long-term deployments.