How to deploy Onyx on your local machine
The most common source of issues is under-resourcing. Before beginning, check the system requirements here.
Note: This is just one way to run Onyx. Onyx can also be run on Kubernetes; Kubernetes manifests and Helm charts are provided in the deployment directory.
(Optional) configure Onyx
Bring up your Docker engine and run:
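A minimal sketch, assuming the docker-compose.dev.yml file and the onyx-stack project name used in the Onyx repository's deployment/docker_compose directory:

```bash
# Run from onyx/deployment/docker_compose (assumed checkout location).
# Pull prebuilt images from DockerHub and start everything in the background:
docker compose -f docker-compose.dev.yml -p onyx-stack up -d --pull always --force-recreate

# Alternatively, build the images from source instead of pulling them:
docker compose -f docker-compose.dev.yml -p onyx-stack up -d --build --force-recreate
```

Once the containers are up, the web UI is typically reachable at http://localhost:3000, though the exposed port depends on your compose file.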
Startup of the api_server may take some time. If you see "This site can't be reached" in your browser despite all containers being up and running, check the api_server logs and make sure you see "Application startup complete".

If you see "Killed" in the logs, you may need to increase the amount of memory given to Docker. For recommendations, check the system requirements here.

These commands are also used to redeploy if any .env variables are updated.
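As a sketch of that redeploy flow (same assumed file and project names as above):

```bash
# After changing values in .env, recreate the containers so the new variables take effect:
docker compose -f docker-compose.dev.yml -p onyx-stack up -d --force-recreate
```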
Note: On the initial visit, Onyx will prompt for a GenAI API key.
For example, you can get an OpenAI API key at: https://platform.openai.com/account/api-keys
Onyx relies on Generative AI models to provide parts of its functionality. You can choose any LLM provider from the admin panel or even self-host a local LLM for a truly airgapped deployment.
To shut down the deployment, run the matching docker compose down command; add -v at the end to additionally delete the volumes (containing users, indexed documents, etc.).
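A minimal teardown sketch, again assuming the file and project names above:

```bash
# Stop and remove the Onyx containers:
docker compose -f docker-compose.dev.yml -p onyx-stack down

# Also delete the named volumes (users, indexed documents, etc.):
docker compose -f docker-compose.dev.yml -p onyx-stack down -v
```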