Quickstart
How to deploy Onyx on your local machine
Requirements
- Git
- Docker with compose (docker version >= 1.13.0)
Setup
The most common source of issues is under-resourcing. Before beginning, check the system requirements here.
Note: This is just one way to run Onyx. Onyx can also be run on Kubernetes; Kubernetes manifests and Helm charts are provided in the deployment directory.
- Clone the Onyx repo:
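  For example (assuming the repository is hosted at onyx-dot-app/onyx on GitHub):
  ```bash
  git clone https://github.com/onyx-dot-app/onyx.git
  ```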
- Navigate to onyx/deployment/docker_compose
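  From the root of the cloned repo, for example:
  ```bash
  cd onyx/deployment/docker_compose
  ```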
- (Optional) Configure Onyx.
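  If you want to override settings, Docker Compose picks up a .env file placed next to the compose file; the variable name below is a hypothetical placeholder rather than a documented Onyx setting:
  ```bash
  # onyx/deployment/docker_compose/.env
  # Hypothetical placeholder value; substitute real Onyx settings as needed.
  SOME_ONYX_SETTING=example-value
  ```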
- Bring up your Docker engine and run one of the following:
- To pull images from DockerHub and run Onyx:
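  A hedged example, assuming the compose file in this directory is named docker-compose.dev.yml and using onyx-stack as the Compose project name:
  ```bash
  docker compose -f docker-compose.dev.yml -p onyx-stack up -d --pull always --force-recreate
  ```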
- Alternatively, to build the containers from source and start Onyx (this may take 15+ minutes depending on your internet speed), run:
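  Under the same file- and project-name assumptions as above:
  ```bash
  docker compose -f docker-compose.dev.yml -p onyx-stack up -d --build --force-recreate
  ```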
- Additionally, once the images have been pulled / built, the initial startup of the `api_server` may take some time. If you see "This site can't be reached" in your browser despite all containers being up and running, check the `api_server` logs and make sure you see `Application startup complete`.
  - If you see `Killed` in the logs, you may need to increase the amount of memory given to Docker. For recommendations, check the system requirements here.
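  One way to tail those logs (same file- and project-name assumptions as above):
  ```bash
  docker compose -f docker-compose.dev.yml -p onyx-stack logs -f api_server
  ```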
These commands are also used to redeploy if any .env variables are updated.
- Onyx will now be running on http://localhost:3000.
Generative AI API Key
Note: On the initial visit, Onyx will prompt for a GenAI API key.
For example, you can get an OpenAI API key at: https://platform.openai.com/account/api-keys
Onyx relies on Generative AI models to provide parts of its functionality. You can choose any LLM provider from the admin panel or even self-host a local LLM for a truly airgapped deployment.
Shutting Down
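To shut down Onyx, run the following from onyx/deployment/docker_compose (assuming the same compose file and project name as in the setup commands above):
```bash
docker compose -f docker-compose.dev.yml -p onyx-stack down
```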
Add `-v` at the end to additionally delete the volumes (containing users, indexed documents, etc.).