Guide
Configure Onyx to use models served by LM Studio. Onyx has a built-in integration with LM Studio that auto-discovers your loaded models, including their capabilities (vision, reasoning) and context length.

Set Up LM Studio and Load Your Models
Download LM Studio from lmstudio.ai and load the models you want to use.

Start the LM Studio local server. LM Studio runs on port 1234 by default. If LM Studio is running on a different machine than Onyx, make sure the server is accessible from the Onyx host (e.g., http://<lm-studio-host>:1234).

Navigate to AI Model Configuration Page
From your user profile icon, open the Admin Panel, then navigate to Admin Panel → LLM.
Configure LM Studio
Select LM Studio from the available providers. Give your provider a Display Name, and set the API Base URL to your LM Studio server address (e.g., http://localhost:1234).

Onyx will automatically connect and discover your loaded models.
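Model discovery works over LM Studio's OpenAI-compatible API. If you want to see for yourself which models a given base URL exposes (for example, to debug connectivity before configuring Onyx), you can query the `/v1/models` endpoint directly. This is an illustrative sketch, not part of Onyx; the function name and error handling are this example's own:

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

def list_lm_studio_models(base_url="http://localhost:1234"):
    """Return the IDs of models loaded in LM Studio, or [] if unreachable.

    LM Studio serves an OpenAI-compatible API, so /v1/models lists the
    currently loaded models in the standard {"data": [{"id": ...}, ...]} shape.
    """
    try:
        with urlopen(f"{base_url}/v1/models", timeout=5) as resp:
            payload = json.load(resp)
    except (URLError, OSError, ValueError):
        # Server not running, host unreachable, or non-JSON response.
        return []
    return [model.get("id", "") for model in payload.get("data", [])]
```

If this returns an empty list from the Onyx host, check that the LM Studio server is started and that its port is reachable over the network.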
Configure Default and Fast Models
The Default Model is selected automatically for new custom Agents and Chat sessions.

Designating a Fast Model is optional. The Fast Model is used behind the scenes for quick operations such as evaluating the type of a message, generating alternative queries (query expansion), and naming the chat session.
Choose Visible Models
In the Advanced Options, you will see a list of all models available from this provider.
You may choose which models are visible to your users in Onyx. Limiting the visible models is useful when a provider publishes many models or multiple versions of the same model.