Configure ASSIST AI to use models served by LM Studio. ASSIST AI includes a built-in integration with LM Studio that automatically discovers your loaded models, including their capabilities (such as vision and reasoning) and context length.
  1. Set Up LM Studio and Load Your Models
     Download LM Studio from lmstudio.ai and load the models you wish to use. Start the LM Studio local server using the following command:
    lms server start --port 1234
    
    For best results, use a model with strong instruction-following and tool-use capabilities (e.g., Qwen 3.5).
  2. Navigate to the AI Model Configuration Page
     Access the Admin Panel via your user profile icon → Admin Panel → LLM.
  3. Configure LM Studio
    • Select LM Studio from the list of available providers. Assign a Display Name to your provider.
    • Set the API Base URL to your LM Studio server address (e.g., http://localhost:1234). ASSIST AI will automatically connect and discover your loaded models.
  4. Configure Default and Fast Models
     The Default Model is automatically selected for new custom Agents and Chat sessions. Designating a Fast Model is optional; this model is used behind the scenes for quick operations such as evaluating message types, generating query variations (query expansion), and naming chat sessions. If you choose a Fast Model, ensure it is a relatively quick and cost-effective option such as GPT-4.1-mini or Claude 3.7 Sonnet.
  5. Choose Visible Models
     Under Advanced Options, you will find a full list of models available from this provider. You can select which models are visible to your users within ASSIST AI. This is particularly useful when a provider offers multiple models and versions of the same model.
  6. Designate Provider Access
     Finally, you can choose whether the provider is publicly accessible to all users in ASSIST AI. If set to private, the provider's models will only be available to Admins and the users you explicitly assign the provider to.
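Before wiring the provider into ASSIST AI (steps 3 and on), you can confirm that the server from step 1 is reachable and see which models it will expose. The sketch below is a minimal check, assuming LM Studio's OpenAI-compatible API is served at http://localhost:1234 (the default port used above) and requires no API key for a local server; the function names are illustrative, not part of either product.

```python
import json
from urllib.request import urlopen


def models_endpoint(base_url: str) -> str:
    # LM Studio exposes an OpenAI-compatible API; GET /v1/models
    # lists the models currently loaded in the server.
    return base_url.rstrip("/") + "/v1/models"


def list_loaded_models(base_url: str = "http://localhost:1234") -> list[str]:
    # Fetch the model list and return just the model IDs.
    with urlopen(models_endpoint(base_url)) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload.get("data", [])]
```

If the endpoint responds, the model IDs returned by `list_loaded_models()` correspond to the models ASSIST AI discovers once you enter the same address as the API Base URL in step 3.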