If you’d like to use an AI provider that isn’t natively supported by ASSIST AI, you can set up a custom inference provider.
Note that your custom provider must expose OpenAI-compatible API endpoints.
  1. Set Up Your Custom Inference Provider Identify your provider’s API base URL; it should follow a format similar to https://yourprovider.com/v1.
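Before entering the base URL, it can help to confirm how the standard OpenAI-compatible endpoint paths hang off it. A minimal sketch of that convention (the base URL below is a placeholder, not a real provider):

```python
# Sketch: derive the standard OpenAI-compatible endpoints from a base URL.
# The base URL below is a placeholder for illustration only.
BASE_URL = "https://yourprovider.com/v1"

def endpoint(base_url: str, path: str) -> str:
    """Join a base URL and an endpoint path, tolerating a trailing slash."""
    return base_url.rstrip("/") + path

# An OpenAI-compatible provider should answer on paths like these:
models_url = endpoint(BASE_URL, "/models")           # lists available models
chat_url = endpoint(BASE_URL, "/chat/completions")   # chat completions

print(models_url)  # https://yourprovider.com/v1/models
print(chat_url)    # https://yourprovider.com/v1/chat/completions
```

A quick way to verify compatibility is to request the models URL with your API key (for example with curl and an `Authorization: Bearer` header) and check that it returns a model list.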
  2. Go to the AI Model Configuration Page Click your profile icon to open the Admin Panel, then navigate to Admin Panel → LLM.
  3. Configure the Custom Inference Provider
    • Choose Add Custom LLM Provider from the list of available providers. Assign a Display Name to your provider and enter your model’s Provider Name.
    • Keep in mind that the Provider Name must correspond to a provider listed in LiteLLM’s supported providers. For reference, this example uses the provider name vertex_ai.
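LiteLLM identifies models with a `provider/model` string, which is why the Provider Name must match a LiteLLM-supported provider. A sketch of that naming convention, assuming `vertex_ai` as in the example above (the provider set and model name below are a small illustrative subset, not the full LiteLLM list):

```python
# Sketch of LiteLLM's "provider/model" naming convention.
# This is a small illustrative subset, NOT the full list of LiteLLM providers.
KNOWN_PROVIDERS = {"openai", "anthropic", "vertex_ai", "azure", "bedrock"}

def litellm_model_string(provider: str, model: str) -> str:
    """Compose the model identifier LiteLLM expects, e.g. 'vertex_ai/gemini-pro'."""
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"Unknown provider name: {provider!r}")
    return f"{provider}/{model}"

print(litellm_model_string("vertex_ai", "gemini-pro"))  # vertex_ai/gemini-pro
```

If the Provider Name you enter doesn’t match a LiteLLM provider, requests routed through it will fail, so check the name against LiteLLM’s provider list before saving.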
  4. Configure Optional Fields and Models Enter the provider’s Base URL and fill in any other optional fields as needed. Then, under the Model Configurations section, add each of the models you wish to use with your provider.
  5. Set Default and Fast Models
    • The Default Model is automatically applied to new custom Agents and Chat sessions. You may also optionally designate a Fast Model, which handles background tasks such as message classification, query expansion, and chat session naming.
    • If you assign a Fast Model, choose one that is efficient and cost-effective, such as GPT-4.1-mini or Claude 3.7 Sonnet.
  6. Define Provider Access
    • Finally, decide whether this provider should be accessible to all users in ASSIST AI or restricted to specific groups.
    • If set to private, only Admins and the users or groups you explicitly assign will be able to access the provider’s models.
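Once the provider is configured, the setup above can be sanity-checked with a single chat request against its OpenAI-compatible endpoint. A minimal sketch using only the Python standard library; the base URL, API key, and model name are placeholders you would replace with the values from the steps above:

```python
import json
import urllib.request

# Placeholders -- substitute your provider's real values.
BASE_URL = "https://yourprovider.com/v1"
API_KEY = "YOUR_API_KEY"
MODEL = "vertex_ai/gemini-pro"  # hypothetical model added in step 4

# Standard OpenAI-compatible chat-completions payload.
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Reply with the word: pong"}],
}

request = urllib.request.Request(
    BASE_URL.rstrip("/") + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment once real credentials are in place to actually send the request:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(request.full_url)  # https://yourprovider.com/v1/chat/completions
```

A well-formed reply confirms the base URL, credentials, and model name all line up before you roll the provider out to users.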