Use supported cloud-based model providers

This page describes how to configure and use the integration with supported cloud-based AI model providers.

To configure and use a cloud-hosted model provider, follow these steps:

  1. Go to the Provider settings tab.

  2. Select a cloud-hosted model provider from the options in the Model provider dropdown.

  3. Enter the API key for the selected provider in the API key field. Refer to the selected provider’s documentation for instructions on obtaining and managing your API key.

To avoid the usage fees associated with cloud-based model providers’ APIs, consider using a local model instead. A local model lets you run unlimited queries without relying on any external API.

  4. In the Model dropdown field, type or select the exact model identifier supported by the chosen provider (for example, gpt-4 for OpenAI or gemini-3 for Google AI Gemini).

  5. In the Request parameters field, enter a JSON object with valid model parameters such as temperature, stop, and max_tokens. For details, see the selected provider’s documentation.

  6. In the Role dropdown, select or type one of the supported roles below. This is a free-type field, so the role name must exactly match one recognised by the system:

| Role | Description |
|------|-------------|
| Assistant | Represents the model’s responses to user messages; responsible for generating replies. |
| System | Specifies how the model should respond. Example: “You are a helpful assistant.” |
| User | Represents user queries and provides context for the model’s response. |
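As a sketch, the Request parameters field might contain a JSON object like the following. The specific parameter names and value ranges shown here are assumptions; which parameters are accepted, and their valid values, depends on the selected provider, so check its documentation before use:

```json
{
  "temperature": 0.7,
  "max_tokens": 1024,
  "stop": ["\n\n"]
}
```

Unrecognised or out-of-range parameters are typically rejected by the provider’s API, so start from the provider’s documented defaults and adjust one value at a time.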
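The roles in the table above correspond to the message roles used by most chat-style model APIs. As an illustration only (the field names follow the common OpenAI-style message schema and are an assumption, not a description of this integration’s wire format), a short exchange using all three roles might look like:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Summarise the configuration steps on this page." },
    { "role": "assistant", "content": "Select a provider, enter your API key, then set the model, request parameters, and role." }
  ]
}
```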
