Use supported cloud-based model providers
This page describes how to configure and use the integration with supported cloud-based AI model providers.
To use the APIs of supported cloud-based model providers, you must have an active account with sufficient credits or a subscription for API usage with the chosen provider. Note that account creation and credit management are handled externally and are outside the scope of this guide.
Using this feature involves transmitting data to the selected provider’s servers. We strongly recommend reviewing each provider’s privacy policy to understand its data handling and retention practices. If you handle sensitive information or require stronger data privacy guarantees, consider using a local model instead, so that no data is transmitted externally.
To configure and use a cloud-hosted model provider, follow these steps:
1. Go to the Provider settings tab.
2. Select a cloud-hosted model provider from the options in the Model provider dropdown.
3. Enter the API key for the selected provider in the API key field. Refer to the selected provider’s documentation for instructions on obtaining and managing your API key.
4. In the Model dropdown field, type or select the exact model identifier supported by the chosen provider (e.g., gpt-4 for OpenAI or gemini-3 for Google AI Gemini).
5. Optionally, enter a JSON object with valid model parameters, such as temperature, stop, and max_tokens, in the Request parameters field. For details, see the selected provider’s documentation.
6. Use the Role dropdown to select or type one of the supported roles below. The field accepts free-typed input, but it must exactly match a role name recognised by the system:
Assistant: Represents the model’s responses based on user messages; responsible for generating replies.
System: Specifies how the model should respond. Example: “You are a helpful assistant.”
User: Represents user queries and provides context for the model’s response.
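To illustrate how these settings fit together, the sketch below assembles a request payload combining a model identifier, the Request parameters JSON object, and role-tagged messages. The payload shape (field names such as model and messages, and the exact parameter names) is an assumption modelled on common chat-completion APIs; the schema actually sent depends on the selected provider, so consult its documentation.

```python
import json

# Illustrative only: field names below are assumptions modelled on
# common chat-completion APIs, not a documented schema of this integration.
model_id = "gpt-4"  # the exact identifier entered in the Model field

# Parameters as they might be entered in the Request parameters field.
request_parameters = {
    "temperature": 0.7,   # sampling randomness
    "stop": ["\n\n"],     # stop sequences
    "max_tokens": 256,    # response length limit
}

# Messages tagged with the supported roles: System, User (Assistant
# messages are produced by the model in response).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarise this page in one sentence."},
]

# Combine everything into a single request payload.
payload = {"model": model_id, "messages": messages, **request_parameters}
print(json.dumps(payload, indent=2))
```

Note that the Request parameters field must contain valid JSON: keys and string values use double quotes, and trailing commas are not allowed.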