Use the Azure OpenAI Service API feature

This page outlines the steps involved in using the Azure OpenAI Service integration.

To use this feature, an active Azure OpenAI Service environment with adequate credits for API consumption is required. Please be aware that the setup of the Azure environment and the credit assignment procedure are beyond the scope of this guide.

For more information on pricing and how to acquire credits, please refer to Azure OpenAI Service pricing.

Configuration

  1. Go to the Azure/OpenAI API tab.

  2. Enter the URL of your Azure OpenAI Service deployment endpoint into the API endpoint field, e.g., https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 (see the sketch after this list for how this URL is composed).

  3. Enter your Azure OpenAI Service API key into the API key field.
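
For reference, the endpoint URL entered in step 2 is composed of your resource name, your deployment name, and a REST API version. A minimal sketch of how the parts combine, using the same placeholder values as the example above:

```python
# Compose an Azure OpenAI Service endpoint URL from its parts.
# YOUR_RESOURCE_NAME and YOUR_DEPLOYMENT_NAME are placeholders, not real values.
RESOURCE_NAME = "YOUR_RESOURCE_NAME"      # name of the Azure OpenAI resource
DEPLOYMENT_NAME = "YOUR_DEPLOYMENT_NAME"  # name of the model deployment
API_VERSION = "2023-05-15"                # REST API version

endpoint = (
    f"https://{RESOURCE_NAME}.openai.azure.com"
    f"/openai/deployments/{DEPLOYMENT_NAME}"
    f"/completions?api-version={API_VERSION}"
)
print(endpoint)
```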

Keep your API key confidential and do not share it with anyone. The API key is linked to your account and can affect your billing. This key is necessary for communicating with the Azure OpenAI Service API.
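
One way to keep the key out of scripts and shell history when experimenting outside the extension is to read it from an environment variable. A minimal sketch, assuming an illustrative variable name AZURE_OPENAI_API_KEY (not a name BurpGPT requires); note that Azure OpenAI expects the key in an api-key request header, unlike the Authorization: Bearer header used by the public OpenAI API:

```python
import os

# Read the key from an environment variable instead of hardcoding it.
# AZURE_OPENAI_API_KEY is an illustrative name, not one required by BurpGPT.
api_key = os.environ.get("AZURE_OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set AZURE_OPENAI_API_KEY before running this script.")

# Azure OpenAI authenticates with an `api-key` header.
headers = {"api-key": api_key, "Content-Type": "application/json"}
```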

If you want to avoid subscription fees for the Azure OpenAI Service API, consider using the Local LLM feature. With this feature, you can issue unlimited queries without relying on the Azure OpenAI Service API.

  4. To make the best use of Azure OpenAI Service's models, set a Max prompt length. This parameter controls how much information you can provide to the model in a single query.

    • Max prompt length: determines the maximum size of your prompt once the placeholders have been replaced (illustrated in the sketch after this list).

  5. Choose a Role from the following options:

    • Assistant: This role represents the model’s responses to the user’s messages. The assistant role carries the actual answer generated for the user’s query.

    • System: This role is used to specify the way the model answers questions. For example, if the model is designed to be a helpful assistant, the system role would be "You are a helpful assistant".

    • User: This role carries the queries made by the user and provides the model with the context it needs to generate a response.
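
To make the two settings above concrete, the sketch below expands a prompt template, enforces the Max prompt length after placeholder substitution, and wraps the result in the role structure just described. The placeholder names ({REQUEST}, {RESPONSE}) and the character-based truncation are illustrative assumptions; the extension's actual implementation may differ, for instance by counting tokens rather than characters:

```python
MAX_PROMPT_LENGTH = 1024  # illustrative value for the Max prompt length setting

def build_messages(template: str, request: str, response: str) -> list[dict]:
    # Substitute the captured HTTP traffic into the template. The placeholder
    # names used here are assumptions for illustration only.
    prompt = template.replace("{REQUEST}", request).replace("{RESPONSE}", response)

    # Enforce the maximum prompt size after the placeholders are replaced.
    prompt = prompt[:MAX_PROMPT_LENGTH]

    return [
        # System: specifies how the model should answer.
        {"role": "system", "content": "You are a helpful assistant."},
        # User: the actual query, built from the template.
        {"role": "user", "content": prompt},
        # Assistant messages would hold the model's earlier replies in a
        # multi-turn exchange; none are needed for a single query.
    ]

messages = build_messages(
    "Analyse this HTTP exchange for security issues:\n{REQUEST}\n{RESPONSE}",
    "GET / HTTP/1.1\nHost: example.com",
    "HTTP/1.1 200 OK",
)
```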

Test model

After configuring the model settings, you can test the selected model by clicking on the Test button. This will send a test query to the Azure OpenAI Service API.

The results of your query will be displayed in a dialog box.
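
To reproduce the test outside the extension, an equivalent raw request looks roughly like this. It reuses the endpoint and environment variable from the sketches above and is only an approximation of the payload BurpGPT sends:

```python
import json
import os
import urllib.request

endpoint = (
    "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/"
    "YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15"
)
payload = {"prompt": "Say hello.", "max_tokens": 16}

req = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "api-key": os.environ["AZURE_OPENAI_API_KEY"],
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# The completions API returns the generated text under choices[0].text.
print(body["choices"][0]["text"])
```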

Analyse HTTP traffic

Finally, to scan your HTTP traffic against your model and prompt, you can either:

  • Instruct BurpGPT to use the selected Azure OpenAI Service model when performing passive scans with Burp Suite by clicking on the Passive Scan: OpenAI's API button.

  • Use the custom context menu actions to send relevant requests for analysis, by right-clicking in the request/response view and selecting Extensions -> BurpGPT Pro -> Send to OpenAI's API.

View GPT-generated insights

A new Information-level severity issue, named GPT-generated insights, will appear under Target -> Site map and Dashboard -> Issue Activity.

This issue will provide detailed information about the query made to the selected model, together with the model's response.
