Use the OpenAI API feature

This page outlines the steps involved in using the OpenAI integration.

To use this feature, you need an active OpenAI account with sufficient credits for API consumption. Creating an OpenAI account and assigning credits to it are beyond the scope of this guide.

For more information on pricing and how to acquire credits, please refer to OpenAI Pricing.

It is important to note that using this feature involves sharing data with OpenAI. Therefore, we highly recommend that you read OpenAI's privacy policy and ensure that you are comfortable with it. If you are working with clients, sensitive projects, or are concerned about data privacy, we recommend upgrading to BurpGPT Pro and using its Local LLM feature instead to keep your data private and secure.


  1. Go to the Azure/OpenAI API tab.

  2. Enter your OpenAI API key into the API key field. You can obtain a key from your OpenAI account dashboard.

Keep your API key confidential and do not share it with anyone. The API key is linked to your account and can affect your billing. This key is necessary for communicating with the OpenAI API.
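For context on why the key matters, OpenAI API requests authenticate with the key as a bearer token in the `Authorization` header. The sketch below illustrates this convention only; the function name and the `OPENAI_API_KEY` environment variable are common practice, not something BurpGPT itself exposes or requires:

```python
import os

def build_auth_headers(api_key=None):
    """Build the HTTP headers an OpenAI API request expects.

    Reads the key from the OPENAI_API_KEY environment variable by
    default -- a widely used convention, assumed here for illustration.
    """
    key = api_key or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise ValueError("No API key provided")
    return {
        "Authorization": f"Bearer {key}",  # bearer-token auth, per the OpenAI API
        "Content-Type": "application/json",
    }

headers = build_auth_headers("sk-example-key")
print(headers["Authorization"])  # Bearer sk-example-key
```

Because the header embeds the key verbatim, anyone who sees your traffic configuration or logs containing it can bill your account, which is why the key must stay confidential.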

If you want to avoid subscription fees for the OpenAI API, consider using the Local LLM feature. With this feature, you can issue unlimited queries without relying on the OpenAI API.

  3. Select one of the pre-built models from the Model dropdown field. The number of data points used to train the model is displayed under the Model size field.

  4. Set a Max prompt length to balance cost against the amount of information you can provide to the model.

    • Max prompt length: determines the maximum size of your prompt once the placeholders have been replaced.

  5. Choose a Role from the following options:

    • Assistant: represents the model's responses to the user messages. The assistant role generates the actual response to the user's query.

    • System: specifies how the model should answer questions. For example, if the model is designed to be a helpful assistant, the system message would be "You are a helpful assistant".

    • User: equivalent to the queries made by the user. It provides the model with the necessary context to generate a response.
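To make the roles and the prompt-length setting concrete, a chat request to the OpenAI API carries them as a list of role-tagged messages. The sketch below assembles such a list and truncates the user prompt, loosely mirroring the Max prompt length setting; the function and parameter names are illustrative, not BurpGPT internals, and the length is counted in characters here for simplicity (the real API limit is in tokens):

```python
def build_messages(system_prompt, user_prompt, max_prompt_length):
    """Assemble a chat-style message list, truncating the user prompt.

    Roles follow the OpenAI chat format: 'system' steers how the model
    answers, 'user' carries the query, and the model replies with an
    'assistant' message.
    """
    truncated = user_prompt[:max_prompt_length]  # crude character-based cap
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": truncated},
    ]

msgs = build_messages(
    "You are a helpful assistant.",
    "Analyse this HTTP response for potential vulnerabilities.",
    max_prompt_length=30,
)
print(msgs[0]["role"], msgs[1]["role"])  # system user
```

A larger max prompt length lets more of the request/response reach the model, at the cost of more tokens billed per query.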

Test model

After configuring the model settings, you can test the selected model by clicking on the Test button. This will send a test query to the OpenAI API.
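Conceptually, the Test button issues a small chat-completion request. The sketch below only assembles the JSON body such a request would carry; the model name and prompt are placeholders, and actually sending it would require your API key and a POST to OpenAI's chat completions endpoint:

```python
import json

def build_test_payload(model, prompt):
    """Build the JSON body for a minimal chat-completion test query."""
    return json.dumps({
        "model": model,  # e.g. a model chosen from the Model dropdown
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_test_payload("gpt-3.5-turbo", "Reply with OK if you can read this.")
print(body)
```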

The results of your query will be displayed in a dialog box, as follows:

Analyse HTTP traffic

Finally, to scan your HTTP traffic against your model and prompt, you can either:

  • Instruct BurpGPT to use the selected OpenAI model when performing passive scans with Burp Suite, by clicking the Passive Scan: OpenAI's API button.

  • Use the custom context menu actions to send relevant requests for analysis, by right-clicking on a request/response and selecting Extensions -> BurpGPT Pro -> Send to OpenAI's API.

View GPT-generated insights

A new Information-level severity issue, named GPT-generated insights, will appear under Target -> Site map and Dashboard -> Issue Activity.

This issue will provide detailed information about the query made to the selected model and will also include the model response as illustrated in the following screenshot:
