Use the Prompt Library feature

This page outlines the features provided by the prompt library.

  1. Go to the Prompt library tab.

  2. From there, you can access the following options:

    • Add: Save a new prompt to your library.

    • Remove: Delete selected prompts from your library.

    • Export: Create an importable backup of all your saved prompts.

    • Import: Import previously exported prompts into your library.

    • Send to: Send a prompt directly to the OpenAI API or Local LLM tab.

Add a prompt to the library

To add a new entry to the prompt library, follow these steps:

  1. Click on the Add button.

  2. Populate the Author, Category and Prompt fields.

  3. Confirm by clicking on the Ok button.

The prompt library automatically filters out duplicate entries, so the same prompt is never stored twice.
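
The exact matching rule is not documented; as a minimal Python sketch, assuming two entries count as duplicates when their prompt text is identical:

# Illustration only: the extension's real duplicate check may differ
# (for instance, it might also compare the author or category fields).
def add_prompt(library: list, entry: dict) -> bool:
    """Add entry to library unless its prompt text is already present."""
    if any(existing["text"] == entry["text"] for existing in library):
        return False  # duplicate, skipped
    library.append(entry)
    return True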

Remove prompt(s) from the library

To remove one or multiple entries from the prompt library, follow these steps:

  1. Select the entries you want to remove from the table.

  2. Click on the Remove button.

Export the prompt library

To export the entire prompt library, follow these steps:

  1. Click on the Export button.

  2. Choose the destination and give the exported file a .json extension.

It's important to give the exported file a .json extension for it to be recognised as a JSON file.

With the built-in examples that populate the Prompt library tab by default, the exported JSON file should contain the following entries:

[
    {
        "id": "3ed5b678-c6fd-4a36-8365-1b3711e1ef7f",
        "author": "Alexandre Teyar",
        "category": "built-in_generic",
        "text": "Analyze the HTTP request and response below for potential security vulnerabilities, specifically focusing on OWASP top 10 vulnerabilities such as SQL injection, XSS, CSRF, and other common web application security threats.\n\nFormat your response as a bullet list with each point listing a vulnerability name and a brief description, in the format and exclude irrelevant information:\n\n- Vulnerability Name: Brief description of vulnerability\n\n=== Request ===\n{REQUEST}\n\n=== Response ===\n{RESPONSE}",
        "created": "2024-01-31T15:10:09.941425100Z"
    },
    {
        "id": "f0f0b1f7-e9ea-40aa-93c6-6360d27e372d",
        "author": "Alexandre Teyar",
        "category": "built-in_biometric",
        "text": "Analyse the HTTP request and response below for potential security vulnerabilities related to the biometric authentication process.\n\nFormat your response as a bullet list with each point listing a vulnerability name and a brief description, in the format and exclude irrelevant information:\n\n- Vulnerability Name: Brief description of vulnerability\n\n=== Request ===\n{REQUEST}\n\n=== Response ===\n{RESPONSE}",
        "created": "2024-01-31T15:10:09.941425100Z"
    },
    {
        "id": "b2f1e0e8-6f29-418f-a107-582f8d14a6c5",
        "author": "Alexandre Teyar",
        "category": "built-in_spa",
        "text": "Analyse the HTTP request and response below for potential security vulnerabilities specific to the <SPA_FRAMEWORK_NAME> framework.\n\nFormat your response as a bullet list with each point listing a vulnerability name and a brief description, in the format and exclude irrelevant information:\n\n- Vulnerability Name: Brief description of vulnerability\n\n=== Request ===\n{REQUEST}\n\n=== Response ===\n{RESPONSE}",
        "created": "2024-01-31T15:10:09.941425100Z"
    }
]
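
Each built-in prompt embeds the {REQUEST} and {RESPONSE} placeholders, which the extension presumably fills with the intercepted HTTP traffic before querying the model. As a rough Python illustration, assuming plain string substitution (not the extension's actual rendering code):

# Hypothetical helper showing how the placeholders could be expanded.
def render_prompt(template: str, request: str, response: str) -> str:
    return (template
            .replace("{REQUEST}", request)
            .replace("{RESPONSE}", response))

rendered = render_prompt(
    "=== Request ===\n{REQUEST}\n\n=== Response ===\n{RESPONSE}",
    "GET / HTTP/1.1\nHost: example.com",
    "HTTP/1.1 200 OK",
)
print(rendered)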

Import/Add entries to the prompt library

To import new entries into the prompt library, follow these steps:

  1. Click on the Import button.

  2. Choose a JSON file created in the Export the prompt library section (or one assembled by script, as sketched below).
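
Because the import format matches the export format shown above, an import file can also be assembled by script. A minimal Python sketch, assuming the fields from the exported example (id, author, category, text, created); the entry values below are hypothetical:

import json
import uuid
from datetime import datetime, timezone

# Hypothetical entry; the schema mirrors the export example above.
entry = {
    "id": str(uuid.uuid4()),
    "author": "Jane Doe",
    "category": "custom_headers",
    "text": ("Review the HTTP response below for missing security headers."
             "\n\n=== Response ===\n{RESPONSE}"),
    # Match the Z-suffixed UTC timestamps seen in exported files.
    "created": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
}

with open("prompts.json", "w", encoding="utf-8") as f:
    json.dump([entry], f, indent=4)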

Send a prompt to the OpenAI API or Local LLM tabs

To send a prompt directly to the OpenAI API or the Local LLM tab, follow these steps:

  1. Right-click on an entry from the table.

  2. Choose either the Send to OpenAI API or the Send to Local LLM option.
