# Changelog

## 2.0 (07/08/2025)

{% hint style="info" %}
This major update introduces expanded model support, a refreshed user interface, and significant usability improvements.
{% endhint %}

### **New Features**

* Multi-provider support (out-of-the-box):
  * `Anthropic`
  * `Google AI Gemini`
  * `Mistral AI`
* `OpenAI`
  * `Local / Custom`
* Adjustable timeout controls to better handle varying network conditions.
* Fine-grained control over `Request parameters` sent to providers.

### **UI/UX Improvements**

* Streamlined, modernised user interface for improved usability.
* Centralised `Provider settings` for easier configuration management.
* Simplified scan toggle with a clear `Scan: Off` / `Scan: On` switch that routes requests and responses based on the current provider settings.
* Replaced the contextual `Send to <LLM_PROVIDER>` options with a single `Scan with AI (results in Target → Site map)` option.
* Added in-app tooltips and contextual usage hints.

***

## 1.1 (13/02/2025)

### Bug Fixes

* General bug fixes and performance enhancements, with a focus on improved error handling.

### UI/UX Improvements

* Replaced the `About view` with a streamlined `About menu item` in the BurpGPT Pro menu, which opens a minimalist popup displaying the current version information.
* Added a `Report Bug` menu item to the BurpGPT Pro menu for easier bug reporting and better accessibility.
* Standardised layout margins for a more cohesive and unified feel.
* Refined the `Prompt library` view: the `Add` button is now disabled while its dialog is open, the `Remove` button is disabled when no rows are selected, and the `Add prompt` dialog has been enlarged for easier typing.

***

## 1.0.2 (26/01/2025)

### Bug Fixes

* General bug fixes and performance improvements.
* Fixed a bug introduced in version [#id-1.0-20-08-2024](#id-1.0-20-08-2024 "mention") where using `Send to <LLM_PROVIDERS>` could result in duplicate requests being issued.

### Feature Improvements

#### Local LLM

* Updated the required Python binary version from `3.10` to `3.12.6`.

### UI/UX Improvements

* Added a `BurpGPT Pro` menu item, featuring direct links to the documentation website and release notes for easier access.

***

## 1.0.1 (19/09/2024)

### Highlights

* Removed the spurious `WARNING -> Component type not supported: javax.swing.<COMPONENT>` message to avoid confusing users.

### Feature Improvements

* Added a Burp Suite version check that ensures the version required by BurpGPT Pro is installed, and prompts for an upgrade if needed.

### Bug Fixes

* General bug fixes and performance improvements.
* Fixed a bug introduced in version [#id-1.0-20-08-2024](#id-1.0-20-08-2024 "mention") where the query string was missing from the path of requests sent to LLM providers.

***

## 1.0 (20/08/2024)

### Highlights

* Requests to, and responses from, selected LLM providers now flow directly through Burp Suite, enhancing BurpGPT's integration.
* Requests to the Azure/OpenAI API are now sent using `HTTP/2`, significantly improving network performance.
* Requests and responses are now viewable directly in the `Logger` tab, providing enhanced visibility and control, along with improved troubleshooting capabilities.
* The performance and stability of the extension have been significantly improved, ensuring a smoother experience.
* A new task is added to the `Dashboard` tab when you select `Extension -> BurpGPT Pro -> Send to <LLM_PROVIDER>` from the Burp-wide context menu. This centralises all BurpGPT-related activities, **except those generated using the passive scan approach**, for easy viewing.

### Bug Fixes

* General bug fixes and performance improvements.
* Enhanced logging and exception management.

### Feature Improvements

#### Azure/OpenAI API and Local LLM

* The `model` and `role` drop-down boxes are now editable, allowing users to specify any model and role they wish to use.
* The `API endpoint` and `API key` textboxes now support copy and cut functions and display an indicator when caps lock is on.
* The `prompt` text area now wraps both lines and words.

#### Local LLM

* Enhanced processing speed by using a quantization library that supports `4-bit` and `8-bit` quantization, effectively reducing model sizes compared to full-precision versions. Note: Python dependencies have been updated and listed [here](https://docs.burpgpt.app/getting-started/installation#prerequisites).
* Reworked the request schema to align with the standard Instruct-model schema, enabling more efficient and accurate completions.
* `Roles` have been introduced to provide greater flexibility in controlling the completion behaviour.
* Introduced a `Max New Tokens` parameter to specify the desired length of the generated text (in tokens).

#### Placeholder reference and Prompt Library

* The table context menu now includes `Copy selected row(s) as JSON` and `Copy value from left-clicked cell` options.
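For illustration, `Copy selected row(s) as JSON` produces a JSON array with one object per selected row, keyed by column name. A hypothetical sketch of the output shape (column names follow the table structure described in `0.4.5`; the values and exact serialisation are illustrative, not the extension's actual output):

```python
import json

# One object per selected row, keyed by column name.
rows = [
    {
        "#": 1,
        "Author": "alice",
        "Created": "2024-08-20 10:00:00",
        "Category": "Sensitive data",
        "Prompt": "Analyse this response for sensitive data exposure.",
    },
]
print(json.dumps(rows, indent=2))
```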

#### Server

* Server outputs are now displayed using Burp Suite's `Raw Editor`, enabling improved search and integration with Burp Suite.
* A new `Export` button has been added to enable saving server logs to a file.

### UI/UX Improvements

* The navigation tabs are now larger, left-aligned, and feature an enhanced design.
* Icons have been enhanced and revamped.
* Prompt dialogs have been updated to better match the size of the content.
* Success and error colours have been updated to match the Burp Suite theme more closely.
* The `About` view layout has been redesigned to allow fluid content within the view.
* Standardised button labels for clarity.

***

## 0.4.8 (30/05/2024)

### Bug Fixes

* General bug fixes and performance improvements.

### Feature Improvements

#### Azure/OpenAI API

* Added support for the `gpt-4-turbo` and `gpt-4o` models.

### UI/UX Improvements

* Introduced a direct link to the documentation website within the `About` view via a button.

***

## 0.4.7 (01/04/2024)

### Bug Fixes

* General bug fixes and performance improvements.
* Resolved an issue affecting the `Request timeout (seconds)` slider that was introduced in version `0.4.6`.

### Feature Improvements

#### Azure/OpenAI API and Local LLM

* Added an `Anonymise report` checkbox option that conceals the `API endpoint` and `API key` in the generated GPT-insight issues for enhanced privacy.
* Modified the default `Request timeout (seconds)` setting from `10` to `30` seconds.
* Setting `Max prompt length` to `0` now disables prompt truncation entirely.
* Inserted a clear warning message within the `Issue background` section of generated issues, cautioning users not to rely solely on the information provided but to conduct manual validation as well.
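The `Max prompt length` behaviour described above can be sketched as follows (a hypothetical helper; the truncation unit and exact logic are internal to the extension):

```python
def apply_max_prompt_length(prompt: str, max_length: int) -> str:
    """Truncate the prompt to `max_length` characters; `0` disables truncation."""
    return prompt if max_length == 0 else prompt[:max_length]

# A value of 0 passes the prompt through untouched; any positive
# value caps the prompt at that length before it is sent.
```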

### UI/UX Improvements

* Following the introduction of new settings in version `0.4.6` and this release, the layout of the fields in the `Azure/OpenAI API` and `Local LLM` tabs has been reorganised for improved navigation and usability.
* Updated the `Send to` options in the Burp Suite contextual menu to ensure consistent wording throughout the extension.

***

## 0.4.6 (04/03/2024)

### Bug Fixes

* General bug fixes and performance improvements.

### Feature Improvements

#### Azure/OpenAI API and Local LLM

* Introduced a `Request timeout (seconds)` slider to provide precise control over API call timeouts. This enhancement aims to cater to varying network conditions.

#### Prompt Library

* Resolved an issue causing the `Author` column's data to be mistakenly relayed to selected views instead of the intended `Prompt` column data when using the `Send to` contextual menu feature.

### UI/UX Improvements

* Updated the user interface layout within the `Prompt` and `Placeholder` tabs to maximise the table space.
* Added tooltips to the `Browse` button in both the `Local LLM` and `Server` tabs to improve accessibility.
* Updated the `Send to` options in the `Prompt library` table's contextual menu to ensure consistent wording throughout the extension.
* Introduced the `Request timeout (seconds)` slider to the `Azure/OpenAI API` and `Local LLM` tabs, as described above.

***

## 0.4.5 (22/02/2024)

### Bug Fixes

* General bug fixes and performance improvements.
* Resolved issues associated with the extension's persistence store.

### Feature Improvements

#### Prompt Library

* Revamped the data structure in the library by incorporating additional columns:
  * **`#`:** ID of the prompt, used for persistence; programmatically generated.
  * **`Author`:** Creator of the prompt.
  * **`Created`:** Timestamp indicating when the prompt was initially created; programmatically generated.
* The default sorting for the prompt table is now based on the `Category` column.

### UI/UX Improvements

* Implemented a show/hide toggle for the `API endpoint` and `API key` fields so that sensitive values can be concealed when taking screenshots.
* Moved the `Browse` button for the `Model directory` field to the left, creating a more cohesive visual appearance.
* `Prompt Library`'s visual appearance has been updated to reflect the aforementioned changes.
* Implemented a tooltip for improved accessibility on the settings icon in the relevant views.
* In alignment with recent updates to Burp Suite, all tables within the extension now support modifying column visibility and sorting columns. Right-click anywhere on a column header, or click the three dots at the top right corner of each table, to open the table options menu.

***

## 0.4.4 (29/01/2024)

### Bug Fixes

* General bug fixes and performance improvements.

### Feature Improvements

#### Azure OpenAI Services

* Resolved integration issues by disabling SSL certificate checks on the endpoint and revising the format of the API calls.

#### Local LLM

* Revamped the feature by eliminating the Node middleware and opting for direct integration with a Flask server to manage API calls, resulting in a reduced JAR size and improved performance.
* Enhanced logging functionality now provides visual feedback on model downloads and various interactions with the Hugging Face Model Hub.
* Implemented a `Python path` field, allowing users to directly specify the path of the Python binary. This caters to users whose machines are subject to security policies restricting access to the system PATH.

### UI/UX Improvements

* Updated the tab name from `OpenAI API` to `Azure/OpenAI API` for a more accurate representation of `Azure` support.
* Unified the colour scheme by applying the Burp Orange colour exclusively to call-to-action (CTA) buttons.
* Enabled auto-scrolling in the server debug view to facilitate smooth monitoring of server activities.
* Revamped the `Server` view as part of the changes introduced in the `Local LLM` feature.

***

## 0.4.3 (16/01/2024)

### Bug Fixes

* General bug fixes and performance improvements.
* Enhanced the persistence of settings, ensuring improved compatibility and upgradability across various versions.

### Feature Improvements

#### OpenAI API

* Deprecated models have been removed, making way for the introduction of new models, including GPT-4. For further details, refer to <https://platform.openai.com/docs/deprecations/>.
* The default API endpoint has been updated from <https://api.openai.com/v1/completions> to <https://api.openai.com/v1/chat/completions>. As a result, the `max_tokens` parameter has been replaced with `role`.
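The endpoint change also changes the shape of the request body: the single `prompt` string and `max_tokens` of the legacy completions API give way to a `messages` list whose entries carry a `role`. A minimal sketch of the two payload shapes (illustrative values, not the extension's exact serialisation):

```python
# Legacy /v1/completions payload: a bare prompt plus `max_tokens`.
legacy_payload = {
    "model": "text-davinci-003",
    "prompt": "Analyse this HTTP response for vulnerabilities.",
    "max_tokens": 256,
}

# Current /v1/chat/completions payload: the prompt becomes a message
# with an explicit `role` ("system", "user", or "assistant").
chat_payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "Analyse this HTTP response for vulnerabilities."},
    ],
}
```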

#### Local LLM

* Enhanced the underlying scripts by upgrading dependencies and eliminating unused ones, resulting in a reduced JAR size.

***

## 0.4.2 (07/01/2024)

### Bug Fixes

* General bug fixes and performance improvements.

### Feature Improvements

* Removed the read-only attribute from the `API Endpoint` field in the OpenAI API view, enabling users to define arbitrary endpoints, facilitating compatibility with Azure OpenAI Service.
* Implemented a context menu for the table in the `Prompt Library` view, providing options to directly send prompts to the `OpenAI API` and `Local LLM` views.

### Logging Enhancements

* Incorporated the latest Montoya changes to refine logging procedures and initiate alerts for specific events.

### UI/UX Improvements

* Revamped the `OpenAI API` and `Local LLM` views, placing a `Settings` menu next to the title with a `Restore defaults` option.
* Adopted Font Awesome icons as the primary icon source.
* Interactive icons now brighten on hover.
* Removed the `Docking` button, as Burp Suite now inherently supports transitioning between suite tabs and floating tabs.

***

## 0.4.1 (05/06/2023)

### Bug Fixes

* General bug fixes and performance improvements.

***

## 0.4 (02/06/2023)

### Bug Fixes

* General bug fixes and performance improvements.

### Compatibility Enhancements

* Enhanced cross-platform compatibility, particularly for macOS.

### Settings Persistence

* Implemented settings persistence for the `OpenAI API` and `Local LLM` views.

### UI/UX Improvements

* Added a `Reset` button to the `OpenAI API` and `Local LLM` views, allowing users to easily restore the default values of the various fields.
* Unified the UI look and feel across the entire extension.

***

## 0.3 (25/05/2023)

### Bug Fixes

* Fixed a bug that prevented the display of license activation issues.
* General bug fixes and performance improvements.

### Compatibility Enhancements

* Enhanced cross-platform compatibility, particularly for macOS.

### Feature Improvements

* Incorporated pre-defined example prompts into the `Prompt Library` view.
* Improved logic for starting and stopping the local server to prevent lingering background processes upon unloading the extension.

### Model Compatibility

* Added compatibility for the following OpenAI models:
  * `text-davinci-002`
  * `text-davinci-003`
  * `text-ada-001`
  * `text-babbage-001`
  * `text-curie-001`

### Performance Improvements

* Implemented checks to ensure proper termination of the UI upon unloading the extension.

### Settings Persistence

* Refined the settings persistence logic.

### UI/UX Improvements

* Eliminated the `Apply` button to improve the user experience.
* Implemented listeners for each field to automatically track and apply setting modifications.
* Implemented tooltips for key fields to provide additional guidance.
* Unified the UI look and feel across the entire extension.

***

## 0.2.2 (22/05/2023)

### Bug Fixes

* Resolved a UI bug that caused the `Parameter #` field in the `Local LLM` view to remain enabled even when text was entered in the `LLM directory` field.
* General bug fixes and performance improvements.

### Logging Enhancements

* Enhanced the logging mechanism to facilitate streamlined troubleshooting of the extension.

***

## 0.2.1 (19/05/2023)

### Compatibility Enhancements

* Resolved a bug that caused compatibility issues with Python, which resulted in the Local LLM feature not functioning correctly.

### General Improvements

* Reduced the size of the BurpGPT Pro JAR from 226.3MB to 2.5MB by no longer bundling Python dependencies. Users are now required to configure their systems accordingly. For the most up-to-date installation instructions, please visit [With Hugging Face](/how-to/use-supported-local-model-providers/with-hugging-face.md).

***

## 0.2 (16/05/2023)

### Bug Fixes

* Fixed a visual bug that caused the prompt text field to collapse when using a font size larger than 12.

### Performance Improvements

* Improved local model processing performance by using Hugging Face's `text-generation` pipeline abstraction.
* Enhanced application stability and reliability by adding extra exception handling.

### UI/UX Improvements

* Implemented a `Dock` button for transitioning between a floating tab and a suite tab, offering enhanced flexibility in the extension's use.

***

## 0.1.1 (11/05/2023)

### General Improvements

* This update brings important bug fixes and performance improvements.

***

## 0.1 (07/05/2023)

### General Improvements

* We are thrilled to bring you the first release of BurpGPT Pro, packed with powerful new features.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.burpgpt.app/help-and-faq/changelog.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
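The GET request above can be issued from any HTTP client. A minimal Python sketch, using only the standard library (the `build_ask_url` helper is illustrative, not part of any published client):

```python
from urllib.parse import quote

BASE_URL = "https://docs.burpgpt.app/help-and-faq/changelog.md"

def build_ask_url(question: str) -> str:
    """Build the documentation-query URL, percent-encoding the question."""
    return f"{BASE_URL}?ask={quote(question)}"

url = build_ask_url("Which release introduced multi-provider support?")
# Issue the GET with any HTTP client (requires network access), e.g.:
#   from urllib.request import urlopen
#   answer = urlopen(url).read().decode()
```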
