With Hugging Face

This page outlines the steps involved in using the Local LLM integration.

  1. Go to the Server tab.

  2. Start the server by clicking the Start server button. The initial launch may take some time, so please wait until the message Server is running on port <PORT> appears. You can view the server status, including the PID of the running process, at the bottom of the view.
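If you want to confirm that the server is actually listening once the message appears, a minimal check such as the following can be run outside Burp. The port number is whatever the status message reports; the value used here is only a placeholder.

```python
import socket

# Hedged example: check that the local BurpGPT Pro server accepts connections.
# Replace PORT with the port shown in the "Server is running on port <PORT>"
# message; 8000 below is only a placeholder.
PORT = 8000

try:
    with socket.create_connection(("127.0.0.1", PORT), timeout=5):
        print(f"Local server is accepting connections on port {PORT}")
except OSError as exc:
    print(f"Could not reach the local server on port {PORT}: {exc}")
```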

The local server powers the local LLM capabilities of BurpGPT Pro. All computations are performed locally, ensuring full data privacy for your prompts and HTTP traffic.
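To illustrate what running locally means in practice, here is a minimal sketch of Hugging Face inference on your own machine using the transformers library; BurpGPT Pro performs the equivalent steps for you through its local server. The model name and prompt are purely illustrative.

```python
from transformers import pipeline

# Minimal sketch: once the model files are cached locally, text generation runs
# entirely on this machine; no prompt or HTTP traffic is sent to a remote API.
generator = pipeline("text-generation", model="gpt2")  # model name is illustrative

prompt = "Explain why reflected XSS is dangerous:"
result = generator(prompt, max_length=128, num_return_sequences=1)
print(result[0]["generated_text"])
```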

  3. If access to the system PATH is restricted, enter the absolute path to the Python executable in the Python path field so the local server can start. If the field is left blank, the Python binary is detected automatically via the system PATH.

  4. Switch to the Local LLM tab and select one of the pre-built models from the Model dropdown. The number of datapoints used to train the selected model is displayed in the Model size field.

Keep in mind that the more datapoints a model was trained on, the larger the resulting model. In some cases the model size runs into the gigabytes, which may increase processing time for your queries.

When selecting models from the Hugging Face Hub, it is recommended to choose instruct models, typically suffixed with -it or -instruct; these work best with BurpGPT Pro. The built-in list includes models provided by Google, Meta, Microsoft, and the OpenAI Community.
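If you want to gauge how large a model is before selecting it, a rough estimate can be derived from its parameter count, as in the sketch below. The model name and the bytes-per-parameter figure are assumptions made for illustration.

```python
from transformers import AutoModelForCausalLM

# Illustrative sketch: estimate a model's on-disk footprint from its parameter
# count. "gpt2" is used only because it is small; instruct models suffixed with
# -it or -instruct are loaded the same way.
model = AutoModelForCausalLM.from_pretrained("gpt2")

num_params = model.num_parameters()
approx_size_gb = num_params * 4 / 1024**3  # assuming 4 bytes (float32) per parameter

print(f"Parameters: {num_params:,}")
print(f"Approximate size at float32 precision: {approx_size_gb:.2f} GB")
```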

  5. To optimise the performance of your local model, set the Max prompt length and Max token length parameters appropriately. Adjusting these parameters controls how much information you can provide to the model and how long its responses can be; a sketch of how the two limits interact follows this list.

    • Max prompt length: determines the maximum size of your prompt once the placeholders have been replaced.

    • Max token length: specifies the maximum combined length of the prompt and the model's response. This limit depends on the model type and technology; for instance, GPT-2-based models typically support a max token length of 1,024, while GPT-3-based models support 2,048.
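To make the interaction between these two limits concrete, the sketch below uses the GPT-2 tokenizer to show how many tokens a prompt consumes and how much of the 1,024-token budget remains for the response. The prompt text is purely illustrative.

```python
from transformers import AutoTokenizer

# Illustrative sketch: how the prompt length and the model's token budget interact.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "Analyse the following HTTP response for potential vulnerabilities: ..."
prompt_tokens = tokenizer(prompt)["input_ids"]

max_token_length = tokenizer.model_max_length  # 1,024 for GPT-2-based models
remaining_for_response = max_token_length - len(prompt_tokens)

print(f"Prompt uses {len(prompt_tokens)} tokens")
print(f"Tokens left for the model's response: {remaining_for_response}")
```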
