# With the Ollama provider

{% hint style="warning" %}
To use this feature, you must first [install Ollama](https://ollama.com/download) on your machine and ensure that the models you want to query are already downloaded and available locally. For detailed information on installing `Ollama` and managing models, please refer to the [official Ollama documentation](https://github.com/ollama/ollama/tree/main/docs).
{% endhint %}

To configure and use the `Ollama` model provider, follow these steps:

1. Go to the `Provider settings` tab.
2. Select `Ollama` from the `Model provider` dropdown.
3. (Optional) The `Base URL` field defaults to `Ollama`’s standard local address, `http://localhost:11434`. If your `Ollama` instance listens on a different host or port, update this field accordingly.
4. In the `Model` field, enter the name of a model already installed in your `Ollama` environment. To list available models, run:

{% code fullWidth="false" %}

```powershell
ollama list
```

{% endcode %}
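If a model you want to use does not appear in the list, download it first with `ollama pull` (the model name below is only an example — substitute any model available in the Ollama library):

{% code fullWidth="false" %}

```powershell
ollama pull llama3.2
```

{% endcode %}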

5. (Optional) Adjust the `JSON`-formatted settings in the `Request parameters` field to fine-tune the model’s completion behaviour. Refer to the [detailed list of valid parameters and their accepted values](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values) for more information.
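   As an illustration, a `Request parameters` value such as the following uses parameters documented in the link above (the values shown are arbitrary examples, not recommendations):

   ```json
   {
     "temperature": 0.2,
     "top_p": 0.9,
     "num_ctx": 4096,
     "num_predict": 512
   }
   ```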
6. Set your prompt, then either send requests for individual scans or enable processing for all active scans.
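The application handles the requests for you, but for troubleshooting it can help to know roughly what is sent over the wire. The sketch below is a minimal, hypothetical reconstruction of a single completion request against `Ollama`’s `/api/generate` endpoint, assuming the default `Base URL`; it is not the application’s actual implementation.

```python
import json
import urllib.request

DEFAULT_BASE_URL = "http://localhost:11434"  # Ollama's default address


def build_payload(model, prompt, options=None):
    """Assemble the JSON body expected by Ollama's /api/generate endpoint.

    `options` corresponds to the Request parameters field, e.g.
    {"temperature": 0.2}. Omitted entirely when not provided.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    if options:
        payload["options"] = options
    return payload


def generate(model, prompt, options=None, base_url=DEFAULT_BASE_URL):
    """Send one completion request; requires a running Ollama instance."""
    body = json.dumps(build_payload(model, prompt, options)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns a JSON object whose "response" key holds the text.
        return json.loads(resp.read())["response"]
```

Calling `generate("llama3.2", "Summarize this scan result: ...")` would return the model’s completion, provided the named model is installed locally.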
