# How To

- [Use supported cloud-based model providers](/how-to/use-supported-cloud-based-model-providers.md): This page outlines the steps for using the integration with supported cloud-based AI model providers.
- [Use supported local model providers](/how-to/use-supported-local-model-providers.md): This page outlines the steps for using the integration with supported local model providers.
  - [With Hugging Face](/how-to/use-supported-local-model-providers/with-hugging-face.md): This page outlines the steps involved in using the local LLM integration with Hugging Face.
  - [With the Ollama provider](/how-to/use-supported-local-model-providers/with-the-ollama-provider.md): This page outlines the steps involved in using the Ollama provider.
- [Use the prompt library](/how-to/use-the-prompt-library.md): This page outlines the features provided by the prompt library.
- [Analyse HTTP traffic](/how-to/analyse-http-traffic.md): This page outlines the methods available to analyse HTTP traffic.
- [View GPT-generated results](/how-to/view-gpt-generated-results.md): This page outlines the presentation and structure of GPT-generated results.
- [Test and validate model provider settings](/how-to/test-and-validate-model-provider-settings.md): This page outlines how to test your configured settings for a model provider, with additional details specific to using the Hugging Face provider.
