You can run AI models from the Hub locally on your machine through local apps — applications that run Hugging Face models directly on your device. To get started, navigate to a supported model's page, click "Use this model", and select your preferred local app.

The best way to check whether a local app is supported is to open the Local Apps settings and see if the app is listed. Here is a quick overview of some of the most popular local apps:
👨‍💻 To use these local apps, copy the snippets from the model card as shown in the sections below.
👷 If you're building a local app, you can learn about integrating with the Hub in this guide.
Llama.cpp is a high-performance C/C++ library for running LLMs locally, with optimized inference across a wide range of hardware, including CPUs, CUDA, and Metal.
To use Llama.cpp, navigate to the model card, click "Use this model", and copy the command.

```sh
# Load and run the model:
./llama-server -hf unsloth/gpt-oss-20b-GGUF:Q4_K_M
```

Ollama is an application that lets you run large language models locally on your computer with a simple command-line interface.
To use Ollama, navigate to the model card, click "Use this model", and copy the command.

```sh
ollama run hf.co/unsloth/gpt-oss-20b-GGUF:Q4_K_M
```
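Beyond the command line, Ollama also serves a local REST API (by default at `http://localhost:11434`), so you can script against a pulled model from any language. A minimal Python sketch — the model name and prompt here are assumptions, and the final call requires a running Ollama server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for the local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def generate(model: str, prompt: str) -> str:
    """Send the request and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


# Example (assumes the model was pulled with `ollama run` as above
# and the Ollama server is running locally):
# print(generate("hf.co/unsloth/gpt-oss-20b-GGUF:Q4_K_M", "Say hello in one sentence."))
```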
Jan is an open-source ChatGPT alternative that runs entirely offline with a user-friendly interface.
To use Jan, navigate to the model card and click “Use this model”. Jan will open and you can start chatting through the interface.
LM Studio is a desktop application that provides an easy way to download, run, and experiment with local LLMs.
To use LM Studio, navigate to the model card and click "Use this model". LM Studio will open and you can start chatting through the interface.
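Like llama.cpp's `llama-server`, LM Studio can also expose an OpenAI-compatible local server (by default at `http://localhost:1234/v1`), which lets you drive a loaded model from scripts. A minimal sketch, assuming the server is enabled and a model is loaded — the base URL and model name are assumptions about your local setup:

```python
import json
import urllib.request

# Default base URL for LM Studio's local server (yours may differ).
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def chat(model: str, user_message: str) -> str:
    """Send the request and return the assistant's reply."""
    with urllib.request.urlopen(build_chat_request(model, user_message)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


# Example (requires LM Studio's local server running with a model loaded):
# print(chat("local-model", "Summarize what a GGUF file is."))
```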