# Inference-Endpoints

## Docs

- [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index.md)
- [About Inference Endpoints](https://huggingface.co/docs/inference-endpoints/about.md)
- [Quick Start](https://huggingface.co/docs/inference-endpoints/quick_start.md)
- [API Reference (Swagger)](https://huggingface.co/docs/inference-endpoints/api_reference.md)
- [Text Embeddings Inference (TEI)](https://huggingface.co/docs/inference-endpoints/engines/tei.md)
- [Text Generation Inference (TGI)](https://huggingface.co/docs/inference-endpoints/engines/tgi.md)
- [Inference Toolkit](https://huggingface.co/docs/inference-endpoints/engines/toolkit.md)
- [Deploy with your own container](https://huggingface.co/docs/inference-endpoints/engines/custom_container.md)
- [SGLang](https://huggingface.co/docs/inference-endpoints/engines/sglang.md)
- [llama.cpp](https://huggingface.co/docs/inference-endpoints/engines/llama_cpp.md)
- [vLLM](https://huggingface.co/docs/inference-endpoints/engines/vllm.md)
- [Configuration](https://huggingface.co/docs/inference-endpoints/guides/configuration.md)
- [Runtime Logs](https://huggingface.co/docs/inference-endpoints/guides/logs.md)
- [Analytics and Metrics](https://huggingface.co/docs/inference-endpoints/guides/analytics.md)
- [Security & Compliance](https://huggingface.co/docs/inference-endpoints/guides/security.md)
- [Foundations](https://huggingface.co/docs/inference-endpoints/guides/foundations.md)
- [Create a Private Endpoint with AWS PrivateLink](https://huggingface.co/docs/inference-endpoints/guides/private_link.md)
- [Autoscaling](https://huggingface.co/docs/inference-endpoints/guides/autoscaling.md)
- [FAQs](https://huggingface.co/docs/inference-endpoints/support/faq.md)
- [Pricing](https://huggingface.co/docs/inference-endpoints/support/pricing.md)
- [Build an embedding pipeline with datasets](https://huggingface.co/docs/inference-endpoints/tutorials/embedding.md)
- [Create your own transcription app](https://huggingface.co/docs/inference-endpoints/tutorials/transcription.md)
- [Build and deploy your own chat application](https://huggingface.co/docs/inference-endpoints/tutorials/chat_bot.md)

### Inference Endpoints
https://huggingface.co/docs/inference-endpoints/index.md

# Inference Endpoints

<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hf-endpoints/inference-endpoint-doc-thumbnail-light.png"/>
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hf-endpoints/inference-endpoint-doc-thumbnail-dark.png"/>
</div>

Inference Endpoints is a managed service to deploy your AI model to production.
Here you'll find quickstarts, guides, tutorials, use cases and a lot more.

<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
  
  <a
    class="!no-underline pb-8 pr-4 block rounded-xl border border-gray-200 dark:border-gray-800 bg-gradient-to-br from-blue-50 to-white dark:from-gray-900 dark:to-gray-800 hover:shadow-xl hover:-translate-y-1 transition-all leading-none flex flex-col h-full"
    href="./quick_start"
    >
    <h3 class="font-semibold text-gray-900 dark:text-white mb-1 leading-none pt-4 mt-0 pl-4">🔥 Quickstart</h3>
    <p class="text-sm text-gray-600 dark:text-gray-400 leading-snug pl-4 flex-grow">
      Deploy a production ready AI model in minutes.
    </p>
  </a>

  <a 
    class="!no-underline pb-8 pr-4 block rounded-xl border border-gray-200 dark:border-gray-800 bg-gradient-to-br from-indigo-50 to-white dark:from-gray-900 dark:to-gray-800 hover:shadow-xl hover:-translate-y-1 transition-all leading-none flex flex-col h-full"
    href="./about"
    >
    <h3 class="font-semibold text-gray-900 dark:text-white mb-1 leading-none pt-4 mt-0 pl-4">🔍 How Inference Endpoints Works</h3>
    <p class="text-sm text-gray-600 dark:text-gray-400 leading-snug pl-4 flex-grow">
      Understand the main components and benefits of Inference Endpoints.
    </p>
  </a>

  <a 
    class="!no-underline pb-8 pr-4 block rounded-xl border border-gray-200 dark:border-gray-800 bg-gradient-to-br from-red-50 to-white dark:from-gray-900 dark:to-gray-800 hover:shadow-xl hover:-translate-y-1 transition-all leading-none flex flex-col h-full"
    href="./guides/foundations"
    >
    <h3 class="font-semibold text-gray-900 dark:text-white mb-1 leading-none pt-4 mt-0 pl-4">📖 Guides</h3>
    <p class="text-sm text-gray-600 dark:text-gray-400 leading-snug pl-4 flex-grow">
      Explore our guides to learn how to configure or enable specific features on the platform.
    </p>
  </a>

  <a
    class="!no-underline pb-8 pr-4 block rounded-xl border border-gray-200 dark:border-gray-800 bg-gradient-to-br from-green-50 to-white dark:from-gray-900 dark:to-gray-800 hover:shadow-xl hover:-translate-y-1 transition-all leading-none flex flex-col h-full"
    href="./tutorials/chat_bot"
    >
    <h3 class="font-semibold text-gray-900 dark:text-white mb-1 leading-none pt-4 mt-0 pl-4">🧑‍💻 Tutorials</h3>
    <p class="text-sm text-gray-600 dark:text-gray-400 leading-snug pl-4 flex-grow">
      Step-by-step guides on common developer scenarios.
    </p>
  </a>
</div>

## Why use Inference Endpoints

Inference Endpoints makes deploying AI models to production a smooth experience. Instead of spending weeks configuring infrastructure, managing servers, and debugging deployment issues, you can focus on what matters most: your model and your users.

Our platform eliminates the complexity of AI infrastructure while providing enterprise-grade features that scale with your business needs. Whether you're a startup launching your first AI product or an enterprise team managing hundreds of models, Inference Endpoints provides the reliability, performance, and cost-efficiency you need.

**Key benefits include:**
- ⬇️ **Reduce operational overhead**: Eliminate the need for dedicated DevOps teams and infrastructure management, letting you focus on innovation.
- 🚀 **Scale with confidence**: Handle traffic spikes automatically without worrying about capacity planning or performance degradation.
- ⬇️ **Lower total cost of ownership**: Avoid the hidden costs of self-managed infrastructure including maintenance, monitoring, and security compliance.
- 💻  **Future-proof your AI stack**: Stay current with the latest frameworks and optimizations without managing complex upgrades.
- 🔥 **Focus on what matters**: Spend your time improving your models and building great user experiences, not managing servers.

## Key Features 
- 📦 **Fully managed infrastructure**: you don't need to worry about things like Kubernetes, CUDA versions, or configuring VPNs. Inference Endpoints handles all of this under the hood so you can focus on deploying your model and serving customers as fast as possible.
- ↕️ **Autoscaling**: as traffic to your model grows, you'll need more firepower. Your Inference Endpoint scales up as traffic increases and back down as it decreases, saving you unnecessary compute cost.
- 👀 **Observability**: understand and debug what's going on in your model through logs & metrics.
- 🔥 **Integrated support for open-source Inference Engines**: Whether you want to deploy your model with vLLM, TGI, or a custom container, we've got you covered!
- 🤗 **Seamless integration with the Hugging Face Hub**: Downloading model weights fast and with the correct security policies is paramount when bringing an AI model to production. With Inference Endpoints, it's easy and safe.


## Further Reading

If you're considering using Inference Endpoints in production, read these two case studies:
- [Why we're switching to Hugging Face Inference Endpoints, and maybe you should too](https://huggingface.co/blog/mantis-case-study)
- [Investing in Performance: Fine-tune small models with LLM insights - a CFM case study](https://huggingface.co/blog/cfm-case-study)

You might also find these blogs helpful:
- [🤗 LLM suggestions in Argilla with HuggingFace Inference Endpoints](https://huggingface.co/blog/alvarobartt/argilla-suggestions-via-inference-endpoints)
- [Programmatically manage Inference Endpoints](https://www.philschmid.de/inference-endpoints-iac)
- [TGI Multi-LoRA: Deploy Once, Serve 30 models](https://huggingface.co/blog/multi-lora-serving)
- [Llama 3.1 - 405B, 70B & 8B with multilinguality and long context](https://huggingface.co/blog/llama31#hugging-face-inference-endpoints)
- [Deploy MusicGen in no time with Inference Endpoints](https://huggingface.co/blog/run-musicgen-as-an-api)

Or try out the [Quick Start](./quick_start)!



<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/index.md" />

### About Inference Endpoints
https://huggingface.co/docs/inference-endpoints/about.md

# About Inference Endpoints

Inference Endpoints is a managed service to deploy your AI model to production. The infrastructure is managed and configured such that
you can focus on building your AI application. 

To get an AI model into production, you need three key components:

1. **Model Weights and Artifacts**: These are the trained parameters and files that define your AI model, stored and versioned on the
Hugging Face Hub.

2. **Inference Engine**: This is the software that loads and runs your model to generate predictions. Popular engines include vLLM, TGI, and
others, each optimized for different use cases and performance needs.

3. **Production Infrastructure**: This is what Inference Endpoints provides: a scalable, secure, and reliable environment where your model runs—handling
requests, scaling with demand, and ensuring uptime.

Inference Endpoints brings all these pieces together into a single managed service. You choose your model from the Hub, select the
inference engine, and Inference Endpoints takes care of the rest—provisioning infrastructure, deploying your model, and making it
accessible via a simple API. This lets you focus on building your application, while we handle the complexity of production AI deployment.

![about](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/about.png)

## Inference Engines

To make this possible, we've made Inference Endpoints the central place to deploy high-performance, open-source Inference Engines.

Currently we have native support for:
- vLLM
- Text-generation-inference (TGI)
- SGLang
- llama.cpp
- and Text-embeddings-inference (TEI)

For the natively supported engines we try to set sensible defaults, expose the most relevant configuration settings and collaborate closely
with the teams maintaining the Inference Engines to make sure they are optimized for production performance.

If you don't find your favourite engine here, please reach out to us at [api-enterprise@huggingface.co](mailto:api-enterprise@huggingface.co).

## Under the Hood

When you deploy an Inference Endpoint, under the hood your selected inference engine (like vLLM, TGI, SGLang, etc.) is packaged
and launched as a prebuilt Docker container. This container includes the inference engine software, your chosen model
weights and artifacts (downloaded directly from the Hugging Face Hub), and any configuration or environment variables you specify.

We manage the full lifecycle of these containers: starting, stopping, scaling (including autoscaling and scale-to-zero),
and monitoring them for health and performance. This orchestration is completely managed for you, so you don't have to worry about
the complexities of containerization, networking, or cloud resource management.

## Enterprise or Team Subscription

For more features consider subscribing to [Team or Enterprise](https://huggingface.co/enterprise).

It gives your organization finer-grained access controls, dedicated support, and more. Features include:
- Higher quotas for the most performant GPUs
- Single Sign-on (SSO)
- Access to Audit Logs
- Manage teams and projects access controls with Resource Groups
- Private storage for your repositories
- Disable the ability to create public repositories (or make repositories private by default)
- You can request a quote for contract-based invoicing, which allows for more payment options and prepaid credits
- and more! 


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/about.md" />

### Quick Start
https://huggingface.co/docs/inference-endpoints/quick_start.md

# Quick Start

In this guide you'll deploy a production-ready AI model using Inference Endpoints in only a few minutes.
Make sure you can log into the [Inference Endpoints UI](https://endpoints.huggingface.co) with your Hugging Face account, and that you have a payment
method set up. If not, you can quickly add a valid payment method in your [billing settings](https://huggingface.co/settings/billing).

## Create your endpoint

Start by navigating to the Inference Endpoints UI, and once you're logged in, you should see a button for creating a new Inference
Endpoint. Click the "New" button.

![new-button](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/1-new-button.png)

From there you'll be directed to the catalog. The Model Catalog consists of popular models with tuned configurations for one-click
deployment. You can filter by name, task, hardware, price, and much more.

![catalog](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/2-catalog.png)

In this example let's deploy the [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) model. You can find
it by searching for `llama-3.2-3b` in the search field and deploy it by clicking the card.

![llama](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/3-llama.png)

Next we'll choose the hardware and deployment settings. Since this is a catalog model, the pre-selected options are good
defaults, so in this case we don't need to change anything. If you want a deeper dive into what the different settings mean, check out
the [configuration guide](./guides/configuration).

For this model the Nvidia L4 is the recommended choice: performant but still reasonably priced, and perfect for our testing. Also note that by
default the endpoint will scale down to zero after 1 hour of inactivity.

Now all you need to do is click "Create Endpoint" 🚀

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/4-config.png)

Now our Inference Endpoint is initializing, which usually takes about 3-5 minutes. If you want, you can allow browser notifications, which will
ping you once the endpoint reaches a running state.

![init](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/5-init.png)

## Test your Inference Endpoint

Once everything is up and running you'll be able to see:
- **Endpoint URL**: this is what you use to call your endpoint and send requests to it
- **Playground**: a small visual way of quickly testing that the model works

![done](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/6-done.png)

From the side of the playground you can also copy and paste a code snippet for calling the model. By clicking "App Tokens" you'll be directed to Hugging Face
to configure an access token for calling the model. By default, all Inference Endpoints are created as private, which requires authentication, and
all data is encrypted in transit using TLS/SSL.
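
For example, you can call the endpoint with a few lines of Python. This is a hedged sketch assuming the endpoint runs a chat-capable engine (such as TGI or vLLM) that exposes the OpenAI-compatible `/v1/chat/completions` route; `ENDPOINT_URL` and `HF_TOKEN` are placeholders for your own endpoint URL and access token.

```python
import os
import requests

# Placeholders: the Endpoint URL from the overview page and a token that can call it.
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = os.environ["HF_TOKEN"]

response = requests.post(
    f"{ENDPOINT_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={
        "model": "meta-llama/Llama-3.2-3B-Instruct",  # some engines ignore this field
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```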

Congratulations, you just deployed a production ready AI model in Inference Endpoints 🔥

Once you're happy with the testing you can pause the Inference Endpoint or delete it. Or, if you leave it running, it will scale to zero after 1 hour.



<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/quick_start.md" />

### API Reference (Swagger)
https://huggingface.co/docs/inference-endpoints/api_reference.md

# API Reference (Swagger)

Inference Endpoints can be used through the [UI](https://endpoints.huggingface.co/endpoints) and programmatically through an API.
Here you'll find the [OpenAPI specification](https://api.endpoints.huggingface.cloud/) for each available route, which you can call directly
or through the [Hugging Face Hub Python client](https://huggingface.co/docs/huggingface_hub/guides/inference_endpoints).
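
As a rough sketch of the programmatic route, the `huggingface_hub` library wraps this API. The endpoint name below is hypothetical, and method and parameter names may differ slightly between library versions, so check the client guide linked above.

```python
from huggingface_hub import get_inference_endpoint

# "llama-3-2-3b-instruct" is a hypothetical endpoint name in your namespace.
endpoint = get_inference_endpoint("llama-3-2-3b-instruct")

print(endpoint.status)  # e.g. "running", "paused", or "scaledToZero"
endpoint.pause()        # stop the endpoint while keeping its configuration
endpoint.resume()       # start it again
endpoint.wait()         # block until it is back in a running state
```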

<iframe src="https://api.endpoints.huggingface.cloud/"  style='height: 60vh; width: 100%;' frameborder="0" id="iframe">Browser not compatible.</iframe>



<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/api_reference.md" />

### Text Embeddings Inference (TEI)
https://huggingface.co/docs/inference-endpoints/engines/tei.md

# Text Embeddings Inference (TEI)

Text Embeddings Inference (TEI) is a robust, production-ready engine designed for fast and efficient generation of text
embeddings from a wide range of models. Built for scalability and reliability, TEI streamlines the deployment
of embedding models for search, retrieval, clustering, and semantic understanding tasks.

Key Features:
- **Efficient Resource Utilization**: Benefit from small Docker images and rapid boot times.
- **Dynamic Batching**: TEI incorporates token-based dynamic batching, optimizing resource utilization during inference.
- **Optimized Inference**: TEI uses optimized transformers code for inference, leveraging Flash Attention, Candle, and cuBLASLt.
- **Support for models** in both the Safetensors and ONNX formats
- **Production-Ready**: TEI supports distributed tracing through Open Telemetry and exports Prometheus metrics.

## Configuration

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tei/tei.png)

- **Max Tokens (per batch)**: Number of tokens that can be added to a batch before forcing queries to wait in the internal queue. 
- **Max Concurrent Requests**: The maximum number of requests that the server can handle at once.
- **Pooling**: Setting to override the model pooling configuration. Default is not to override the model configuration.
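
Once deployed, a TEI endpoint can be queried over its `/embed` route, which accepts a string or a list of strings under `inputs`. The sketch below assumes a running TEI endpoint; `ENDPOINT_URL` and `HF_TOKEN` are placeholders.

```python
import os
import requests

# Placeholders for your own TEI endpoint URL and access token.
ENDPOINT_URL = "https://<your-tei-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = os.environ["HF_TOKEN"]

response = requests.post(
    f"{ENDPOINT_URL}/embed",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"inputs": ["What is deep learning?", "What is a transformer?"]},
    timeout=30,
)
response.raise_for_status()
embeddings = response.json()  # one embedding vector per input string
print(len(embeddings), len(embeddings[0]))
```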

## Supported models

You can find the models that are supported by TEI by either:
- Browsing supported models on the [Hugging Face Hub](https://huggingface.co/models?other=text-embeddings-inference&sort=trending)
- Checking the [supported models](https://huggingface.co/docs/text-embeddings-inference/supported_models) section in the TEI documentation

## References

We also recommend reading the [TEI documentation](https://huggingface.co/docs/text-embeddings-inference/index) for more in-depth information.


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/tei.md" />

### Text Generation Inference (TGI)
https://huggingface.co/docs/inference-endpoints/engines/tgi.md

# Text Generation Inference (TGI)

TGI is a production-grade inference engine built in Rust and Python, designed for high-performance
serving of open-source LLMs (e.g. LLaMA, Falcon, StarCoder, BLOOM and many more).
The core features that make TGI a good choice are:
- **Continuous batching + streaming**: Dynamically groups in-flight requests and streams tokens via Server-Sent Events (SSE)
- **Optimized attention & decoding**: TGI uses Flash Attention, Paged Attention, KV-caching, and custom CUDA kernels for latency and memory efficiency
- **Quantization & weight loading speed**: Supports quantization methods like bitsandbytes and GPTQ, and uses Safetensors to reduce load times
- **Production readiness**: Fully OpenAI-compatible `/v1/chat/completions` and `/v1/completions` APIs, Prometheus metrics, OpenTelemetry tracing, watermarking, logit controls, JSON schema guidance
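
Because TGI streams tokens over Server-Sent Events (SSE) through its OpenAI-compatible route, you can consume the output incrementally. The following is a hedged sketch, with `ENDPOINT_URL` and `HF_TOKEN` as placeholders for your own deployment.

```python
import json
import os
import requests

# Placeholders for your own TGI endpoint URL and access token.
ENDPOINT_URL = "https://<your-tgi-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = os.environ["HF_TOKEN"]

with requests.post(
    f"{ENDPOINT_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={
        "messages": [{"role": "user", "content": "Write a haiku about GPUs."}],
        "max_tokens": 128,
        "stream": True,
    },
    stream=True,
    timeout=60,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        # SSE lines look like: data: {...}  (with a final "data: [DONE]")
        if not line or not line.startswith(b"data:"):
            continue
        payload = line.removeprefix(b"data:").strip()
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
```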

By default, the TGI version will be the latest available one (with some delay), but you can also specify a different version by [changing
the container URL](https://raw.githubusercontent.com/not-here).

## Configuration

When selecting a model to deploy, the Inference Endpoints UI automatically checks whether a model is supported by TGI. If it is, you'll see
the option presented under `Container Configuration` where you can change the following settings:

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tgi/tgi_config.png)

- **Quantization**: Which quantization method, if any, to use for the model.
- **Max Number of Tokens (per query)**: Changes the maximum number of tokens a request can contain.
For example, a value of `1512` means users can send either a prompt of `1000` tokens and generate `512` new tokens,
or send a prompt of `1` token and generate `1511` new tokens. The larger this value, the more memory each request
takes up, and the less effective batching can be.
- **Max Input Tokens (per query)**: The maximum number of input tokens, meaning the amount of tokens in the prompt. 
- **Max Batch Prefill Tokens**: Limits the number of tokens for the prefill operation. Prefill tokens are the ones sent in with the user prompt. 
- **Max Batch Total Tokens**: This changes the total amount of potential tokens within a batch. Together with `Max Number of Tokens`,
this determines how many concurrent requests you can serve. If you set `Max Number of Tokens` to 100 and `Max Batch Total Tokens` to 100 as well,
you can only serve one request at a time.
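
To make the relationship between the last two settings concrete, here is a simplified, back-of-the-envelope view (the real scheduler packs requests dynamically, so treat this as an illustration only):

```python
# Illustrative only: worst-case concurrency when every request uses its full budget.
max_total_tokens = 1512          # "Max Number of Tokens" per request
max_batch_total_tokens = 12096   # total token budget across a batch

concurrent_requests = max_batch_total_tokens // max_total_tokens
print(concurrent_requests)  # 8 full-length requests can be in flight at once
```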

In general zero-configuration (see below) is recommended for most cases. TGI supports several other configuration parameters and you'll find a complete list
in the [TGI documentation](https://huggingface.co/docs/text-generation-inference/reference/launcher#text-generation-launcher-arguments). These can all be
set by passing the values as environment variables to the container, [link to guide](https://huggingface.co/no-link-yet).

## Zero configuration
Introduced in TGI v3, the zero-config mode helps you get the most out of your hardware without manual configuration and trial & error.
If you leave the values undefined, TGI will automatically select, on server startup and based on the hardware it's running on, the maximum possible values
for the max input length, max number of tokens, max batch prefill tokens, and max batch total tokens. This means you'll use your hardware to its full capacity.

<Tip>
Note that there's a caveat: say you're deploying `meta-llama/Llama-3.3-70B-Instruct`, which has a context length of 128k tokens.
But you're on a GPU where you can only fit the model's context three times in memory. So if you want to serve the model with full context length,
you can only serve up to 3 concurrent requests. In some cases, it's fine to drop the maximum context length to 64k tokens, which would
allow the server to process 6 concurrent requests.
You can configure this by setting the max input length to 64k and letting TGI auto-configure the rest.
</Tip>

## Supported models

You can find the models that are supported by TGI by:
- Browsing supported models on the [Hugging Face Hub](https://huggingface.co/models?apps=tgi&sort=trending)
- Checking the [supported models](https://huggingface.co/docs/text-generation-inference/supported_models) section in the TGI documentation
- Exploring a selection of popular models in the [Inference Endpoints Catalog](https://endpoints.huggingface.co/huggingface/catalog)

If a model is supported by TGI, the Inference Endpoints UI will indicate this by disabling/enabling the selection under `Container Type` configuration.
![selection](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tgi/tgi_selection.png)

## References

We also recommend reading the [TGI documentation](https://huggingface.co/docs/text-generation-inference) for more in-depth information.

<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/tgi.md" />

### Inference Toolkit
https://huggingface.co/docs/inference-endpoints/engines/toolkit.md

# Inference Toolkit

In some cases, the model you're looking to deploy isn't supported by any of the high-performance inference engines. In this case,
we provide a fallback option. The Inference Toolkit supports models that are implemented in the
Transformers, Sentence-Transformers and Diffusers libraries, and wraps them in a light web server.

The Inference Toolkit is perfect for testing models and building demos, but isn't as production-ready as TGI, vLLM, SGLang, or llama.cpp.
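
As a rough sketch, an endpoint served by the Inference Toolkit takes a JSON payload with an `inputs` field (the same convention the custom handlers below rely on). `ENDPOINT_URL` and `HF_TOKEN` are placeholders, and the exact response shape depends on the task.

```python
import os
import requests

# Placeholders for your own endpoint URL and access token.
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = os.environ["HF_TOKEN"]

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"inputs": "I love this movie!"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. [{"label": ..., "score": ...}] for text classification
```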


# Create a custom Inference Handler

Hugging Face Endpoints supports all of the Transformers and Sentence-Transformers tasks and can support custom tasks, including
custom pre- & post-processing. The customization can be done through a
[handler.py](https://huggingface.co/philschmid/distilbert-onnx-banking77/blob/main/handler.py) file in your model repository on
the Hugging Face Hub.

The [handler.py](https://huggingface.co/philschmid/distilbert-onnx-banking77/blob/main/handler.py) needs to implement
the [EndpointHandler](https://huggingface.co/philschmid/distilbert-onnx-banking77/blob/main/handler.py) class with an
`__init__` and a `__call__` method.

If you want to use custom dependencies, e.g. [optimum](https://github.com/huggingface/optimum), the dependencies must
be listed in a `requirements.txt` file in your model repository, as shown in the tutorial below.

## Tutorial

Before creating a Custom Handler, you need a Hugging Face Model repository with your model weights and an Access Token with
_write_ access to the repository. To find, create and manage Access Tokens, click [here](https://huggingface.co/settings/tokens).

If you want to write a Custom Handler for an existing model from the community, you can use the [repo_duplicator](https://huggingface.co/spaces/osanseviero/repo_duplicator)
to create a repository fork.

The code can also be found in this [Notebook](https://colab.research.google.com/drive/1hANJeRa1PK1gZaUorobnQGu4bFj4_4Rf?usp=sharing).

You can also search for already existing Custom Handlers here: [https://huggingface.co/models?other=endpoints-template](https://huggingface.co/models?other=endpoints-template)

### 1. Set up Development Environment

The easiest way to develop your custom handler is to set up a local development environment, implement, test, and iterate there, and then
deploy it as an Inference Endpoint. The first step is to install all required development dependencies (these are only needed to create the
custom handler, not for inference).

```
# install git-lfs to interact with the repository
sudo apt-get update
sudo apt-get install git-lfs
# install transformers (not needed since it is installed by default in the container)
pip install transformers[sklearn,sentencepiece,audio,vision]
```

After we have installed our libraries we will clone our repository to our development environment.

We will use [philschmid/distilbert-base-uncased-emotion](https://huggingface.co/philschmid/distilbert-base-uncased-emotion) during the
tutorial.

```
git lfs install
git clone https://huggingface.co/philschmid/distilbert-base-uncased-emotion
```

To be able to push our model repository later, you need to log in to your HF account. This can be done by using the `huggingface-cli`.

_Note: Make sure to configure git config as well._

```
# setup cli with token
huggingface-cli login
git config --global credential.helper store
```

### 2. Create EndpointHandler

After we have set up our environment, we can start creating our custom handler. The custom handler is a Python class
(`EndpointHandler`) inside a `handler.py` file in our repository. The `EndpointHandler` needs to implement an `__init__` and a
`__call__` method.

- The `__init__` method will be called when starting the Endpoint and will receive one argument, a string with the path to your model
weights. This allows you to load your model correctly.
- The `__call__` method will be called on every request and receives your request body as a Python dictionary.
It will always contain the `inputs` key.

The first step is to create our `handler.py` in the local clone of our repository.

```
!cd distilbert-base-uncased-emotion && touch handler.py
```

In there, you define your `EndpointHandler` class with the `__init__` and `__call__` methods.

```python
from typing import Dict, List, Any

class EndpointHandler():
    def __init__(self, path=""):
        # Preload all the elements you are going to need at inference.
        # pseudo:
        # self.model = load_model(path)
        pass

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        data args:
            inputs (:obj: `str` | `PIL.Image` | `np.array`)
            kwargs
        Return:
            A :obj:`list` | `dict`: will be serialized and returned
        """
        # pseudo:
        # return self.model(data["inputs"])
        pass
```

### 3. Customize EndpointHandler

Now, you can add all of the custom logic you want to use during initialization or inference to your Custom Endpoint. You can
already find multiple [Custom Handlers on the Hub](https://huggingface.co/models?other=endpoints-template) if you need some
inspiration. In our example, we will add a custom condition based on additional payload information.

*The model we are using in the tutorial is fine-tuned to detect emotions. We will add an additional payload field for the date, and
will use an external package to check if it is a holiday, to add a condition so that when the input date is a holiday, the model
returns "happy" - since everyone is happy when there are holidays* 🌴🎉😆

First, we need to create a new `requirements.txt`, add our [holiday detection package](https://pypi.org/project/holidays/), and make
sure it is installed in our development environment as well.

```
!echo "holidays" >> requirements.txt
!pip install -r requirements.txt
```

Next, we have to adjust our `handler.py` and `EndpointHandler` to match our condition.

```python
from typing import Dict, List, Any
from transformers import pipeline
import holidays

class EndpointHandler():
    def __init__(self, path=""):
        self.pipeline = pipeline("text-classification", model=path)
        self.holidays = holidays.US()

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        data args:
            inputs (:obj: `str`)
            date (:obj: `str`)
        Return:
            A :obj:`list` | `dict`: will be serialized and returned
        """
        # get inputs
        inputs = data.pop("inputs", data)
        date = data.pop("date", None)

        # check if date exists and if it is a holiday
        if date is not None and date in self.holidays:
            return [{"label": "happy", "score": 1}]

        # run normal prediction
        prediction = self.pipeline(inputs)
        return prediction
```

### 4. Test EndpointHandler

To test our EndpointHandler, we can simply import, initialize, and test it. For this, we only need to prepare a sample payload.

```python
from handler import EndpointHandler

# init handler
my_handler = EndpointHandler(path=".")

# prepare sample payload
non_holiday_payload = {"inputs": "I am quite excited how this will turn out", "date": "2022-08-08"}
holiday_payload = {"inputs": "Today is a tough day", "date": "2022-07-04"}

# test the handler
non_holiday_pred = my_handler(non_holiday_payload)
holiday_pred = my_handler(holiday_payload)

# show results
print("non_holiday_pred", non_holiday_pred)
print("holiday_pred", holiday_pred)

# non_holiday_pred [{'label': 'joy', 'score': 0.9985942244529724}]
# holiday_pred [{'label': 'happy', 'score': 1}]
```

It works!!!! 🎉

_Note: If you are using a notebook you might have to restart your kernel when you make changes to the handler.py since it is not
automatically re-imported._

### 5. Push the Custom Handler to your repository

After you have successfully tested your handler locally, you can push it to your repository by simply using basic git commands.

```
# add all our new files
!git add *
# commit our files
!git commit -m "add custom handler"
# push the files to the hub
!git push
```

Now, you should see your `handler.py` and `requirements.txt` in your repository in the
[“Files and versions”](https://huggingface.co/philschmid/distilbert-base-uncased-emotion/tree/main) tab.

### 6. Deploy your Custom Handler as an Inference Endpoint

The last step is to deploy your Custom Handler as an Inference Endpoint. You can deploy your Custom Handler like you would a regular
Inference Endpoint. Add your repository, select your cloud and region, your instance and security setting, and deploy.

When creating your Endpoint, the Inference Endpoint Service will check for an available and valid `handler.py`, and will use it for
serving requests no matter which “Task” you select.

_Note: In your [Inference Endpoints dashboard](https://ui.endpoints.huggingface.co/), the Task for this Endpoint should now be set
to Custom._

## Examples

There are a few examples on the [Hugging Face Hub](https://huggingface.co/models?other=endpoints-template) that you can take
inspiration from or use directly. The repositories are tagged with `endpoints-template` and can be found under this
[link](https://huggingface.co/models?other=endpoints-template).

You'll find examples for:

* [Optimum and ONNX Runtime](https://huggingface.co/philschmid/distilbert-onnx-banking77)
* [Image Embeddings with BLIP](https://huggingface.co/florentgbelidji/blip_image_embeddings)
* [TrOCR for OCR Detection](https://huggingface.co/philschmid/trocr-base-printed)
* [Optimized Sentence Transformers with Optimum](https://huggingface.co/philschmid/all-MiniLM-L6-v2-optimum-embeddings)
* [Pyannote Speaker diarization](https://huggingface.co/philschmid/pyannote-speaker-diarization-endpoint)
* [LayoutLM](https://huggingface.co/philschmid/layoutlm-funsd)
* [Flair NER](https://huggingface.co/philschmid/flair-ner-english-ontonotes-large)
* [GPT-J 6B Single GPU](https://huggingface.co/philschmid/gpt-j-6B-fp16-sharded)
* [Donut Document understanding](https://huggingface.co/philschmid/donut-base-finetuned-cord-v2)
* [SetFit classifier](https://huggingface.co/philschmid/setfit-ag-news-endpoint)


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/toolkit.md" />

### Deploy with your own container
https://huggingface.co/docs/inference-endpoints/engines/custom_container.md

# Deploy with your own container

If the available Inference Engines don't meet your requirements, you can deploy your own custom solution as a Docker container and run
it on Inference Endpoints. You can use public images like `tensorflow/serving:2.7.3` or private images hosted on
[Docker Hub](https://hub.docker.com/), [AWS ECR](https://aws.amazon.com/ecr/?nc1=h_ls),
[Azure ACR](https://azure.microsoft.com/de-de/services/container-registry/), or [Google GCR](https://cloud.google.com/container-registry?hl=de).

![custom container](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/custom_container/custom-container.png)

The [creation flow](/docs/inference-endpoints/guides/create_endpoint) for your image artifacts from a custom image is the same as for the
base image. This means Inference Endpoints will create a unique image artifact derived from your provided image, including all model
artifacts.

The model artifacts (weights) are stored under `/repository`. For example, if you use `tensorflow/serving` as your custom image,
then you have to set `model_base_path="/repository"`:

```
tensorflow_model_server \
  --rest_api_port=5000 \
  --model_name=my_model \
  --model_base_path="/repository"
```
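
As a hedged illustration of what calling such a deployment might look like, TensorFlow Serving's REST API serves predictions under `/v1/models/<model_name>:predict`. This assumes the endpoint routes traffic to the container's REST port configured above; `ENDPOINT_URL`, the token, and the input shape are placeholders.

```python
import os
import requests

# Placeholders for your own custom-container endpoint and access token.
ENDPOINT_URL = "https://<your-custom-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = os.environ["HF_TOKEN"]

response = requests.post(
    f"{ENDPOINT_URL}/v1/models/my_model:predict",
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"instances": [[1.0, 2.0, 5.0]]},  # input shape depends on your model
    timeout=30,
)
response.raise_for_status()
print(response.json()["predictions"])
```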


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/custom_container.md" />

### SGLang
https://huggingface.co/docs/inference-endpoints/engines/sglang.md

# SGLang

SGLang is a fast serving framework for large language models and vision language models. It's very similar to TGI and vLLM and comes
with production-ready features.

The core features include:
- **Fast Backend Runtime**:
    - efficient serving with RadixAttention for prefix caching
    - zero-overhead CPU scheduler
    - continuous batching, paged attention, tensor parallelism and pipeline parallelism
    - expert parallelism, structured outputs, chunked prefill, quantization (FP8/INT4/AWQ/GPTQ), and multi-lora batching

- **Extensive Model Support**: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.),
embedding models (e5-mistral, gte, mcdse) and reward models (Skywork), with easy extensibility for integrating new models.

## Configuration

![sglang](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/sglang/sglang.png)

- **Max Running Requests**: the maximum number of concurrent requests
- **Max Prefill Tokens** (per batch): the maximum number of tokens that can be processed in a single prefill operation. This controls the batch size for the prefill phase and helps manage memory usage during prompt processing.
- **Chunked prefill size**: sets how many tokens are processed at once during the prefill phase. If a prompt is longer than this value,
it will be split into smaller chunks and processed sequentially to avoid out-of-memory errors during prefill with long prompts.
For example, setting `--chunked-prefill-size 4096` means each chunk will have up to 4096 tokens processed at a time. Setting this to `-1`
disables chunked prefill.
- **Tensor Parallel Size**: the number of GPUs to use for tensor parallelism. This enables model sharding across multiple GPUs
to handle larger models that don't fit on a single GPU. For example, setting this to 2 will split the model across 2 GPUs.
- **KV Cache DType**: the data type used for storing the key-value cache during generation. Options include "auto", "fp8_e5m2",
and "fp8_e4m3". Using lower precision types can reduce memory usage but may slightly impact generation quality.

For more advanced configuration you can pass any of the [Server Arguments that SGLang supports](https://docs.sglang.ai/backend/server_arguments.html)
as container arguments. For example, changing the `schedule-policy` to `lpm` would look like this:

![sglang-advanced](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/sglang/sglang-advanced.png)

## Supported models

SGLang has wide support for large language models, multimodal language models, embedding models and more. We recommend reading the
[supported models](https://docs.sglang.ai/supported_models/generative_models.html) section in the SGLang documentation for a full list.

In the Inference Endpoints UI, by default, any model on the Hugging Face Hub that has a `transformers` tag can be deployed with SGLang.
This is because SGLang [implements a fallback](https://docs.sglang.ai/supported_models/transformers_fallback.html#transformers-fallback-in-sglang) to use Transformers
if SGLang doesn't have its own implementation of a model.

## References

We also recommend reading the [SGLang documentation](https://docs.sglang.ai/) for more in-depth information.

<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/sglang.md" />

### llama.cpp
https://huggingface.co/docs/inference-endpoints/engines/llama_cpp.md

# llama.cpp 

llama.cpp is a high-performance inference engine written in C/C++, tailored for running Llama and compatible models in the GGUF format.

Core features:
- **GGUF Model Support**: Native compatibility with the GGUF format and all quantization types that come with it.
- **Multi-Platform**: Optimized for both CPU and GPU execution, with support for AVX, AVX2, AVX512, and CUDA acceleration.
- **OpenAI-Compatible API**: Provides endpoints for chat, completion, embedding, and more, enabling seamless integration with existing tools and workflows.
- **Active Community and Ecosystem**: Rapid development and a rich ecosystem of tools, extensions, and integrations.


When you create an endpoint with a [GGUF](https://huggingface.co/docs/hub/en/gguf) model,
a [llama.cpp](https://github.com/ggerganov/llama.cpp) container is automatically selected
using the latest image built from the `master` branch of the llama.cpp repository.
Upon successful deployment, a server with an OpenAI-compatible endpoint becomes available.

llama.cpp supports multiple endpoints like `/tokenize`, `/health`, `/embedding`, and many more. For a comprehensive list of available endpoints, please refer to the [API documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md#api-endpoints).
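
As a hedged sketch of what querying such a deployment looks like, the snippet below checks the `/health` route and then sends a chat request to the OpenAI-compatible API. `ENDPOINT_URL` and `HF_TOKEN` are placeholders for your own endpoint.

```python
import os
import requests

# Placeholders for your own llama.cpp endpoint URL and access token.
ENDPOINT_URL = "https://<your-llamacpp-endpoint>.endpoints.huggingface.cloud"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# Readiness probe: 200 means the server has loaded the model.
print(requests.get(f"{ENDPOINT_URL}/health", headers=headers, timeout=10).status_code)

# Chat completion via the OpenAI-compatible route.
response = requests.post(
    f"{ENDPOINT_URL}/v1/chat/completions",
    headers=headers,
    json={
        "messages": [{"role": "user", "content": "Summarize the GGUF format in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```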

## Deployment Steps

To deploy an endpoint with a llama.cpp container, follow these steps:

1. [Create a new endpoint](./create_endpoint) and select a repository containing a GGUF model. The llama.cpp container will be automatically selected.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/endpoints/llamacpp_1.png" alt="Select model" />

2. Choose the desired GGUF file, noting that memory requirements will vary depending on the selected file. For example, an F16 model requires more memory than a Q4_K_M model.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/endpoints/llamacpp_2.png" alt="Select GGUF file" />

3. Select your desired hardware configuration.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/endpoints/llamacpp_3.png" alt="Select hardware" />

4. Optionally, you can customize the container's configuration settings, such as `Max Tokens` and `Max Concurrent Requests`. For more information on those, please refer to the **Configurations** section below.

5. Click the **Create Endpoint** button to complete the deployment.

Alternatively, you can follow the video tutorial below for a step-by-step guide on deploying an endpoint with a llama.cpp container:

<video width="1280" height="720" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/endpoints/llamacpp_guide_video.mp4" controls="true" />

## Configurations

The llama.cpp container offers several configuration options that can be adjusted. After deployment, you can modify these settings by accessing the **Settings** tab on the endpoint details page.

### Basic Configurations

- **Max Tokens (per Request)**: The maximum number of tokens that can be sent in a single request.
- **Max Concurrent Requests**: The maximum number of concurrent requests allowed for this deployment. Increasing this limit requires additional memory allocation. 
For instance, setting this value to 4 requests with 1024 tokens maximum per request requires memory capacity for 4096 tokens in total.

### Advanced Configurations

In addition to the basic configurations, you can also modify specific settings by setting environment variables.
A list of available environment variables can be found in the [API documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md#usage).

Please note that the following environment variables are reserved by the system and cannot be modified:

- `LLAMA_ARG_MODEL`
- `LLAMA_ARG_HTTP_THREADS`
- `LLAMA_ARG_N_GPU_LAYERS`
- `LLAMA_ARG_EMBEDDINGS`
- `LLAMA_ARG_HOST`
- `LLAMA_ARG_PORT`
- `LLAMA_ARG_NO_MMAP`
- `LLAMA_ARG_CTX_SIZE`
- `LLAMA_ARG_N_PARALLEL`
- `LLAMA_ARG_ENDPOINT_METRICS`

## Troubleshooting

In case the deployment fails, please watch the log output for any error messages.

You can access the logs by clicking on the **Logs** tab on the endpoint details page. To learn more, refer to the [Logs](./logs) documentation.

- **Malloc failed: out of memory**  
  If you see this error message in the log:

  ```
  ggml_backend_cuda_buffer_type_alloc_buffer: allocating 67200.00 MiB on device 0: cuda
  Malloc failed: out of memory
  llama_kv_cache_init: failed to allocate buffer for kv cache
  llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
  ...
  ```

  That means the selected hardware configuration does not have enough memory to accommodate the selected GGUF model. You can try to:
  - Lower the number of maximum tokens per request
  - Lower the number of concurrent requests
  - Select a smaller GGUF model
  - Select a larger hardware configuration

- **Workload evicted, storage limit exceeded**  
  This error message indicates that the hardware has too little memory to accommodate the selected GGUF model. Try selecting a smaller model or select a larger hardware configuration.

- **Other problems**  
  For other problems, please refer to the [llama.cpp issues page](https://github.com/ggerganov/llama.cpp/issues). In case you want to create a new issue, please also include the full log output in your bug report.

<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/llama_cpp.md" />

### vLLM
https://huggingface.co/docs/inference-endpoints/engines/vllm.md

# vLLM

vLLM is a high-performance, memory-efficient inference engine for open-source LLMs. It delivers efficient scheduling, KV-cache handling,
batching, and decoding—all wrapped in a production-ready server. For most use cases, TGI, vLLM, and SGLang will be equivalently good options.

**Core features**:
- **PagedAttention for memory efficiency**
- **Continuous batching**
- **Optimized CUDA/HIP execution**
- **Speculative decoding & chunked prefill**
- **Multi-backend and hardware support**: Runs across NVIDIA, AMD, and AWS Neuron to name a few
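
Since vLLM serves an OpenAI-compatible API, you can point the `openai` Python client at your endpoint. This is a hedged sketch; `ENDPOINT_URL` is a placeholder, and the served model name is discovered via the `/v1/models` route rather than hard-coded.

```python
import os
from openai import OpenAI

# Placeholder for your own vLLM endpoint URL.
ENDPOINT_URL = "https://<your-vllm-endpoint>.endpoints.huggingface.cloud"

client = OpenAI(base_url=f"{ENDPOINT_URL}/v1", api_key=os.environ["HF_TOKEN"])

# Ask the server which model it is serving, then chat with it.
served_model = client.models.list().data[0].id
completion = client.chat.completions.create(
    model=served_model,
    messages=[{"role": "user", "content": "Explain PagedAttention in two sentences."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```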

## Configuration

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/vllm/vllm_config.png)

- **Max Number of Sequences**: The maximum number of sequences (requests) that can be processed together in a single batch. Controls
the batch size by sequence count, affecting throughput and memory usage. For example, if `max_num_seqs=8`, up to 8 different prompts can
be handled at once, regardless of their individual lengths, as long as the total token count also fits within the Max Number of Batched Tokens.
- **Max Number of Batched Tokens**: The maximum total number of tokens (summed across all sequences) that can be processed in a single
batch. Limits batch size by token count, balancing throughput and GPU memory allocation.
- **Tensor Parallel Size**: The number of GPUs across which model weights are split within each layer. Increasing this allows larger
models to run and frees up GPU memory for KV cache, but may introduce synchronization overhead.
- **KV Cache DType**: the data type used for storing the key-value cache during generation. Options include "auto", "fp8", "fp8_e5m2",
and "fp8_e4m3". Using lower precision types can reduce memory usage but may slightly impact generation quality.

For more advanced configuration you can pass any of the [Engine Arguments that vLLM supports](https://docs.vllm.ai/en/stable/api/vllm/engine/arg_utils.html#vllm.engine.arg_utils.EngineArgs)
as container arguments. For example changing the `enable_lora` to `true` would look like this:

![vllm-advanced](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/vllm/vllm-advanced.png)

## Supported models

vLLM has wide support for large language models and embedding models. We recommend reading the
[supported models](https://docs.vllm.ai/en/stable/models/supported_models.html?h=supported+models) section in the vLLM documentation for a full list.

vLLM also supports model implementations that are available in Transformers. Currently not all models work through this fallback, but most
decoder language models and vision language models are supported, with broader support planned.

## References

We also recommend reading the [vLLM documentation](https://docs.vllm.ai/en/stable/) for more in-depth information.

<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/engines/vllm.md" />

### Configuration
https://huggingface.co/docs/inference-endpoints/guides/configuration.md

# Configuration

This section describes the configuration options available when creating a new inference endpoint. Each section of
the interface allows fine-grained control over how the model is deployed, accessed, and scaled.

## Endpoint name, model and organization

In the top left you can:
- change the name of the inference endpoint
- verify to which organization you're deploying this model
- verify which model you are deploying
- and which Hugging Face Hub repo you are deploying this model from

![name-org-model](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/1-name-org-model.png)

## Hardware Configuration
The Hardware Configuration section allows you to choose the compute backend used to host the model.
You can select from three major cloud providers:
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform

![hardware](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/2-hardware.png)

You must also choose an accelerator type:
- CPU
- GPU
- INF2 (AWS Inferentia)

Additionally, you can select the deployment region (e.g., East US) using the dropdown menu. Once the
provider, accelerator, and region are chosen, a list of available instance types is displayed. Each instance tile includes:

- GPU Type and Count
- Memory (e.g., 48 GB)
- vCPUs and RAM
- Hourly Pricing (e.g., $1.80 / h)

You can select a tile to choose that instance type for your deployment. Instances that are incompatible or unavailable in the
selected region are grayed out and unclickable.

## Authentication

This section determines who can access your deployed endpoint. Available options are:
- **Private (default)**: Accessible only to you, or members of your Hugging Face organization, using a personal HF access token.
- **Public**: Anyone can access your endpoint, without authentication.
- **Authenticated**: Anyone with a Hugging Face account can access it, using their personal HF access tokens.

Additionally, if you deploy your Inference Endpoint in AWS, you can use **AWS PrivateLink** for an intra-region secured connection to your AWS VPC.

![auth](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/11-auth.png)

## Autoscaling

The Autoscaling section configures how many replicas of your model run and whether the system scales down to zero during periods of inactivity. For more
information we recommend reading the [in-depth guide on autoscaling](./autoscaling).

![autoscaling](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/4-autoscaling.png)

- **Automatic Scale-to-Zero**: A dropdown lets you choose how long the system should wait after the last request before
scaling down to zero. Default is after 1 hour with no activity.
- **Number of Replicas**:
    - Min: Minimum number of replicas to keep running. Note that enabling automatic scale-to-zero requires setting this to 0.
    - Max: Maximum number of replicas allowed (e.g., 1)
- **Autoscaling strategy**:
    - Based on hardware usage: For example, a scale-up will be triggered if the average hardware utilization (%) exceeds this threshold for more than 20 seconds.
    - Pending requests: A scale-up event will be triggered if the average number of pending requests exceeds this threshold for more than 20 seconds.

## Inference Engine Configuration
This section allows you to specify how the container hosting your model behaves. The available settings depend on the selected inference engine.
For configuration details, please read the corresponding Inference Engine section.
![inference-engine](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/9-inference-engine.png)

## Container Configuration
Here you can edit the container arguments and container command.
![container-configs](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/8-container-config.png)

## Environment Variables
Environment variables can be provided to customize container behavior or pass secrets.
- **Default Env**: Key-value pairs passed as plain environment variables.
- **Secret Env**: Key-value pairs stored securely and injected at runtime.

Each section allows you to add multiple entries using the Add button.

![env-vars](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/5-env-vars.png)

## Endpoint Tags
You can label endpoints with tags (e.g., for-testing) to help organize and manage deployments across environments or teams. In the dashboard
you will be able to filter and sort endpoints based on these tags.
Tags are plain text labels added via the Add button.

![tags](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/6-tags.png)

## Network
This section determines from where your deployed endpoint can be accessed. 

By default, your endpoint is accessible from the Internet, and secured with TLS/SSL. Endpoints deployed on an AWS instance can use AWS PrivateLink to restrict access to a specific VPC.

The available options are:
- Use AWS PrivateLink: check to activate AWS PrivateLink for your endpoint.
- AWS Account ID: You need to provide the AWS ID of the account that owns the VPC you want to restrict access to.
- PrivateLink Sharing: check to enable sharing of the same PrivateLink between different endpoints.

![network](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/10-network.png)

## Advanced Settings
Advanced Settings offer more fine-grained control over deployment.

![advanced](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/configuration/7-advanced.png)

- **Commit Revision**: Optionally specify a commit hash to pin which revision of the model repository on the Hugging Face Hub
the model artifacts are downloaded from.
- **Task**: Defines the type of model task. This is usually inferred from the model repository.
- **Container Arguments**: Pass CLI-style arguments to the container entrypoint.
- **Container Command**: Override the container entrypoint entirely.
- **Download Pattern**: Defines which model files are downloaded.

<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/configuration.md" />

### Runtime Logs
https://huggingface.co/docs/inference-endpoints/guides/logs.md

# Runtime Logs

The Logs page gives you monitoring and debugging capabilities for your deployed models. This view allows you to track the
operational status and runtime logs of your inference endpoints in real-time.

## Accessing the Logs Interface

The Logs page is accessible through the main navigation tabs within your endpoint dashboard, alongside Overview, Analytics, Usage & Cost,
and Settings. The interface displays logs for your specific model deployment.

![banner](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/logs/logs.png)

## Deployment Status Overview

At the top of the logs interface, you'll find deployment information that provides an at-a-glance view of your Inference Endpoint's current
state. The deployment status section shows the deployment identifier (for example `6pajyw3k` in the image above) and replica information
(`z7ghx`), along with the combined deployment-replica identifier (`6pajyw3k-z7ghx`).

Next to the filter, you'll find a status indicator. The interface also tracks important timestamps, showing when the endpoint was started and
when it was stopped, giving you precise timing information for deployment lifecycle management.

## Log Filtering and Display Options

The logs view provides flexible filtering capabilities to help you focus on the information most relevant to your needs. The filter
controls include toggleable options for Timestamp, Log Level, Content, and Replica information. These filters allow you to customize the
log display based on your specific debugging or monitoring requirements.

The timestamp display defaults to UTC format, ensuring consistent time references across different geographical locations and team members.

The main log display area presents a paginated view of log entries. By default, the latest 50 lines are loaded, and by clicking
"Load More" you can access additional historical log data.

<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/logs.md" />

### Analytics and Metrics
https://huggingface.co/docs/inference-endpoints/guides/analytics.md

# Analytics and Metrics

The Analytics page is like a control center for your deployed models. It tells you in real time what's going on: how many users are
calling your models, hardware usage, latencies, and much more. In this documentation we'll dive into what each metric means and
how to analyze the graphs.

![intro](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/1-intro.png)

In the top bar, you can configure the high-level view:

- Which replica to view metrics from: either an individual replica or all of them.
- Whether to view metrics related to requests, hardware, or the timeline of replicas.
- Which time frame to inspect; this setting affects all graphs on the page. You can choose one of the presets from the dropdown, or click and drag over any graph for a custom time frame. You can also enable or disable
auto refresh.

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/2-config.png)

## Understanding the graphs

### Number of (HTTP) Requests 

The first graph at the top left shows you how many requests your Inference Endpoint has received. By default they are grouped by HTTP response
classes, but by switching the toggle you can view them by individual status code. As a reminder, the HTTP response classes are:

- **Informational responses (100-199)**: The server has received your request and is working on it. For example, `102 Processing` means the server is still handling your request.
- **Successful responses (200-299)**: Your request was received and completed successfully. For example, `200 OK` means everything worked as expected.
- **Redirection messages (300-399)**: The server is telling your client to look somewhere else for the information or to take another action. For example, `301 Moved Permanently` means the resource has a new address.
- **Client error responses (400-499)**: There was a problem with the request sent by your client (like a typo in the URL or missing data). For example, `404 Not Found` means the server couldn't find what you asked for.
- **Server error responses (500-599)**: The server ran into an issue while trying to process your request. For example, `502 Bad Gateway` means the server got an invalid response from another server it tried to contact.

We recommend checking the [MDN web docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status) for more information on individual
status codes.

The boxes above the graph also show the % of requests in the respective response class.

![http](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/3-http-reqs.png)

### Pending Requests

Pending requests are requests that have not yet received an HTTP status, meaning they include in-flight requests and requests currently
being processed. If this metric increases too much, it means that your requests are queuing up, and your users have to wait for requests
to finish. In this case, you should consider increasing your number of replicas or using autoscaling; you can read more about
it in the [autoscaling guide](./autoscaling#scalingbasedonpendingrequests(betafeature)).

![pending](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/4-pending-reqs.png)

### Latency Distribution

From this graph you'll be able to see how long it takes for your Inference Endpoint to generate a response. Latency is reported as:

- **p99**: meaning that 99% of all requests were faster than this value
- **p95**: meaning that 95% of all requests were faster than this value
- **p90**: meaning that 90% of all requests were faster than this value
- **median**: meaning that 50% of all requests were faster than this value

It's also useful to look at how big the difference is between the median and p99. The closer the values are to each other, the more
uniform the latency; if the difference is large, most users of your Inference Endpoint get a fast response, but
the worst-case latencies can be long.

![latency](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/5-latency.png)

### Running Replicas

In the running replica graph, you'll see how many running replicas you have at any given point in time. The red line shows
your current maximum replicas setting.

![status](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/6-running.png)

For a more advanced view of different statuses for individual replicas, going from *pending* all the way
to *running*, you can toggle to the Timeline section. This is very useful to get a sense of how long it takes an Endpoint to become ready to serve requests.

![advanced](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/7-timeline.png)

### Compute 

These four graphs are dedicated to hardware usage. You'll find:

- CPU usage: How much processing power is being used.
- Memory usage: How much RAM is being used.
- GPU usage: How much of the GPU's processing power is being used.
- GPU Memory (VRAM) usage: How much GPU memory is being used.

![usage](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/analytics/8-usage.png)

By toggling "details" you can either view the average or per replica value for the metric in question.

If you have autoscaling based on hardware utilization enabled, these are the metrics that determine your autoscaling behavior. You can
read more about autoscaling [here](./autoscaling#scalingbasedonhardwareutilization).

## Create an integration with the Inference Endpoints OpenMetrics API

**This feature is currently in Beta. You will need to be subscribed to [Team or Enterprise](https://huggingface.co/pricing) to take advantage of this feature.**

You can export real-time metrics from your Inference Endpoints into your own monitoring stack. The Metrics API exposes metrics in the OpenMetrics format, which is widely supported by observability tools such as Prometheus, Grafana, and Datadog.

This allows you to monitor in near real-time:
- Requests grouped by replica
- Latency distributions (p50, p95, etc.)
- Hardware metrics (CPU, GPU, memory, accelerator utilization)

### Query metrics manually

You can use `curl` to query the metrics endpoint directly and inspect the raw data:
```bash
curl -X GET "https://api.endpoints.huggingface.cloud/v2/endpoint/{namespace}/{endpoint-name}/open-metrics" \
  -H "Authorization: Bearer YOUR_AUTH_TOKEN"
```

This will return metrics in OpenMetrics text format:
```bash
# HELP latency_distribution Latency distribution
# TYPE latency_distribution summary
latency_distribution{quantile="0.5"} 0.006339203
latency_distribution{quantile="0.9"} 0.007574241
latency_distribution{quantile="0.95"} 0.007994495
latency_distribution{quantile="0.99"} 0.020140918
latency_distribution_count 4
latency_distribution_sum 0.042048857
# HELP http_requests HTTP requests by code and replicas
# TYPE http_requests counter
http_requests{replica_id="fqwg7eri-hskoj",status_code="200"} 1152
http_requests{replica_id="q9cv26ut-3vo4s",status_code="200"} 1
# HELP cpu_usage_percent CPU percent
# TYPE cpu_usage_percent gauge
# UNIT cpu_usage_percent percent
```
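
If you'd rather pull these metrics from a script than from `curl`, the sketch below fetches the same route and parses the response with `prometheus_client`'s text parser. This is a minimal sketch, assuming `requests` and `prometheus_client` are installed; the namespace, endpoint name, and `HF_TOKEN` environment variable are placeholders.

```python
import os

import requests
from prometheus_client.parser import text_string_to_metric_families

# Placeholders: replace with your own namespace and endpoint name
NAMESPACE = "your-namespace"
ENDPOINT_NAME = "your-endpoint-name"
URL = f"https://api.endpoints.huggingface.cloud/v2/endpoint/{NAMESPACE}/{ENDPOINT_NAME}/open-metrics"

response = requests.get(URL, headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"})
response.raise_for_status()

# Walk every metric family in the response and print its samples (name, labels, value)
for family in text_string_to_metric_families(response.text):
    for sample in family.samples:
        print(sample.name, sample.labels, sample.value)
```

From a loop like this you could forward the samples to your own storage or alerting system on a schedule.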

### Connect with your observability tools

OpenMetrics is widely supported across monitoring ecosystems. A few common options:
- [Datadog OpenMetrics integration](https://docs.datadoghq.com/integrations/openmetrics/)
- [Grafana Prometheus datasource](https://tinyurl.com/e4fypk5m)

From there, you can set up dashboards, alerts, and reports to monitor endpoint performance.

### Subscribe to Team or Enterprise

Your organization can sign up for the Team or Enterprise plan [here](https://huggingface.co/enterprise?subscribe=true) 🚀 
For any questions or feature requests, please email us at api-enterprise@huggingface.co


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/analytics.md" />

### Security & Compliance
https://huggingface.co/docs/inference-endpoints/guides/security.md

# Security & Compliance

Inference Endpoints are built with security and secure inference at their core. Below you can find an overview of the security measures
we have in place.

## Data Security & Privacy

Hugging Face does not store any customer data such as payloads or tokens that are passed to the Inference Endpoint.
Logs are stored for 30 days. Every Inference Endpoint uses TLS/SSL to encrypt the data in transit.

We also recommend that organizations use AWS PrivateLink. This allows you to access your Inference Endpoint through a
private connection, without exposing it to the internet.

Hugging Face also offers a GDPR data processing agreement through an Enterprise Hub subscription. For more information or to
subscribe to Enterprise Hub, please visit https://huggingface.co/enterprise.

## Model Security & Privacy

You can set a model repository as private if you do not want to publicly expose it. Hugging Face does not own any model or
data you upload to the Hugging Face Hub. Hugging Face also runs malware and pickle scans over the contents of the model
repository, as it does for all items on the Hub.

## Inference Endpoints and Hub Security

The Hugging Face Hub and Inference Endpoints are SOC2 Type 2 certified. The Hugging Face Hub also offers Role Based Access Control. 

You can read more about security at Hugging Face in general in the following links:
- information on Hugging Face Hub security: https://huggingface.co/docs/hub/security. 
- information on the Enterprise Hub subscription and its premium security features: https://huggingface.co/docs/hub/enterprise-hub

<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/security-soc-1.jpg" alt="soc-1" class="max-w-[300px] w-full" />
</div>

## Inference Endpoint Security Level

We currently offer several ways of securing your Inference Endpoints through their configuration. You can read more about it in the
[security section](https://huggingface.co/docs/inference-endpoints/main/en/guides/configuration#security-level) of the configuration guide.

## Further Information

You can read the Hugging Face Privacy Policy at: https://huggingface.co/privacy


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/security.md" />

### Foundations
https://huggingface.co/docs/inference-endpoints/guides/foundations.md

# Foundations

The Inference Endpoints dashboard is the central interface to manage, monitor, and deploy inference endpoints across
multiple organizations and accounts. Users can switch between organizations, view endpoint statuses, manage quotas, and
access deployment configurations. You can access the dashboard by logging in at [endpoints.huggingface.co](https://endpoints.huggingface.co).

## Managing Endpoints

### Creating New Endpoints
Click the + New button in the top section to create a new endpoint deployment. This will take you to the Model Catalog, which
provides access to 100+ pre-configured models available for deployment as inference endpoints. Use this to browse,
filter, and deploy models directly.

![new](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/1-new.png)

If you cannot find a suitable model in the catalog you can click the "Deploy From Hugging Face" button which allows you to deploy from
any Hugging Face repository.

![catalog](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/2-catalog.png)

After this you will be directed to the configuration page. You can read more about all the configuration options [here](./configuration).

### Endpoint States
Endpoints can be in one of several states:
- **Running**: Endpoint is ready to serve requests
- **Initializing**: Endpoint is starting up 
- **Paused**: Endpoint has been stopped and does not count against your quota
- **Scaled to Zero**: Endpoint is idle and consuming no compute resources, but still counts towards your quota
- **Failed**: Endpoint encountered an error and is not operational

### Managing existing endpoints

The endpoint details page provides information and lets you control the configuration of an individual endpoint.
Access this view by clicking on any endpoint from the main endpoints list.

The endpoint name displays with its current state. You can pause a running endpoint or wake up an endpoint scaled to zero.

- **Overview**: Current status and configuration summary
- **Analytics**: Performance metrics and usage statistics, for more in-depth reading please [visit here](./analytics)
- **Logs**: Runtime logs and debugging information, more in-depth docs can be found [here](./logs)
- **Usage & Cost**: Billing information and resource consumption
- **Settings**: Configuration and management options

The page displays the configuration options that are available for each endpoint. You'll find a more in-depth walk-through of all options under
the [configuration section](./configuration).

![endpoint](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/8-endpoint.png)

## Using the Dashboard

### Viewing Endpoint Information
The endpoints table displays critical information for each deployment. Click Edit Columns to show or hide specific
information columns. Available columns include State, Task, Instance, Vendor, Container, Access, Tags, URL, Created, and Updated timestamps.

![list](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/3-list.png)

### Filtering and Search
Use the search bar to filter endpoints by name, provider, task, or tags.
The Status dropdown allows filtering by specific endpoint states.

![filter](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/4-filter.png)

### Account Management
Access account settings through the dropdown menu in the top-right corner. This provides access to organization switching,
billing information, and access token management.

![account](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/5-account.png)

## Quotas
The Quotas section displays your current resource usage and limits across different cloud providers and hardware types.
Access this view to monitor consumption and request additional capacity when needed.

Note that:
- *Paused* endpoints will not count against 'used' quota.
- *Scaled to Zero* endpoints will be counted as 'used' quota—simply pause the scaled-to-zero endpoint if you would like to unlock this quota. 

![quotas](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/6-quotas.png)

### Requesting Additional Quota
Use the Request More button to submit requests for increased limits when approaching quota thresholds. This allows you to
scale your inference deployments beyond current allocations. Or click the button below:

<a class="btn !rounded-full !text-smd" href="https://endpoints.huggingface.co/contact" target="_blank"><svg xmlns="http://www.w3.org/2000/svg" class="mr-1.5" width="1em" height="1em" fill="currentColor" stroke="currentColor" viewBox="0 0 32 32"><path d="M16 21C14.8449 21.001 13.7075 20.7158 12.6896 20.1697C11.6717 19.6237 10.805 18.8339 10.167 17.871L11.833 16.764C12.2891 17.4517 12.9083 18.0159 13.6353 18.4062C14.3624 18.7964 15.1748 19.0007 16 19.0007C16.8252 19.0007 17.6376 18.7964 18.3647 18.4062C19.0917 18.0159 19.7109 17.4517 20.167 16.764L21.833 17.871C21.195 18.8339 20.3283 19.6237 19.3104 20.1697C18.2925 20.7158 17.1551 21.001 16 21ZM20 10C19.6044 10 19.2178 10.1173 18.8889 10.3371C18.56 10.5568 18.3036 10.8692 18.1522 11.2346C18.0009 11.6001 17.9613 12.0022 18.0384 12.3902C18.1156 12.7782 18.3061 13.1345 18.5858 13.4142C18.8655 13.6939 19.2219 13.8844 19.6098 13.9616C19.9978 14.0388 20.3999 13.9991 20.7654 13.8478C21.1308 13.6964 21.4432 13.44 21.6629 13.1112C21.8827 12.7823 22 12.3956 22 12C22.0027 11.7366 21.9528 11.4754 21.8532 11.2315C21.7536 10.9876 21.6064 10.7661 21.4202 10.5798C21.2339 10.3936 21.0124 10.2464 20.7685 10.1468C20.5247 10.0472 20.2634 9.99734 20 10ZM12 10C11.6044 10 11.2178 10.1173 10.8889 10.3371C10.56 10.5568 10.3036 10.8692 10.1522 11.2346C10.0009 11.6001 9.96126 12.0022 10.0384 12.3902C10.1156 12.7782 10.3061 13.1345 10.5858 13.4142C10.8655 13.6939 11.2219 13.8844 11.6098 13.9616C11.9978 14.0388 12.3999 13.9991 12.7654 13.8478C13.1308 13.6964 13.4432 13.44 13.6629 13.1112C13.8827 12.7823 14 12.3956 14 12C14.0027 11.7366 13.9528 11.4754 13.8532 11.2315C13.7536 10.9876 13.6064 10.7661 13.4202 10.5798C13.2339 10.3936 13.0124 10.2464 12.7685 10.1468C12.5247 10.0472 12.2634 9.99734 12 10Z" stroke-width="0.2"></path><path d="M17.736 32L16 31L20 24H26C26.2628 24.0004 26.523 23.9489 26.7658 23.8486C27.0087 23.7482 27.2293 23.6009 27.4151 23.4151C27.6009 23.2293 27.7482 23.0087 27.8486 22.7658C27.9489 22.523 28.0004 22.2628 28 22V8C28.0004 7.73725 27.9489 7.477 27.8486 7.23417C27.7482 6.99134 27.6009 6.7707 27.4151 6.58491C27.2293 6.39911 27.0087 6.25181 26.7658 6.15144C26.523 6.05107 26.2628 5.9996 26 6H6C5.73725 5.9996 5.477 6.05107 5.23417 6.15144C4.99134 6.25181 4.7707 6.39911 4.58491 6.58491C4.39911 6.7707 4.25181 6.99134 4.15144 7.23417C4.05107 7.477 3.9996 7.73725 4 8V22C3.9996 22.2628 4.05107 22.523 4.15144 22.7658C4.25181 23.0087 4.39911 23.2293 4.58491 23.4151C4.7707 23.6009 4.99134 23.7482 5.23417 23.8486C5.477 23.9489 5.73725 24.0004 6 24H15V26H6C4.93913 26 3.92172 25.5786 3.17157 24.8284C2.42143 24.0783 2 23.0609 2 22V8C2 6.93913 2.42143 5.92172 3.17157 5.17157C3.92172 4.42143 4.93913 4 6 4H26C27.0609 4 28.0783 4.42143 28.8284 5.17157C29.5786 5.92172 30 6.93913 30 8V22C30 23.0609 29.5786 24.0783 28.8284 24.8284C28.0783 25.5786 27.0609 26 26 26H21.165L17.736 32Z" stroke-width="0.2"></path></svg>Request More</a>

## Audit Logs
The Audit Logs section provides a chronological record of all actions performed on your inference endpoints. You can use this
to track changes, troubleshoot issues, and maintain security oversight of your deployments.

Use the All Endpoints dropdown to filter logs by specific endpoint instances. This allows you to focus on activity for particular
deployments.

![audit](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/foundations/7-audit.png)

### Log Entry Structure
Each audit log entry contains:
- **User Avatar and name**
- **Action Type**: Type of operation performed (resumed, updated, etc.)
- **Endpoint Name**
- **Timestamp**
- **Action Details**:
    - Instance Changes: For example hardware scaling modifications
    - Configuration Updates: Parameter adjustments
    - State Changes: Operational status modifications
- **Request Metadata**: Technical details for troubleshooting:
    - IP Address: Source IP of the request
    - X-Request-Id: Unique identifier for tracking API calls


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/foundations.md" />

### Create a Private Endpoint with AWS PrivateLink
https://huggingface.co/docs/inference-endpoints/guides/private_link.md

# Create a Private Endpoint with AWS PrivateLink

AWS PrivateLink enables you to privately connect your VPC to your Inference Endpoints, without exposing your traffic to the public
internet. It uses private IP addresses to route traffic between your VPC and the service, ensuring that data never traverses the
public internet, providing enhanced security and compliance benefits.

To create a Private Endpoint, you'll need to connect your AWS account using its account ID. The following guide will walk you
through how to set it up.

## Configuring the AWS Private Link

### 1. Configure the Private Link

Under the "Security Level" setting you can toggle open the "AWS Private Link" section. The Private Link ensures the endpoint is only available through an intra-region secured AWS PrivateLink connection.

After providing your AWS Account ID and any other required information, click Create Endpoint. The endpoint creation process will begin.

![select private link](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/6_private_type.png)

After a few minutes, the endpoint will be created, and you will see the VPC Service Name in the overview. This name is necessary for
creating the VPC Interface Endpoint in your AWS account.

![vpc service name](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/6_3_vpc_ready.png)

### 2. Connect your VPC to your Interface Endpoint

Go to your AWS [console](https://console.aws.amazon.com/vpc/home?#Endpoints) and navigate to the VPC section to create the VPC Interface
Endpoint. Select "Other endpoint services" and enter the VPC Service Name provided earlier.

![add private link](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/6_4_add_private_link.png)

Verify the service name to ensure the connection is correct. Choose the VPC and subnets you wish to use for this endpoint. Make sure
they align with your security requirements.

![vpc endpoint](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/6_5_add_vpc.png)
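
If you prefer to script this step instead of using the console, the same can be done with the AWS CLI. The sketch below uses placeholder resource IDs and a made-up service name; substitute the VPC Service Name shown in your endpoint overview and your own VPC, subnet, and security group IDs.

```bash
# Create the VPC Interface Endpoint pointing at the Inference Endpoints service
# (all IDs and the service name below are placeholders)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```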

### 3. Ready to Connect

After the VPC Endpoint status changes from pending to available, you should see an Endpoint URL in the overview. This URL can now
be used inside your VPC to access your endpoint in a secure and protected way, ensuring traffic is only occurring between the two
endpoints and will never leave AWS.

![endpoint running](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/6_6_running_endpoint.png)

## Shared Private Services 

If you have enabled the PrivateLink sharing option, you can now create additional endpoints that share the same VPC Endpoint. This
allows you to connect multiple endpoints to the same VPC Endpoint.

![shared private link](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/6_7_private_service_tooltip.png)



<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/private_link.md" />

### Autoscaling
https://huggingface.co/docs/inference-endpoints/guides/autoscaling.md

# Autoscaling

Autoscaling allows you to dynamically adjust the number of endpoint replicas running your models based on traffic and hardware
utilization. By leveraging autoscaling, you can seamlessly handle varying workloads while optimizing costs and ensuring high availability.

You can find the autoscaling settings for your endpoints under the "Settings" tab on the Inference Endpoint card.

![settings](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/autoscaling/settings.png)

<Tip>
In the Analytics section of the guide, you can read more about how to track all the metrics mentioned in this documentation.
</Tip>


## Scale to Zero

Scaling to zero means that your Inference Endpoint will go idle after a given duration (1 hour by default) of inactivity. This is typically
very useful when you want to optimize for low costs or when your workloads are intermittent. 

Scaling to zero replicas helps optimize cost savings by minimizing resource usage during periods of inactivity. However, it's important to
be aware that scaling to zero implies a cold start period when the endpoint receives a new request. Additionally, the proxy will
respond with the status code `503` while the new replica is initializing. To potentially avoid this, you can also add the
'X-Scale-Up-Timeout' header to your requests. This means that while the endpoint is scaling, the proxy will hold the request until a replica
is ready, or time out after the specified number of seconds. For example, 'X-Scale-Up-Timeout: 600' would wait for up to 600 seconds.
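
A request carrying this header might look like the following sketch. It assumes a chat-completions-style endpoint; the URL, token, and payload are placeholders.

```bash
curl "https://<endpoint-name>.endpoints.huggingface.cloud/v1/chat/completions" \
  -X POST \
  -H "Authorization: Bearer <HF_TOKEN>" \
  -H "Content-Type: application/json" \
  -H "X-Scale-Up-Timeout: 600" \
  -d '{"model": "<endpoint-name>", "messages": [{"role": "user", "content": "Hello!"}]}'
```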

<Tip>
Note that scaling up can take a few minutes depending on the model, which means that scaling from 0 to 1 based on a request is typically not recommended if your
application needs to be responsive. 
</Tip>

## Number of replicas

With this setting, you can change the minimum and maximum number of replicas, which means you control the floor and the ceiling of your costs.
Typically, you'd set the minimum so that at the lowest amount of traffic you're still serving your users at an acceptable rate,
and the maximum so that you stay within budget while still being able to serve your users at the highest points of traffic.

<Tip>
Note that if scale to zero is enabled, the minimum number of replicas needs to be 0.
</Tip>

## Autoscaling Strategy

For the autoscaling system to work well, there needs to be a signal that tells it when to scale up and down. For this, we have two strategies.

### Scaling based on hardware utilization

The autoscaling process is triggered based on the hardware utilization metrics. The criteria for scaling differ depending on the
type of accelerator being used:

- **CPU**: A new replica is added when the average CPU utilization of all replicas reaches the threshold value (default 80%).
- **GPU**: A new replica is added when the average GPU utilization of all replicas over a 1-minute window reaches the threshold value (default 80%).

It's important to note that the scaling-up process takes place every minute, while scaling down takes place every 2 minutes. This
frequency ensures a balance between responsiveness and stability of the autoscaling system, with a stabilization window of 300 seconds
once scaled down.

You can also track the hardware utilization metrics in the Analytics tab, or read more about it [here](./analytics#hardwareutilisation).

### Scaling based on pending requests

In some cases, hardware utilization is not a 'fast' enough metric, because hardware metrics always lag slightly behind
the actual requests. A more leading indicator is the number of pending requests.

- **Pending requests** are requests that have not yet received an HTTP status, meaning they include in-flight requests and requests currently being processed.
- **By default**, if there are more than 1.5 pending requests per replica in the past 20 seconds, it triggers an autoscaling event and adds a replica to your deployment.
You can adjust this threshold value to meet your specific requirements under Endpoint settings.

Similarly to the hardware metrics, you can track the pending requests in the Analytics tab, or read more about it [here](./analytics#pendingrequests).


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/autoscaling.md" />

### FAQs
https://huggingface.co/docs/inference-endpoints/support/faq.md

# FAQs 

## General questions

### In which regions can I deploy an Inference Endpoint?
Inference Endpoints are currently available on AWS in us-east-1 (N. Virginia) & eu-west-1 (Ireland), on Azure in eastus (Virginia), and on
GCP in us-east4 (Virginia). If you need to deploy in a different region, please let us know.

### Can I access the instance my Endpoint is running on?
No, you cannot access the instance hosting your Endpoint. But if you are missing information or need more insights on the machine where
the Endpoint is running, please contact us. 

### What's the difference between Inference Providers and Inference Endpoints? 
[Inference Providers](https://huggingface.co/docs/inference-providers/index) is a solution to easily explore and evaluate models. It's a
single, consistent inference API giving access to Hugging Face partners that host a wide selection of AI models. Inference Endpoints is a
service for you to deploy your models on managed infrastructure.

### How much does it cost to run my Endpoint?
Dedicated Endpoints are billed based on the compute hours of your Running Endpoints, and the associated instance types. We may add usage
costs for load balancers and Private Links in the future. 

### How do I monitor my deployed Endpoint?
You can currently monitor your Endpoint through the [Inference Endpoints web application](https://endpoints.huggingface.co/endpoints),
where you have access to the [Logs of your Endpoints](/docs/inference-endpoints/guides/logs) as well as a
[metrics dashboard](/docs/inference-endpoints/guides/analytics). 

## Security

### Is the data transiting to the Endpoint encrypted?
Yes, data is encrypted during transit with TLS/SSL.

### I accidentally leaked my token. Do I need to delete my endpoint?
You can invalidate existing personal tokens and create new ones in your settings here: [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
Please use fine-grained tokens when possible!

### Can I see my Private Endpoint running on my VPC account?
No, when creating a Private Endpoint (a Hugging Face Inference Endpoint linked to your VPC via AWS PrivateLink), you can only see the
ENI in your VPC where the Endpoint is available. 

## Configuration

### How can I scale my deployment?
The Endpoints are scaled automatically for you. You can set a minimum and maximum number of replicas, and the system will scale them up and down
depending on the scaling strategy you configured. We recommend reading the [autoscaling section](./guides/autoscaling) for more information.

### Will my endpoint still be running if no more requests are processed?
Unless you enabled scale-to-zero, your Inference Endpoint will always stay up with the minimum number of replicas defined in the autoscaling
configuration.

### I would like to deploy a model which is not in the supported tasks, is this possible?
Yes, you can deploy any repository from the [Hugging Face Hub](https://huggingface.co/models), even if your task/model/framework is not
supported out of the box. For this we recommend setting up a [custom container](./engines/custom_container).

### What if I would like to deploy to a different instance type that is not listed?
Please contact us if you feel your model would do better on a different instance type than what is listed.

### I need to add a custom environment variable (default or secrets) to my endpoint. How can I do this?
This is now possible in the UI, or via the API:
```json
{
  "model": {
    "image": {
      "huggingface": {
        "env": { "var1": "value" }
      }
    }
  }
}
```

## Inference Engines

### Can I run inference in batches?
In most cases yes, but it depends on the Inference Engine. In practice, all high-performance Inference Engines like vLLM, TGI, llama.cpp, SGLang
and TEI support batching, whereas the Inference Toolkit might not. Each Inference Engine also has configuration options to adjust batch sizes; we recommend
reading its documentation to understand how best to tune the configuration to meet your needs.

### I'm using a specific Inference Engine type for my Endpoint. Is there more information about how to use it? 
Yes! Please check the Inference Engines section and also check out the engine's own documentation.

## Debugging

### I can see from the logs that my endpoint is running but the status is stuck at "initializing"
This usually means that the port mapping is incorrect. Ensure your app is listening on port 80 and that the Docker container is exposing
port 80 externally. If you're deploying a custom container you can change these values, but make sure to keep them aligned.

### I'm getting a 500 response at the beginning of my endpoint deployment or when scaling is happening
Confirm that you have a health route implemented in your app that returns a status code 200 when your application is ready to serve
requests. Otherwise your app is considered ready as soon as the container has started, potentially resulting in 500s. You can configure
the health route in the Container Configuration of your Endpoint. 

You can also add the 'X-Scale-Up-Timeout' header to your requests. This means that while the endpoint is scaling, the proxy will hold
requests until a replica is ready, or time out after the specified number of seconds. For example 'X-Scale-Up-Timeout: 600'.
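
As a concrete (hypothetical) sketch of both points, a minimal app for a custom container could listen on port 80 and expose a health route like this; the route names and inference logic are placeholders, not the platform's required layout:

```python
# app.py: hypothetical minimal custom-container app (FastAPI)
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    # Return 200 only once the app is actually ready to serve requests
    return {"status": "ok"}

@app.post("/predict")
def predict(payload: dict):
    # Replace with your actual inference logic
    return {"output": payload}

# Run inside the container with: uvicorn app:app --host 0.0.0.0 --port 80
```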

### I see there's an option to select a Download Pattern under Instance Configuration. What does this mean? 
You have an option to choose the download pattern of the model's files when deploying an Endpoint, to help with limiting the volume of
downloaded files. If a selected download pattern is not possible or compatible with the model, the system will not allow a change to the
pattern.

### I'm sometimes running into a 503 error on a running endpoint in production. What can I do? 
To help mitigate service interruptions on an Inference Endpoint that needs to be highly available, please make sure to use at least 2 replicas,
i.e. set min replicas to 2.



<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/support/faq.md" />

### Pricing
https://huggingface.co/docs/inference-endpoints/support/pricing.md

# Pricing

When you create an Endpoint, you can select the instance type to deploy and scale your model according to an hourly rate.
Inference Endpoints is accessible to Hugging Face accounts with an active subscription and credit card on file. At
the end of the billing period, the user or organization account will be charged for the compute resources used while
successfully deployed Endpoints (ready to serve) are *initializing* or in a *running* state.

Below, you can find the hourly pricing for all available instances and accelerators, as well as examples of how costs are calculated.
Note that while the prices are shown by the hour, the actual cost is billed per minute.

## CPU Instances

The table below shows currently available CPU instances and their hourly pricing. If the instance type cannot be selected in the application, you need to [request quota](https://endpoints.huggingface.co/contact) to use it.

| Provider | Instance Type | Instance Size | Hourly rate | vCPUs | Memory | Architecture                                |
| -------- | ------------- | ------------- | ----------- | ----- | ------ | ------------------------------------------- |
| aws      | intel-spr     | x1            | $0.033      | 1     | 2 GB   | Intel Sapphire Rapids                       |
| aws      | intel-spr     | x2            | $0.067      | 2     | 4 GB   | Intel Sapphire Rapids                       |
| aws      | intel-spr     | x4            | $0.134      | 4     | 8 GB   | Intel Sapphire Rapids                       |
| aws      | intel-spr     | x8            | $0.268      | 8     | 16 GB  | Intel Sapphire Rapids                       |
| aws      | intel-spr     | x16           | $0.536      | 16    | 32 GB  | Intel Sapphire Rapids                       |
| azure    | intel-xeon    | x1            | $0.060      | 1     | 2 GB   | Intel Xeon                                  |
| azure    | intel-xeon    | x2            | $0.120      | 2     | 4 GB   | Intel Xeon                                  |
| azure    | intel-xeon    | x4            | $0.240      | 4     | 8 GB   | Intel Xeon                                  |
| azure    | intel-xeon    | x8            | $0.480      | 8     | 16 GB  | Intel Xeon                                  |
| gcp      | intel-spr     | x1            | $0.050      | 1     | 2 GB   | Intel Sapphire Rapids                       |
| gcp      | intel-spr     | x2            | $0.100      | 2     | 4 GB   | Intel Sapphire Rapids                       |
| gcp      | intel-spr     | x4            | $0.200      | 4     | 8 GB   | Intel Sapphire Rapids                       |
| gcp      | intel-spr     | x8            | $0.400      | 8     | 16 GB  | Intel Sapphire Rapids                       |
| *aws*      | *intel-icl*     | *x1*            | *$0.032*      | *1*     | *2 GB*   | *Intel Ice Lake - Deprecated from July 2025*|
| *aws*      | *intel-icl*     | *x2*            | *$0.064*      | *2*     | *4 GB*   | *Intel Ice Lake - Deprecated from July 2025*|
| *aws*      | *intel-icl*     | *x4*            | *$0.128*      | *4*     | *8 GB*   | *Intel Ice Lake - Deprecated from July 2025*|
| *aws*      | *intel-icl*     | *x8*            | *$0.256*      | *8*     | *16 GB*  | *Intel Ice Lake - Deprecated from July 2025*| 


## GPU Instances

The table below shows currently available GPU instances and their hourly pricing. If the instance type cannot be selected in the application, you need to [request quota](https://endpoints.huggingface.co/contact) to use it.

| Provider | Instance Type | Instance Size | Hourly rate | GPUs | Memory | Architecture |
| -------- | ------------- | ------------- |------------ | ---- | ------ | ------------ |
| aws      | nvidia-t4     | x1            | $0.5        | 1    | 14 GB  | NVIDIA T4    |
| aws      | nvidia-t4     | x4            | $3          | 4    | 56 GB  | NVIDIA T4    |
| aws      | nvidia-l4     | x1            | $0.8        | 1    | 24 GB  | NVIDIA L4    |
| aws      | nvidia-l4     | x4            | $3.8        | 4    | 96 GB  | NVIDIA L4    |
| aws      | nvidia-a10g   | x1            | $1          | 1    | 24 GB  | NVIDIA A10G  |
| aws      | nvidia-a10g   | x4            | $5          | 4    | 96 GB  | NVIDIA A10G  |
| aws      | nvidia-l40s   | x1            | $1.8        | 1    | 48 GB  | NVIDIA L40S  |
| aws      | nvidia-l40s   | x4            | $8.3        | 4    | 192 GB | NVIDIA L40S  |
| aws      | nvidia-l40s   | x8            | $23.5       | 8    | 384 GB | NVIDIA L40S  |
| aws      | nvidia-a100   | x1            | $2.5        | 1    | 80 GB  | NVIDIA A100  |
| aws      | nvidia-a100   | x2            | $5          | 2    | 160 GB | NVIDIA A100  |
| aws      | nvidia-a100   | x4            | $10         | 4    | 320 GB | NVIDIA A100  |
| aws      | nvidia-a100   | x8            | $20         | 8    | 640 GB | NVIDIA A100  |
| aws      | nvidia-h100   | x1            | $4.5        | 1    | 80 GB  | NVIDIA H100  |
| aws      | nvidia-h100   | x2            | $9          | 2    | 160 GB | NVIDIA H100  |
| aws      | nvidia-h100   | x4            | $18         | 4    | 320 GB | NVIDIA H100  |
| aws      | nvidia-h100   | x8            | $36         | 8    | 640 GB | NVIDIA H100  |
| aws      | nvidia-h200   | x1            | $5          | 1    | 141 GB | NVIDIA H200  |
| aws      | nvidia-h200   | x2            | $10         | 2    | 282 GB | NVIDIA H200  |
| aws      | nvidia-h200   | x4            | $20         | 4    | 564 GB | NVIDIA H200  |
| aws      | nvidia-h200   | x8            | $40         | 8    | 1128 GB| NVIDIA H200  |
| aws      | nvidia-b200   | x1            | $9.25       | 1    | 256 GB | NVIDIA B200  |
| aws      | nvidia-b200   | x2            | $18.5       | 2    | 512 GB | NVIDIA B200  |
| aws      | nvidia-b200   | x4            | $37         | 4    | 1024 GB| NVIDIA B200  |
| aws      | nvidia-b200   | x8            | $74         | 8    | 2048 GB| NVIDIA B200  |
| gcp      | nvidia-t4     | x1            | $0.5        | 1    | 16 GB  | NVIDIA T4    |
| gcp      | nvidia-l4     | x1            | $0.7        | 1    | 24 GB  | NVIDIA L4    |
| gcp      | nvidia-l4     | x4            | $3.8        | 4    | 96 GB  | NVIDIA L4    |
| gcp      | nvidia-a100   | x1            | $3.6        | 1    | 80 GB  | NVIDIA A100  |
| gcp      | nvidia-a100   | x2            | $7.2        | 2    | 160 GB | NVIDIA A100  |
| gcp      | nvidia-a100   | x4            | $14.4       | 4    | 320 GB | NVIDIA A100  |
| gcp      | nvidia-a100   | x8            | $28.8       | 8    | 640 GB | NVIDIA A100  |
| gcp      | nvidia-h100   | x1            | $10         | 1    | 80 GB  | NVIDIA H100  |
| gcp      | nvidia-h100   | x2            | $20         | 2    | 160 GB | NVIDIA H100  |
| gcp      | nvidia-h100   | x4            | $40         | 4    | 320 GB | NVIDIA H100  |
| gcp      | nvidia-h100   | x8            | $80         | 8    | 640 GB | NVIDIA H100  |

## INF2 Instances

The table below shows currently available INF2 instances and their hourly pricing. If the instance type cannot be selected in the application, you need to [request quota](https://endpoints.huggingface.co/contact) to use it.

| Provider | Instance Type | Instance Size | Hourly rate | Accelerators | Accelerator Memory | RAM     | Architecture     |
| -------- | ------------- | ------------- |------------ | ------------ | ------------------ | ------- | ---------------- |
| aws      | inf2          | x1            | $0.75       | 1            | 32 GB              | 14.5 GB | AWS Inferentia2  |
| aws      | inf2          | x12           | $12         | 12           | 384 GB             | 760 GB  | AWS Inferentia2  |

## Pricing examples

The following example pricing scenarios demonstrate how costs are calculated. You can find the hourly rate for all instance types and sizes in the tables above. Use the following formula to calculate the costs:

```
instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas))
```

### Basic Example

* AWS CPU intel-spr x2 (2x vCPUs 4GB RAM)
* Autoscaling (minimum 1 replica, maximum 1 replica)

**hourly cost**
```
instance hourly rate * (hours * # min replica) = hourly cost
$0.067/hr * (1hr * 1 replica) = $0.067/hr
```

**monthly cost**
```
instance hourly rate * (hours * # min replica) = monthly cost
$0.067/hr * (730hr * 1 replica) = $48.91/month
```

![basic-chart](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/basic-chart.png)

### Advanced Example

* AWS GPU nvidia-t4 x1 (1x GPU 14GB RAM)
* Autoscaling (minimum 1 replica, maximum 3 replicas); every hour a spike in traffic scales the Endpoint from 1 to 3 replicas for 15 minutes

**hourly cost**
```
instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas)) = hourly cost
$0.5/hr * ((1hr * 1 replica) + (0.25hr * 2 replicas)) = $0.75/hr
```

**monthly cost**
```
instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas)) = monthly cost
$0.5/hr * ((730hr * 1 replica) + (182.5hr * 2 replicas)) = $547.5/month
```

![advanced-chart](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/advanced-chart.png)
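
For quick what-if estimates, the formula above translates directly into a small helper. The sketch below just re-computes the two examples in this section in Python; the rates come from the tables above.

```python
def endpoint_cost(hourly_rate, hours, min_replicas, scale_up_hours=0.0, extra_replicas=0):
    """instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas))"""
    return hourly_rate * ((hours * min_replicas) + (scale_up_hours * extra_replicas))

# Basic example: intel-spr x2 at $0.067/hr, one replica for a full month (~730 hours)
print(endpoint_cost(0.067, hours=730, min_replicas=1))  # ~48.91

# Advanced example: nvidia-t4 x1 at $0.5/hr, one base replica plus two extra replicas
# for 15 minutes of every hour (182.5 hours over the month)
print(endpoint_cost(0.5, hours=730, min_replicas=1, scale_up_hours=182.5, extra_replicas=2))  # 547.5
```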


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/support/pricing.md" />

### Build an embedding pipeline with datasets
https://huggingface.co/docs/inference-endpoints/tutorials/embedding.md

# Build an embedding pipeline with datasets

This tutorial will guide you through deploying an embedding endpoint and building a Python script to efficiently process datasets with embeddings. We'll use the powerful [Qwen/Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) model to create high-quality embeddings for your data.

<Tip>

This tutorial focuses on creating a production-ready script that can process any dataset and add embeddings using the **Text Embeddings Inference (TEI)** engine for optimized performance.

</Tip>

## Create your embedding Endpoint

First, we need to create an Inference Endpoint optimized for embeddings.

Start by navigating to the Inference Endpoints UI, and once you have logged in you should see a button for creating a new Inference
Endpoint. Click the "New" button.

![new-button](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/1-new-button.png)

From there you'll be directed to the catalog. The Model Catalog consists of popular models which have tuned configurations to work as one-click
deploys. You can search for embedding models or create a custom endpoint.

![catalog](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/2-catalog.png)

For this tutorial, we'll use the Qwen3-Embedding-4B model available in the Inference Endpoints Model Catalog. Note if it's ever not in the catalog, you can deploy a model as a custom Endpoint from the Hugging Face Hub by entering the model repository ID `Qwen/Qwen3-Embedding-4B`.

For embedding models, we recommend:
- **GPU**: NVIDIA T4, L4, or A10G for good performance.
- **Instance Size**: x1 (sufficient for most embedding workloads)
- **Auto-scaling**: Enable scale-to-zero to save costs by scaling the endpoint down to zero replicas when it's not in use.
- **Timeout**: Set a timeout of 10 minutes to avoid long-running requests. You should define a timeout based on how you expect your endpoint to be used.

<Tip>

If you're looking for a model with less compute requirements, you can use the [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model.

</Tip>

The Qwen3-Embedding-4B model will automatically use the **Text Embeddings Inference (TEI)** engine, which provides optimized inference and automatic batching.

Click "Create Endpoint" to deploy your embedding service.

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/4-config.png)

This may take about 5 minutes to initialize.

## Test your Endpoint

Once your Inference Endpoint is running, you can test it directly in the playground. It accepts text input and returns high-dimensional vectors.

![playground](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/embedding-tutorial/assets/tutorials/embedding/playground.png)

Try entering some sample text like "Machine learning is transforming how we process data" and see the embedding output.

## Get your Endpoint's details

To use your endpoint programmatically, you'll need these details from the Endpoint's [Overview](https://endpoints.huggingface.co/):

- **Base URL**: `https://<endpoint-name>.endpoints.huggingface.cloud/v1/`
- **Model name**: The name of your endpoint
- **Token**: Your HF token from [settings](https://huggingface.co/settings/tokens)

![endpoint-details](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/embedding/endpoint-page.png)

## Building the embedding script

Now let's build a script step by step to process datasets with embeddings. We'll break it down into logical blocks.

### Step 1: Set up dependencies and imports

We'll use the OpenAI client to connect to the endpoint and the datasets library to load and process the dataset. So let's install the required packages:

```bash
pip install datasets openai
```

Then, set up your imports in a new Python file:

```python
import os
from datasets import load_dataset
from openai import OpenAI
```

### Step 2: Configure the connection

Set up the configuration to connect to your Inference Endpoint based on the details you collected in the previous step.

```python
# Configuration
ENDPOINT_URL = "https://your-endpoint-name.endpoints.huggingface.cloud/v1/" # Endpoint URL + version
HF_TOKEN = os.getenv("HF_TOKEN") # Your Hugging Face Hub token from hf.co/settings/tokens

# Initialize OpenAI client for your endpoint
client = OpenAI(
    base_url=ENDPOINT_URL,
    api_key=HF_TOKEN,
)
```

Your OpenAI client is now configured to connect to your Inference Endpoint. For further reading you can check out the client documentation on text embeddings <a href="https://platform.openai.com/docs/api-reference/embeddings" target="_blank" rel="noopener noreferrer">here</a>.

### Step 3: Create the embedding function

Next, we'll create a function to process batches of text and return embeddings. 

```python
def get_embeddings(examples):
    """Get embeddings for a batch of texts."""
    response = client.embeddings.create(
        model="your-endpoint-name",  # Replace with your actual endpoint name
        input=examples["context"], # In the squad dataset, the text is in the "context" column
    )
    
    # Extract embeddings from response objects
    embeddings = [sample.embedding for sample in response.data]
    
    return {"embeddings": embeddings} # datasets expects a dictionary with a key "embeddings" and a value of a list of embeddings
```

<Tip>

The `datasets` library will pass our function a batch of examples from the dataset, as a dictionary of batch values. The key will be the name of the column we want to embed, and the value will be a list of values from that column.

</Tip>

### Step 4: Load and process your dataset

Load your dataset and apply the embedding function:

```python
# Load a sample dataset (you can replace this with your own)
dataset = load_dataset("squad", split="train[:100]")  # Using first 100 examples for demo

# Process the dataset with embeddings
dataset_with_embeddings = dataset.map(
    get_embeddings,
    batched=True,
    batch_size=10,  # Process in small batches to avoid timeouts
    desc="Adding embeddings",
)
```

<Tip>

The `datasets` library's `map` function is optimized for performance and will automatically batch the rows for us. Inference Endpoints can also scale to meet the demand of the batch size, so to get the best performance, you should calibrate the batch size with your Inference Endpoint's configuration.

For example, select the highest possible batch size for your model and synchronize the batch size with your Inference Endpoint's configuration in `max_concurrent_requests`.

</Tip>

### Step 5: Save and Share your results

Finally, let's save our embedded dataset locally or push it to the Hugging Face Hub:

```python
# Save the processed dataset locally
dataset_with_embeddings.save_to_disk("./embedded_dataset")

# Or push directly to Hugging Face Hub
dataset_with_embeddings.push_to_hub("your-username/squad-embeddings")
```

## Next steps

Nice work! You've now built an embedding pipeline that can process any dataset. Here's the complete script:

<details>
<summary>Click to view the complete script</summary>

```python
import os
from datasets import load_dataset
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# Configuration
ENDPOINT_URL = "https://your-endpoint-name.endpoints.huggingface.cloud/v1/"
HF_TOKEN = os.getenv("HF_TOKEN")

# Initialize OpenAI client for your endpoint
client = OpenAI(
    base_url=ENDPOINT_URL,
    api_key=HF_TOKEN,
)

def get_embeddings(examples):
    """Get embeddings for a batch of texts."""
    response = client.embeddings.create(
        model="your-endpoint-name",  # Replace with your actual endpoint name
        input=examples["context"],
    )
    
    # Extract embeddings from response
    embeddings = [sample.embedding for sample in response.data]
    
    return {"embeddings": embeddings}

# Load a sample dataset (you can replace this with your own)
print("Loading dataset...")
dataset = load_dataset("squad", split="train[:1000]")  # Using first 1000 examples for demo

# Process the dataset with embeddings
print("Processing dataset with embeddings...")
dataset_with_embeddings = dataset.map(
    get_embeddings,
    batched=True,
    batch_size=10,  # Process in small batches to avoid timeouts
    desc="Adding embeddings",
)

# Save the processed dataset locally
print("Saving processed dataset...")
dataset_with_embeddings.save_to_disk("./embedded_dataset")

# Or push directly to Hugging Face Hub
print("Pushing to Hugging Face Hub...")
dataset_with_embeddings.push_to_hub("your-username/squad-embeddings")

print("Dataset processing complete!")
```

</details>

Here are some ways to extend your script:

- **Process multiple datasets**: Modify the script to handle different dataset sources
- **Add error handling**: Implement retry logic for failed API calls
- **Optimize batch sizes**: Experiment with different batch sizes for better performance
- **Add validation**: Check embedding quality and dimensions
- **Custom preprocessing**: Add text cleaning or normalization steps
- **Build a Semantic Search Application**: Use the embeddings to build a semantic search application.

Your embedded datasets are now ready for downstream tasks like semantic search, recommendation systems, or RAG applications!


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/tutorials/embedding.md" />

### Create your own transcription app
https://huggingface.co/docs/inference-endpoints/tutorials/transcription.md

# Create your own transcription app

This tutorial will guide you through building a complete transcription application using Hugging Face Inference Endpoints. We'll create an app that can transcribe audio files and generate intelligent summaries with action items - perfect for meeting notes, interviews, or any audio content.

<Tip>

This tutorial uses Python and Gradio, but you can adapt the approach to any language that can make HTTP requests. The models deployed on Inference Endpoints use standard APIs, so you can integrate them into web applications, mobile apps, or any other system.

</Tip>

## Create your transcription endpoint

First, we need to create an Inference Endpoint for audio transcription. We'll use OpenAI's Whisper model for high-quality speech recognition.

Start by navigating to the Inference Endpoints UI, and once you have logged in you should see a button for creating a new Inference Endpoint. Click the "New" button.

![new-button](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/1-new-button.png)

From there you'll be directed to the catalog. The Model Catalog consists of popular models which have tuned configurations to work as one-click deploys. You can filter by name, task, price of the hardware and much more.

![catalog](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/2-catalog.png)

Search for "whisper" to find transcription models, or you can create a custom endpoint with [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3). This model provides excellent transcription quality for multiple languages and handles various audio formats.

For transcription models, we recommend:
- **GPU**: NVIDIA L4 or A10G for good performance with audio processing
- **Instance Size**: x1 (sufficient for most transcription workloads)
- **Auto-scaling**: Enable scale-to-zero to save costs when not in use

Click "Create Endpoint" to deploy your transcription service.

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/transcriptions/config.png)

Your endpoint will take about 5 minutes to initialize. Once it's ready, you'll see it in the "Running" state.

## Create your text generation endpoint

Now let's do the same again, but for a text generation model. For generating summaries and action items, we'll create a second endpoint using the [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) model.

Follow the same process:
1. Click "New" button in the Inference Endpoints UI
2. Search for `qwen3 1.7b` in the catalog
3. The NVIDIA L4 with x1 instance size is recommended for this model
4. Keep the default settings (scale-to-zero enabled, 1-hour timeout)
5. Click "Create Endpoint"

This model is optimized for text generation tasks and will provide excellent summarization capabilities. Both endpoints will take about 3-5 minutes to initialize.

## Test your endpoints

Once your endpoints are running, you can test them in the playground. The transcription endpoint will accept audio files and return text transcripts.

![playground](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/transcriptions/playground.png)

Test with a short audio sample to verify the transcription quality.

## Get your endpoint details

You'll need the endpoint details from your [endpoints page](https://endpoints.huggingface.co/):

- **Base URL**: `https://<endpoint-name>.endpoints.huggingface.cloud/v1/`
- **Model name**: The name of your endpoint
- **Token**: Your HF token from [settings](https://huggingface.co/settings/tokens)

![endpoint-details](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/endpoint-page.png)

You can validate your details by testing your endpoint out in the command line with curl.

```sh
curl "<endpoint-url>" \
-X POST \
--data-binary '@<audio-file>' \
-H "Accept: application/json" \
-H "Content-Type: audio/flac" \
```

## Building the transcription app

Now let's build a transcription application step by step. We'll break it down into logical blocks to create a complete solution that can transcribe audio and generate intelligent summaries.

### Step 1: Set up dependencies and imports

We'll use the `requests` library to connect to both endpoints and `gradio` to create the interface. Let's install the required packages:

```bash
pip install gradio requests
```

Then, set up your imports in a new Python file:

```python
import os

import gradio as gr
import requests
```

### Step 2: Configure your endpoint connections

Set up the configuration to connect to both your transcription and summarization endpoints based on the details you collected in the previous steps.

```python
# Configuration for both endpoints
TRANSCRIPTION_ENDPOINT = "https://your-whisper-endpoint.endpoints.huggingface.cloud/api/v1/audio/transcriptions"
SUMMARIZATION_ENDPOINT = "https://your-qwen-endpoint.endpoints.huggingface.cloud/v1/chat/completions"
HF_TOKEN = os.getenv("HF_TOKEN")  # Your Hugging Face Hub token

# Headers for authentication
headers = {
    "Authorization": f"Bearer {HF_TOKEN}"
}
```

Your endpoints are now configured to handle both audio transcription and text summarization.

<Tip>

You might also want to use `os.getenv` for your endpoint details.

</Tip>
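
For example, here's a minimal sketch that reads everything from environment variables instead of hardcoding the URLs (the variable names here are just a suggestion, not something the service requires):

```python
import os

# Suggested (not required) environment variable names for the endpoint details
TRANSCRIPTION_ENDPOINT = os.getenv("TRANSCRIPTION_ENDPOINT")
SUMMARIZATION_ENDPOINT = os.getenv("SUMMARIZATION_ENDPOINT")
HF_TOKEN = os.getenv("HF_TOKEN")

headers = {"Authorization": f"Bearer {HF_TOKEN}"}
```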


### Step 3: Create the transcription function

Next, we'll create a function to handle audio file uploads and transcription:

```python
def transcribe_audio(audio_file_path):
    """Transcribe audio using direct requests to the endpoint"""
    
    # Read audio file and prepare for upload
    with open(audio_file_path, "rb") as audio_file:
        # Read the audio file contents as bytes and pass them as the "file" form field
        files = {"file": audio_file.read()}
    
    # Make the request to the transcription endpoint
    response = requests.post(TRANSCRIPTION_ENDPOINT, headers=headers, files=files)
    
    # Check if the request was successful
    if response.status_code == 200:
        result = response.json()
        return result.get("text", "No transcription available")
    else:
        return f"Error: {response.status_code} - {response.text}"
```

<Tip>

The transcription endpoint expects a file upload in the `files` parameter. Make sure to read the audio file as binary data and pass it correctly to the API.

</Tip>
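
If your endpoint turns out to care about the uploaded filename or MIME type, `requests` also accepts a `(filename, fileobj, content_type)` tuple in place of raw bytes. This is an optional variation on the function above, reusing the same `TRANSCRIPTION_ENDPOINT` and `headers` from Step 2:

```python
# Optional variation: attach a filename and content type to the upload
with open(audio_file_path, "rb") as audio_file:
    files = {"file": ("meeting.flac", audio_file, "audio/flac")}
    response = requests.post(TRANSCRIPTION_ENDPOINT, headers=headers, files=files)
```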

### Step 4: Create the summarization function

Now we'll create a function to generate summaries from the transcribed text. We'll do some simple prompt engineering to get the best results.

```python
def generate_summary(transcript):
    """Generate summary using requests to the chat completions endpoint"""
    
    # define a nice prompt to get the best results for our use case
    prompt = f"""
    Analyze this meeting transcript and provide:
    1. A concise summary of key points
    2. Action items with responsible parties
    3. Important decisions made
    
    Transcript: {transcript}
    
    Format with clear sections:
    ## Summary
    ## Action Items  
    ## Decisions Made
    """
    
    # Prepare the payload using the Messages API format
    payload = {
        "model": "your-qwen-endpoint-name",  # Use the name of your endpoint
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1000, # we can also set a max_tokens parameter to limit the length of the response
        "temperature": 0.7, # we might want to set lower temperature for more deterministic results
        "stream": False # we don't need streaming for this use case
    }
    
    # Headers for chat completions
    chat_headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {HF_TOKEN}"
    }
    
    # Make the request
    response = requests.post(SUMMARIZATION_ENDPOINT, headers=chat_headers, json=payload)
    response.raise_for_status()
    
    # Parse the response
    result = response.json()
    return result["choices"][0]["message"]["content"]
```

### Step 5: Wrap it all together

Now let's build our Gradio interface. We'll use the `gr.Interface` class to create a simple interface that allows us to upload an audio file and see the transcript and summary.

First, we'll create a main processing function that handles the complete workflow.

```python
def process_meeting_audio(audio_file):
    """Main processing function that handles the complete workflow"""
    if audio_file is None:
        return "Please upload an audio file.", ""
    
    try:
        # Step 1: Transcribe the audio
        transcript = transcribe_audio(audio_file)
        
        # Step 2: Generate summary from transcript
        summary = generate_summary(transcript)
        
        return transcript, summary
    
    except Exception as e:
        return f"Error processing audio: {str(e)}", ""
```

Then, we can run that function in a Gradio interface. We'll add some descriptions and a title to make it more user-friendly.

```python
# Create Gradio interface
app = gr.Interface(
    fn=process_meeting_audio,
    inputs=gr.Audio(label="Upload Meeting Audio", type="filepath"),
    outputs=[
        gr.Textbox(label="Full Transcript", lines=10),
        gr.Textbox(label="Meeting Summary", lines=8),
    ],
    title="🎤 AI Meeting Notes",
    description="Upload audio to get instant transcripts and summaries.",
)
```

That's it! You can now run the app locally with `python app.py` and test it out.
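
Note that the step-by-step code above creates the interface but never launches it, so add a launch call at the bottom of your file (the complete script below does this). Optionally, Gradio's `share` flag gives you a temporary public URL, which is handy for quick testing before you deploy:

```python
if __name__ == "__main__":
    app.launch(share=True)  # share=True is optional; it creates a temporary public link
```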

<details>
<summary>Click to view the complete script</summary>

```python
import gradio as gr
import os
import requests

# Configuration for both endpoints
TRANSCRIPTION_ENDPOINT = "https://your-whisper-endpoint.endpoints.huggingface.cloud/api/v1/audio/transcriptions"
SUMMARIZATION_ENDPOINT = "https://your-qwen-endpoint.endpoints.huggingface.cloud/v1/chat/completions"
HF_TOKEN = os.getenv("HF_TOKEN")  # Your Hugging Face Hub token

# Headers for authentication
headers = {
    "Authorization": f"Bearer {HF_TOKEN}"
}

def transcribe_audio(audio_file_path):
    """Transcribe audio using direct requests to the endpoint"""
    
    # Read audio file and prepare for upload
    with open(audio_file_path, "rb") as audio_file:
        files = {"file": audio_file.read()}
    
    # Make the request to the transcription endpoint
    response = requests.post(TRANSCRIPTION_ENDPOINT, headers=headers, files=files)
    
    if response.status_code == 200:
        result = response.json()
        return result.get("text", "No transcription available")
    else:
        return f"Error: {response.status_code} - {response.text}"


def generate_summary(transcript):
    """Generate summary using requests to the chat completions endpoint"""
    
    prompt = f"""
    Analyze this meeting transcript and provide:
    1. A concise summary of key points
    2. Action items with responsible parties
    3. Important decisions made
    
    Transcript: {transcript}
    
    Format with clear sections:
    ## Summary
    ## Action Items  
    ## Decisions Made
    """
    
    # Prepare the payload using the Messages API format
    payload = {
        "model": "your-qwen-endpoint-name",  # Use the name of your endpoint
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1000,
        "temperature": 0.7,
        "stream": False
    }
    
    # Headers for chat completions
    chat_headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {HF_TOKEN}"
    }
    
    # Make the request
    response = requests.post(SUMMARIZATION_ENDPOINT, headers=chat_headers, json=payload)
    response.raise_for_status()
    
    # Parse the response
    result = response.json()
    return result["choices"][0]["message"]["content"]


def process_meeting_audio(audio_file):
    """Main processing function that handles the complete workflow"""
    if audio_file is None:
        return "Please upload an audio file.", ""
    
    try:
        # Step 1: Transcribe the audio
        transcript = transcribe_audio(audio_file)
        
        # Step 2: Generate summary from transcript
        summary = generate_summary(transcript)
        
        return transcript, summary
    
    except Exception as e:
        return f"Error processing audio: {str(e)}", ""


# Create Gradio interface
app = gr.Interface(
    fn=process_meeting_audio,
    inputs=gr.Audio(label="Upload Meeting Audio", type="filepath"),
    outputs=[
        gr.Textbox(label="Full Transcript", lines=10),
        gr.Textbox(label="Meeting Summary", lines=8),
    ],
    title="🎤 AI Meeting Notes",
    description="Upload audio to get instant transcripts and summaries.",
)

if __name__ == "__main__":
    app.launch()
```

</details>

![app](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/transcriptions/app.png)

## Deploy your transcription app

Now, let's deploy it to Hugging Face Spaces so everyone can use it!

1. **Create a new Space**: Go to [huggingface.co/new-space](https://huggingface.co/new-space)
2. **Choose Gradio SDK** and make it public
3. **Upload your files**: Upload `app.py` and any requirements
4. **Add your token**: In Space settings, add `HF_TOKEN` as a secret
5. **Configure hardware**: Consider GPU for faster processing
6. **Launch**: Your app will be live at `https://huggingface.co/spaces/your-username/your-space-name`

Your transcription app is now ready to handle meeting notes, interviews, podcasts, and any other audio content that needs to be transcribed and summarized!

## Next steps

Great work! You've now built a complete transcription application with intelligent summarization.

Here are some ways to extend your transcription app:

- **Multi-language support**: Add language detection and support for multiple languages
- **Speaker identification**: Use a model from the Hub with speaker diarization capabilities
- **Custom prompts**: Allow users to customize the summary format and style (see the sketch after this list)
- **Text-to-Speech**: Use a model from the Hub to convert your summary back into audio
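
As an illustration of the custom-prompts idea, here's a minimal sketch that adds an instructions textbox to the interface. It assumes you extend `generate_summary` with an extra parameter that gets folded into the prompt; that change isn't shown here:

```python
def process_meeting_audio(audio_file, custom_instructions):
    """Same workflow as before, but lets the user steer the summary."""
    if audio_file is None:
        return "Please upload an audio file.", ""
    transcript = transcribe_audio(audio_file)
    # Assumes generate_summary(transcript, instructions=...) exists in your version
    summary = generate_summary(transcript, instructions=custom_instructions)
    return transcript, summary


app = gr.Interface(
    fn=process_meeting_audio,
    inputs=[
        gr.Audio(label="Upload Meeting Audio", type="filepath"),
        gr.Textbox(label="Summary instructions", value="Focus on decisions and action items"),
    ],
    outputs=[
        gr.Textbox(label="Full Transcript", lines=10),
        gr.Textbox(label="Meeting Summary", lines=8),
    ],
    title="🎤 AI Meeting Notes",
)
```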



<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/tutorials/transcription.md" />

### Build and deploy your own chat application
https://huggingface.co/docs/inference-endpoints/tutorials/chat_bot.md

# Build and deploy your own chat application

This tutorial will guide you from end to end on how to deploy your own chat application using Hugging Face Inference Endpoints. We will use Gradio to create a chat interface and an OpenAI client to connect to the Inference Endpoint.

<Tip>

This Tutorial uses Python, but your client can be any language that can make HTTP requests. The model and engine you deploy on Inference Endpoints uses the **OpenAI Chat Completions format**, so you can use any [OpenAI client](https://platform.openai.com/docs/libraries) to connect to them, in languages like JavaScript, Java, and Go.

</Tip>

## Create your Inference Endpoint

First, we need to create an Inference Endpoint for a model that can chat. 

Start by navigating to the Inference Endpoints UI, and once you have logged in you should see a button for creating a new Inference
Endpoint. Click the "New" button.

![new-button](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/1-new-button.png)

From there you'll be directed to the catalog. The Model Catalog consists of popular models which have tuned configurations to work as one-click
deploys. You can filter by name, task, price of the hardware and much more.

![catalog](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/quick_start/2-catalog.png)

In this example let's deploy the [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) model. You can find
it by searching for `qwen3 1.7b` in the search field and deploy it by clicking the card.

![qwen](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/qwen-search.png)

Next we'll choose the hardware and deployment settings. Since this is a catalog model, the pre-selected options are sensible defaults, so we don't
need to change anything. If you want a deeper dive into what the different settings mean, check out the [configuration guide](./guides/configuration).

For this model the Nvidia L4 is the recommended choice: it's performant yet reasonably priced, which makes it perfect for our testing. Also note that by
default the endpoint will scale down to zero, meaning it becomes idle after 1 hour of inactivity.

Now all you need to do is click "Create Endpoint" 🚀

![config](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/config.png)

Our Inference Endpoint is now initializing, which usually takes about 3-5 minutes. If you want, you can allow browser notifications, which will ping you
once the endpoint reaches a running state.

![init](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/init.png)

## Test your Inference Endpoint in the browser

Now that we've created our Inference Endpoint, we can test it in the playground section.

![playground](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/playground.png)

You can use the model through a chat interface or copy code snippets to use it in your own application. 

## Get your Inference Endpoint details

We need to grab details of our Inference Endpoint, which we can find in the Endpoint's [Overview](https://endpoints.huggingface.co/). We will need the following details:

- The base URL of the endpoint plus the version of the OpenAI API (e.g. `https://<id>.<region>.<cloud>.endpoints.huggingface.cloud/v1/`)
- The name of the endpoint to use (e.g. `qwen3-1-7b-xll`)
- The token to use for authentication (e.g. `hf_<token>`)

![endpoint-details](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/endpoint-page.png)

You can find your token in your [account settings](https://huggingface.co/settings/tokens), which are accessible from the top dropdown by clicking on your account name.
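
Before building the app, it can be worth sanity-checking these details with a few lines of Python. Here's a quick sketch using the OpenAI client, which we'll also use later in this tutorial; the URL and endpoint name below are placeholders you should swap for your own:

```python
import os
from openai import OpenAI

# Placeholders: use the base URL and endpoint name from your Overview page
client = OpenAI(
    base_url="https://<id>.<region>.<cloud>.endpoints.huggingface.cloud/v1/",
    api_key=os.getenv("HF_TOKEN"),
)

completion = client.chat.completions.create(
    model="qwen3-1-7b-xll",  # your endpoint name
    messages=[{"role": "user", "content": "Say hello!"}],
    max_tokens=20,
)
print(completion.choices[0].message.content)
```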

## Deploy in a few lines of code

The easiest way to deploy a chat application with [Gradio](https://gradio.app/) is to use the convenient `load_chat` method. This abstracts everything away and you can have a working chat application quickly.

```python
import os

import gradio as gr

gr.load_chat(
    base_url="<endpoint-url>/v1/", # Replace with your endpoint URL + version
    model="endpoint-name", # Replace with your endpoint name
    token=os.getenv("HF_TOKEN"), # Replace with your token
).launch()
```

The `load_chat` method won't cover every production need, but it's a great way to get started and test your application.


## Build your own custom chat application

If you want more control over your chat application, you can build your own custom chat interface with Gradio. This gives you more flexibility to customize the behavior, add features, and handle errors.

Choose your preferred method for connecting to Inference Endpoints:

<hfoptions id="chat-implementation">
<hfoption id="hf-client">

**Using Hugging Face InferenceClient**

First, install the required dependencies:

```bash
pip install gradio huggingface-hub
```

The Hugging Face InferenceClient provides a clean interface that's compatible with the OpenAI API format:

```python
import os
import gradio as gr
from huggingface_hub import InferenceClient

# Initialize the Hugging Face InferenceClient
client = InferenceClient(
    base_url="<endpoint-url>/v1/",  # Replace with your endpoint URL
    token=os.getenv("HF_TOKEN")  # Use environment variable for security
)

def chat_with_hf_client(message, history):
    # Convert Gradio history to messages format
    messages = [{"role": msg["role"], "content": msg["content"]} for msg in history]
    
    # Add the current message
    messages.append({"role": "user", "content": message})
    
    # Create chat completion
    chat_completion = client.chat.completions.create(
        model="endpoint-name",  # Use the name of your endpoint (i.e. qwen3-1.7b-instruct-xxxx)
        messages=messages,
        max_tokens=150,
        temperature=0.7,
    )
    
    # Return the response
    return chat_completion.choices[0].message.content

# Create the Gradio interface
demo = gr.ChatInterface(
    fn=chat_with_hf_client,
    type="messages",
    title="Custom Chat with Inference Endpoints",
    examples=["What is deep learning?", "Explain neural networks", "How does AI work?"]
)

if __name__ == "__main__":
    demo.launch()
```

</hfoption>
<hfoption id="openai-client">

**Using OpenAI Client**

First, install the required dependencies:
```bash
pip install gradio openai
```

Here's a basic chat function using the OpenAI client:

```python
import os
import gradio as gr
from openai import OpenAI

# Initialize the OpenAI client with your Inference Endpoint
client = OpenAI(
    base_url="<endpoint-url>/v1/",  # Replace with your endpoint URL
    api_key=os.getenv("HF_TOKEN")  # Use environment variable for security
)

def chat_with_openai(message, history):
    # Convert Gradio history to OpenAI format
    messages = [{"role": msg["role"], "content": msg["content"]} for msg in history]
    
    # Add the current message
    messages.append({"role": "user", "content": message})
    
    # Create chat completion
    chat_completion = client.chat.completions.create(
        model="endpoint-name",  # Use the name of your endpoint (i.e. qwen3-1.7b-xxxx)
        messages=messages,
        max_tokens=150,
        temperature=0.7,
    )
    
    # Return the response
    return chat_completion.choices[0].message.content

# Create the Gradio interface
demo = gr.ChatInterface(
    fn=chat_with_openai,
    type="messages",
    title="Custom Chat with Inference Endpoints",
    examples=["What is deep learning?", "Explain neural networks", "How does AI work?"]
)

if __name__ == "__main__":
    demo.launch()
```

</hfoption>
<hfoption id="requests">

**Using Requests Library**

First, install the required dependencies:
```bash
pip install gradio requests
```

Here's a basic chat function using the requests library with the Messages API:

```python
import os
import gradio as gr
import requests

# Configure your Inference Endpoint
API_URL = "<endpoint-url>/v1/chat/completions"  # Use the chat completions endpoint

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.getenv('HF_TOKEN')}"  # Use environment variable for security
}

def chat_with_requests(message, history):
    # Convert Gradio history to messages format
    messages = [{"role": msg["role"], "content": msg["content"]} for msg in history]
    
    # Add the current message
    messages.append({"role": "user", "content": message})
    
    # Prepare the payload using the Messages API format
    payload = {
        "model": "endpoint-name",  # Use the name of your endpoint (i.e. qwen3-1.7b-xxxx)
        "messages": messages,
        "max_tokens": 150,
        "temperature": 0.7,
        "stream": False
    }
    
    # Make the request
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()
    
    # Parse the response
    result = response.json()
    return result["choices"][0]["message"]["content"]

# Create the Gradio interface
demo = gr.ChatInterface(
    fn=chat_with_requests,
    type="messages",
    title="Custom Chat with Inference Endpoints",
    examples=["What is deep learning?", "Explain neural networks", "How does AI work?"]
)

if __name__ == "__main__":
    demo.launch()
```

</hfoption>
</hfoptions>



## Adding Streaming Support

For a better user experience, you can implement streaming responses. This will require us to handle the messages and `yield` them to the client.

Here's how to add streaming to each client:

<hfoptions id="streaming-implementation">
<hfoption id="hf-client">

### Hugging Face InferenceClient Streaming

The Hugging Face InferenceClient supports streaming similar to the OpenAI client:

```python
import os
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient(
    base_url="<endpoint-url>/v1/",
    token=os.getenv("HF_TOKEN")
)

def chat_with_hf_streaming(message, history):
    # Convert history to messages format
    messages = [{"role": msg["role"], "content": msg["content"]} for msg in history]
    messages.append({"role": "user", "content": message})
    
    # Create streaming chat completion
    chat_completion = client.chat.completions.create(
        model="endpoint-name",
        messages=messages,
        max_tokens=150,
        temperature=0.7,
        stream=True  # Enable streaming
    )
    
    response = ""
    for chunk in chat_completion:
        if chunk.choices[0].delta.content:
            response += chunk.choices[0].delta.content
            yield response  # Yield partial response for streaming

# Create streaming interface
demo = gr.ChatInterface(
    fn=chat_with_hf_streaming,
    type="messages",
    title="Streaming Chat with Inference Endpoints"
)

demo.launch()
```

</hfoption>
<hfoption id="openai-client">

### OpenAI Client Streaming

To use streaming with the OpenAI client, we need to set `stream=True` and yield the response as it builds:

```python
import os
import gradio as gr
from openai import OpenAI

client = OpenAI(base_url="<endpoint-url>/v1/", api_key=os.getenv("HF_TOKEN"))


def chat_with_streaming(message, history):
    # Convert history to OpenAI format
    messages = [{"role": msg["role"], "content": msg["content"]} for msg in history]
    messages.append({"role": "user", "content": message})

    # Create streaming chat completion
    chat_completion = client.chat.completions.create(
        model="endpoint-name",  # Use the name of your endpoint (i.e. qwen3-1.7b-xxxx)
        messages=messages,
        max_tokens=150,
        temperature=0.7,
        stream=True,  # Enable streaming
    )

    response = ""
    for chunk in chat_completion:
        if chunk.choices[0].delta.content:
            response += chunk.choices[0].delta.content
            yield response  # Yield partial response for streaming


# Create streaming interface
demo = gr.ChatInterface(
    fn=chat_with_streaming,
    type="messages",
    title="Streaming Chat with Inference Endpoints",
)

demo.launch()

```

</hfoption>
<hfoption id="requests">

### Requests Library Streaming

For requests, you can use the streaming approach with the Messages API by setting `stream=True`:

```python
import os
import gradio as gr
import requests
import json

API_URL = "https://<id>.<region>.<cloud>.endpoints.huggingface.cloud/v1/chat/completions"

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.getenv('HF_TOKEN')}",
}


def chat_with_requests_streaming(message, history):
    # Convert Gradio history to messages format
    messages = [{"role": msg["role"], "content": msg["content"]} for msg in history]
    messages.append({"role": "user", "content": message})

    # Prepare payload using Messages API format
    payload = {
        "model": "smollm2-1-7b-instruct-ljn",
        "messages": messages,
        "max_tokens": 150,
        "temperature": 0.7,
        "stream": True,  # Enable streaming
    }

    response = requests.post(API_URL, headers=headers, json=payload, stream=True)

    content = ""

    for line in response.iter_lines():
        line = line.decode("utf-8")

        if line.startswith("data: ") and not line.endswith("[DONE]"):
            data = json.loads(line[len("data: ") :])
            chunk = data["choices"][0]["delta"].get("content", "")
            content += chunk
            yield content


# Create streaming interface
demo = gr.ChatInterface(
    fn=chat_with_requests_streaming,
    type="messages",
    title="Streaming Chat with Inference Endpoints",
)

demo.launch()

```

</hfoption>

</hfoptions>

## Deploy your chat application

Our app will run on port 7860 and look like this:

![Gradio app](https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/tutorials/chatbot/app.png)

To deploy, we'll need to create a new Space and upload our files.

1. **Create a new Space**: Go to [huggingface.co/new-space](https://huggingface.co/new-space)
2. **Choose Gradio SDK** and make it public
3. **Upload your files**: Upload `app.py`
4. **Add your token**: In Space settings, add `HF_TOKEN` as a secret (get it from [your settings](https://huggingface.co/settings/tokens))
5. **Launch**: Your app will be live at `https://huggingface.co/spaces/your-username/your-space-name`

> **Note**: While we used CLI authentication locally, Spaces requires the token as a secret for the deployment environment.

## Next steps

That's it! You now have a chat application running on Hugging Face Spaces powered by Inference Endpoints.

Why not level up and try out the [next guide](./transcription) to build your own transcription application?


<EditOnGithub source="https://github.com/huggingface/hf-endpoints-documentation/blob/main/docs/source/tutorials/chat_bot.md" />
