# Sagemaker

## Docs

- [Hugging Face on AWS](https://huggingface.co/docs/sagemaker/index.md)
- [Inference Toolkit API](https://huggingface.co/docs/sagemaker/reference/inference-toolkit.md)
- [Resources](https://huggingface.co/docs/sagemaker/reference/resources.md)
- [How to deploy Embedding Models to Amazon SageMaker using new Hugging Face Embedding DLC](https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-deploy-embedding-models.md)
- [Evaluate LLMs with Hugging Face Lighteval on Amazon SageMaker](https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-evaluate-llm-lighteval.md)
- [Fine-tune and deploy embedding models with Amazon SageMaker](https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-fine-tune-embedding-models.md)
- [Deploy Llama 3.3 70B on AWS Inferentia2](https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-deploy-llama-3-3-70b-inferentia2.md)
- [How to](https://huggingface.co/docs/sagemaker/tutorials/index.md)
- [Train and deploy a Hugging Face model on Amazon SageMaker with the SDK](https://huggingface.co/docs/sagemaker/tutorials/sagemaker-sdk/sagemaker-sdk-quickstart.md)
- [Deploy models to Amazon SageMaker](https://huggingface.co/docs/sagemaker/tutorials/sagemaker-sdk/deploy-sagemaker-sdk.md)
- [Run training on Amazon SageMaker](https://huggingface.co/docs/sagemaker/tutorials/sagemaker-sdk/training-sagemaker-sdk.md)
- [Quickstart - Deploy Hugging Face Models with SageMaker Jumpstart](https://huggingface.co/docs/sagemaker/tutorials/jumpstart/jumpstart-quickstart.md)
- [EC2, ECS and EKS Quickstart](https://huggingface.co/docs/sagemaker/tutorials/compute-services/compute-services-quickstart.md)
- [Quickstart — Using Hugging Face Models with Amazon Bedrock Marketplace](https://huggingface.co/docs/sagemaker/tutorials/bedrock/bedrock-quickstart.md)
- [Introduction](https://huggingface.co/docs/sagemaker/dlcs/introduction.md)
- [Available DLCs on AWS](https://huggingface.co/docs/sagemaker/dlcs/available.md)

### Hugging Face on AWS
https://huggingface.co/docs/sagemaker/index.md

# Hugging Face on AWS

![cover](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sagemaker/cover.png)

Hugging Face partners with Amazon Web Services (AWS) to democratize artificial intelligence (AI), enabling developers to seamlessly build, train, and deploy state-of-the-art machine learning models using AWS's robust cloud infrastructure.

This collaboration aims to offer developers access to an ever-growing catalog of pre-trained models and datasets from the Hugging Face Hub, usable with Hugging Face open-source libraries across a broad spectrum of AWS services and hardware platforms.

We build new experiences for developers to seamlessly train and deploy Hugging Face models whether they use AWS AI platforms such as Amazon SageMaker AI and Amazon Bedrock, or AWS compute services such as Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and virtual servers on Amazon Elastic Compute Cloud (EC2).

We develop new tools to simplify the adoption of custom AI accelerators like AWS Inferentia and AWS Trainium, designed to enhance the performance and cost-efficiency of machine learning workloads.

By combining Hugging Face's open-source models and libraries with AWS's scalable and secure cloud services, developers can more easily and affordably incorporate advanced AI capabilities into their applications.

## Deploy models on AWS

Deploying Hugging Face models on AWS is streamlined through various services, each suited for different deployment scenarios. Here's how you can deploy your models using AWS and Hugging Face offerings.

You can deploy any Hugging Face Model on AWS with:
- [Amazon Sagemaker SDK](#deploy-with-sagemaker-sdk)
- [Amazon Sagemaker Jumpstart](#deploy-with-sagemaker-jumpstart)
- [AWS Bedrock](#deploy-with-aws-bedrock)
- [Hugging Face Inference Endpoints](#deploy-with-hugging-face-inference-endpoints)
- [ECS, EKS, and EC2](#deploy-with-ecs-eks-and-ec2)

### Deploy with Sagemaker SDK

Amazon SageMaker is a fully managed AWS service for building, training, and deploying machine learning models at scale, and the SageMaker SDK lets you interact with it programmatically. The SDK provides an integration specifically designed for Hugging Face models that streamlines the deployment of managed endpoints: you can quickly deploy pre-trained Hugging Face models or your own fine-tuned models directly to SageMaker-managed endpoints, significantly reducing setup complexity and time to production.
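As a rough illustration, a minimal deployment with the SageMaker SDK looks like the sketch below. The model ID, instance type, and DLC versions are example values; pick a combination that matches an available Hugging Face inference DLC.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# configuration of the Hub model to serve (example values)
hub = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
    "HF_TASK": "text-classification",
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.37",  # match an available Hugging Face inference DLC
    pytorch_version="2.1",
    py_version="py310",
)

# create a managed SageMaker endpoint and run a test request
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "I love using Hugging Face on SageMaker!"}))
```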

[Sagemaker SDK Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/sagemaker-sdk/sagemaker-sdk-quickstart)

### Deploy with Sagemaker Jumpstart

Amazon SageMaker JumpStart is a curated model catalog from which you can deploy a model with just a few clicks. We maintain a Hugging Face section in the catalog that lets you self-host the most popular open models in your VPC with performant default configurations, powered under the hood by [Hugging Face Deep Learning Containers (DLCs)](https://huggingface.co/docs/sagemaker/main/en/dlcs/introduction).
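JumpStart is primarily driven from the SageMaker Studio UI, but the same catalog entries can also be deployed programmatically. Here is a hedged sketch; the model ID below is an example JumpStart identifier, so look up the exact ID of the model you want in the catalog.

```python
from sagemaker.jumpstart.model import JumpStartModel

# example JumpStart model ID; browse the Hugging Face section of the catalog for the one you need
model = JumpStartModel(model_id="huggingface-llm-mistral-7b-instruct")
predictor = model.deploy(accept_eula=True)  # some models require accepting a EULA

print(predictor.predict({"inputs": "What is Amazon SageMaker JumpStart?"}))
```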

[Sagemaker Jumpstart Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/jumpstart/jumpstart-quickstart)

### Deploy with AWS Bedrock

Amazon Bedrock enables developers to easily build and scale generative AI applications through a single API. With Bedrock Marketplace, you can now combine the ease of use of SageMaker JumpStart with the fully managed infrastructure of Amazon Bedrock, including compatibility with high-level APIs such as Agents, Knowledge Bases, Guardrails and Model Evaluations.

[AWS Bedrock Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/bedrock/bedrock-quickstart)

### Deploy with Hugging Face Inference Endpoints

Hugging Face Inference Endpoints allow you to deploy models hosted directly by Hugging Face, fully managed and optimized for performance. It's ideal for quick deployment and scalable inference workloads.
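For instance, with the `huggingface_hub` client you can create an endpoint programmatically. The sketch below uses illustrative values for the instance type, size, and region; check the Inference Endpoints documentation for the options currently available.

```python
from huggingface_hub import create_inference_endpoint

# all infrastructure values below are illustrative
endpoint = create_inference_endpoint(
    "sst2-demo-endpoint",
    repository="distilbert-base-uncased-finetuned-sst-2-english",
    framework="pytorch",
    task="text-classification",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x2",
    instance_type="intel-icl",
    type="protected",
)

endpoint.wait()  # block until the endpoint is up
print(endpoint.client.text_classification("I love this movie!"))
```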

[Hugging Face Inference Endpoints Quickstart](https://huggingface.co/docs/inference-endpoints/guides/create_endpoint).

### Deploy with ECS, EKS, and EC2

Hugging Face provides Inference Deep Learning Containers (DLCs) to AWS users: optimized environments preconfigured with Hugging Face libraries for inference, natively integrated in the SageMaker SDK and JumpStart. The same DLCs can also be used with other AWS services like ECS, EKS, and EC2.

AWS Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and Elastic Compute Cloud (EC2) allow you to leverage DLCs directly.

[EC2, ECS and EKS Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/compute-services/compute-services-quickstart)

## Train models on AWS

Training Hugging Face models on AWS is streamlined through various services. Here's how you can fine-tune your models using AWS and Hugging Face offerings.

You can fine-tune any Hugging Face Model on AWS with:
- [Amazon Sagemaker SDK](#train-with-sagemaker-sdk)
- [ECS, EKS, and EC2](#train-with-ecs-eks-and-ec2)

### Train with Sagemaker SDK

Amazon SageMaker is a fully managed AWS service for building, training, and deploying machine learning models at scale, and the SageMaker SDK lets you interact with it programmatically. The SDK provides an integration specifically designed for Hugging Face models that simplifies training job management: you can quickly launch managed training jobs to create your own fine-tuned models, significantly reducing setup complexity and time to production.
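A minimal training-job sketch with the SageMaker SDK might look like the following; the script name, hyperparameters, and S3 paths are placeholders for your own training setup.

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# the entry point, hyperparameters and S3 URIs below are placeholders
huggingface_estimator = HuggingFace(
    entry_point="train.py",            # your own training script
    source_dir="./scripts",            # directory with the script and a requirements.txt
    instance_type="ml.g5.xlarge",      # example GPU instance
    instance_count=1,
    role=role,
    transformers_version="4.36",       # match an available Hugging Face training DLC
    pytorch_version="2.1",
    py_version="py310",
    hyperparameters={"epochs": 3, "model_name_or_path": "distilbert-base-uncased"},
)

# launch the managed training job with datasets already uploaded to S3
huggingface_estimator.fit({"train": "s3://my-bucket/train", "test": "s3://my-bucket/test"})
```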

[Sagemaker SDK Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/sagemaker-sdk/sagemaker-sdk-quickstart)

### Train with ECS, EKS, and EC2

Hugging Face provides Training Deep Learning Containers (DLCs) to AWS users: optimized environments preconfigured with Hugging Face libraries for training, natively integrated in the SageMaker SDK. The same DLCs can also be used with other AWS services like ECS, EKS, and EC2.

AWS Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), and Elastic Compute Cloud (EC2) allow you to leverage DLCs directly.

[EC2, ECS and EKS Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/compute-services/compute-services-quickstart)

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/index.md" />

### Inference Toolkit API
https://huggingface.co/docs/sagemaker/reference/inference-toolkit.md

# Inference Toolkit API

## Supported tasks

The [Sagemaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit/tree/main) accepts inputs in the `inputs` key, and supports additional [`pipelines`](https://huggingface.co/docs/transformers/main_classes/pipelines) parameters in the `parameters` key. You can provide any of the supported `kwargs` from `pipelines` as `parameters`.

Tasks supported by the Inference Toolkit API include:

- **`text-classification`**
- **`sentiment-analysis`**
- **`token-classification`**
- **`feature-extraction`**
- **`fill-mask`**
- **`summarization`**
- **`translation_xx_to_yy`**
- **`text2text-generation`**
- **`text-generation`**
- **`audio-classification`**
- **`automatic-speech-recognition`**
- **`conversational`**
- **`image-classification`**
- **`image-segmentation`**
- **`object-detection`**
- **`table-question-answering`**
- **`zero-shot-classification`**
- **`zero-shot-image-classification`**


See the following request examples for some of the tasks:

**`text-classification`**

```json
{
  "inputs": "This sound track was beautiful! It paints the senery in your mind so well I would recommend it
  even to people who hate vid. game music!"
}
```

**`sentiment-analysis`**

```json
{
  "inputs": "Don't waste your time.  We had two different people come to our house to give us estimates for
a deck (one of them the OWNER).  Both times, we never heard from them.  Not a call, not the estimate, nothing."
}
```

**`token-classification`**

```json
{
  "inputs": "My name is Sylvain and I work at Hugging Face in Brooklyn."
}
```

**`question-answering`**

```json
{
  "inputs": {
    "question": "What is used for inference?",
    "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
  }
}
```

**`zero-shot-classification`**

```json
{
  "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!",
  "parameters": {
    "candidate_labels": ["refund", "legal", "faq"]
  }
}
```

**`table-question-answering`**

```json
{
  "inputs": {
    "query": "How many stars does the transformers repository have?",
    "table": {
      "Repository": ["Transformers", "Datasets", "Tokenizers"],
      "Stars": ["36542", "4512", "3934"],
      "Contributors": ["651", "77", "34"],
      "Programming language": ["Python", "Python", "Rust, Python and NodeJS"]
    }
  }
}
```

**`parameterized-request`**

```json
{
  "inputs": "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team.",
  "parameters": {
    "repetition_penalty": 4.0,
    "length_penalty": 1.5
  }
}
```
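Once an endpoint is running, request bodies like the ones above can be sent with the SageMaker runtime client. The sketch below is illustrative: the endpoint name is a placeholder and `top_k` is simply one of the pipeline parameters you could pass.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "I love using the Inference Toolkit!",
    "parameters": {"top_k": 2},  # any supported pipeline kwarg can go here
}

response = runtime.invoke_endpoint(
    EndpointName="my-huggingface-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```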

## Environment variables

The Inference Toolkit implements various additional environment variables to simplify deployment. A complete list of Hugging Face specific environment variables is shown below:

**`HF_TASK`**

`HF_TASK` defines the task for the 🤗 Transformers pipeline used for inference. See [here](https://huggingface.co/docs/transformers/main_classes/pipelines) for a complete list of tasks.

```bash
HF_TASK="question-answering"
```

**`HF_MODEL_ID`**

`HF_MODEL_ID` defines the model ID which is automatically loaded from [hf.co/models](https://huggingface.co/models) when creating a SageMaker endpoint. All of the 🤗 Hub's 10,000+ models are available through this environment variable.

```bash
HF_MODEL_ID="distilbert-base-uncased-finetuned-sst-2-english"
```

**`HF_MODEL_REVISION`**

`HF_MODEL_REVISION` is an extension to `HF_MODEL_ID` and allows you to define or pin a model revision to make sure you always load the same model on your SageMaker endpoint.

```bash
HF_MODEL_REVISION="03b4d196c19d0a73c7e0322684e97db1ec397613"
```

**`HF_API_TOKEN`**

`HF_API_TOKEN` defines your Hugging Face authorization token. The `HF_API_TOKEN` is used as an HTTP bearer authorization for remote files like private models. You can find your token under [Settings](https://huggingface.co/settings/tokens) of your Hugging Face account.

```bash
HF_API_TOKEN="api_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```
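Taken together, these variables are typically passed through the `env` argument of a `HuggingFaceModel`. The sketch below is a hedged example: the versions and instance type are placeholders, and the commented lines show where the optional variables would go.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

hub = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
    "HF_TASK": "text-classification",
    # "HF_MODEL_REVISION": "<commit-sha>",  # optionally pin an exact revision
    # "HF_API_TOKEN": "<your-token>",       # only needed for private models
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.37",  # match an available Hugging Face inference DLC
    pytorch_version="2.1",
    py_version="py310",
)
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```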


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/reference/inference-toolkit.md" />

### Resources
https://huggingface.co/docs/sagemaker/reference/resources.md

# Resources

Take a look at our published blog posts, videos, tutorials and examples along with external relevant documentation for additional help and more context about Hugging Face on AWS.

Feel free to reach out on our [community forum](https://discuss.huggingface.co/c/sagemaker/17) if you have any questions.

## Tutorials

- [All tutorials](https://huggingface.co/docs/sagemaker/main/en/tutorials/introduction)

## Examples

- [All examples](https://huggingface.co/docs/sagemaker/main/en/examples/introduction)

## Hugging Face Blogs

- [Deploy Hugging Face models easily with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker)
- [Hugging Face and AWS partner to make AI more accessible](https://huggingface.co/blog/aws-partnership)
- [Introducing the Hugging Face LLM Inference Container for Amazon SageMaker](https://huggingface.co/blog/sagemaker-huggingface-llm)
- [Hugging Face Text Generation Inference available for AWS Inferentia2](https://huggingface.co/blog/text-generation-inference-on-inferentia2)
- [Subscribe to Enterprise Hub with your AWS Account](https://huggingface.co/blog/enterprise-hub-aws-marketplace)
- [Deploy models on AWS Inferentia2 from Hugging Face](https://huggingface.co/blog/inferentia-inference-endpoints)
- [Introducing the Hugging Face Embedding Container for Amazon SageMaker](https://huggingface.co/blog/sagemaker-huggingface-embedding)
- [Use Hugging Face models with Amazon Bedrock](https://huggingface.co/blog/bedrock-marketplace)

## AWS Blogs

- [AWS: Embracing natural language processing with Hugging Face](https://aws.amazon.com/de/blogs/opensource/embracing-natural-language-processing-with-hugging-face/)
- [AWS and Hugging Face collaborate to simplify and accelerate adoption of natural language processing models](https://aws.amazon.com/blogs/machine-learning/aws-and-hugging-face-collaborate-to-simplify-and-accelerate-adoption-of-natural-language-processing-models/)
- [AWS and Hugging Face collaborate to make generative AI more accessible and cost efficient](https://aws.amazon.com/blogs/machine-learning/aws-and-hugging-face-collaborate-to-make-generative-ai-more-accessible-and-cost-efficient/)
- [Use Amazon Bedrock tooling with Amazon SageMaker JumpStart models](https://aws.amazon.com/blogs/machine-learning/use-amazon-bedrock-tooling-with-amazon-sagemaker-jumpstart-models/)
- [Deploy RAG applications on Amazon SageMaker JumpStart using FAISS](https://aws.amazon.com/blogs/machine-learning/deploy-rag-applications-on-amazon-sagemaker-jumpstart-using-faiss/)
- [Fine-tune and host SDXL models cost-effectively with AWS Inferentia2](https://aws.amazon.com/blogs/machine-learning/fine-tune-and-host-sdxl-models-cost-effectively-with-aws-inferentia2/)
- [Achieve ~2x speed-up in LLM inference with Medusa-1 on Amazon SageMaker AI](https://aws.amazon.com/blogs/machine-learning/achieve-2x-speed-up-in-llm-inference-with-medusa-1-on-amazon-sagemaker-ai/)
- [Optimize hosting DeepSeek-R1 distilled models with Hugging Face TGI on Amazon SageMaker AI](https://aws.amazon.com/blogs/machine-learning/optimize-hosting-deepseek-r1-distilled-models-with-hugging-face-tgi-on-amazon-sagemaker-ai/)

## Videos

- [Walkthrough: End-to-End Text Classification](https://youtu.be/ok3hetb42gU)
- [Working with Hugging Face models on Amazon SageMaker](https://youtu.be/leyrCgLAGjMn)
- [Deploy a Hugging Face Transformers Model from S3 to Amazon SageMaker](https://youtu.be/pfBGgSGnYLs)
- [Deploy a Hugging Face Transformers Model from the Model Hub to Amazon SageMaker](https://youtu.be/l9QZuazbzWM)
- [Training with Hugging Face on Amazon SageMaker](https://www.youtube.com/watch?v=BqQ14SZ5tos)
- [Hosting with Hugging Face on Amazon SageMaker](https://www.youtube.com/watch?v=oVIvXfeunv8)
- [Introduction to Hugging Face on Amazon SageMaker](https://www.youtube.com/watch?v=80ix-IyNnQI)

## External Documentation

- [Hugging Face on AWS](https://aws.amazon.com/ai/hugging-face/)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
- [LLM Hosting Container](https://github.com/awslabs/llm-hosting-container)


## Workshops

- [Enterprise-Scale NLP with Hugging Face & Amazon SageMaker](https://github.com/philschmid/huggingface-sagemaker-workshop-series/tree/main)


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/reference/resources.md" />

### How to deploy Embedding Models to Amazon SageMaker using new Hugging Face Embedding DLC
https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-deploy-embedding-models.md

# How to deploy Embedding Models to Amazon SageMaker using new Hugging Face Embedding DLC

This is an example of how to deploy open Embedding Models, like [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l), [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) or [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), to Amazon SageMaker for inference using the new Hugging Face Embedding Inference Container. We will deploy [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m), one of the best open Embedding Models for retrieval and ranking on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).

The example covers:
1. [Setup development environment](#1-setup-development-environment)
2. [Retrieve the new Hugging Face Embedding Container](#2-retrieve-the-new-hugging-face-embedding-container)
3. [Deploy Snowflake Arctic to Amazon SageMaker](#3-deploy-snowflake-arctic-to-amazon-sagemaker)
4. [Run and evaluate Inference performance](#4-run-and-evaluate-inference-performance)
5. [Delete model and endpoint](#5-delete-model-and-endpoint)

## What is the Hugging Face Embedding DLC?

The Hugging Face Embedding DLC is a new purpose-built Inference Container to easily deploy Embedding Models in a secure and managed environment. The DLC is powered by [Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference), a blazing-fast and memory-efficient solution for deploying and serving Embedding Models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5. TEI implements many features such as:

* No model graph compilation step
* Small docker images and fast boot times
* Token based dynamic batching
* Optimized transformers code for inference using Flash Attention, Candle and cuBLASLt
* Safetensors weight loading
* Production ready (distributed tracing with Open Telemetry, Prometheus metrics)

TEI supports the following model architectures:
* BERT/CamemBERT, e.g. [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) or [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m)
* RoBERTa, [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) 
* XLM-RoBERTa, e.g. [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1)
* NomicBert, e.g. [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
* JinaBert, e.g. [jinaai/jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en)

Let's get started!


## 1. Setup development environment

We are going to use the `sagemaker` Python SDK to deploy Snowflake Arctic to Amazon SageMaker. Make sure you have an AWS account configured and the `sagemaker` Python SDK installed.

```python
!pip install "sagemaker>=2.221.1" --upgrade --quiet
```

If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).


```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

## 2. Retrieve the new Hugging Face Embedding Container

Compared to deploying regular Hugging Face models, we first need to retrieve the container URI and provide it to our `HuggingFaceModel` model class via the `image_uri` parameter. To retrieve the new Hugging Face Embedding Container in Amazon SageMaker, we can use the `get_huggingface_llm_image_uri` method provided by the `sagemaker` SDK. This method allows us to retrieve the URI for the desired Hugging Face Embedding Container. Note that TEI ships two different images, one for CPU and one for GPU, so we create a helper function to retrieve the correct image URI based on the instance type.

```python
from sagemaker.huggingface import get_huggingface_llm_image_uri

# retrieve the image uri based on instance type
def get_image_uri(instance_type):
  key = "huggingface-tei" if instance_type.startswith("ml.g") or instance_type.startswith("ml.p") else "huggingface-tei-cpu"
  return get_huggingface_llm_image_uri(key, version="1.4.0")
```

## 3. Deploy Snowflake Arctic to Amazon SageMaker

To deploy [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) to Amazon SageMaker, we create a `HuggingFaceModel` model class and define our endpoint configuration, including the `HF_MODEL_ID`, `instance_type`, etc. We will use an `ml.c6i.2xlarge` instance type, which has 4 Intel Ice-Lake vCPUs, 8GB of memory and costs around $0.204 per hour.

```python
import json
from sagemaker.huggingface import HuggingFaceModel

# sagemaker config
instance_type = "ml.g5.xlarge"

# Define Model and Endpoint configuration parameter
config = {
  'HF_MODEL_ID': "Snowflake/snowflake-arctic-embed-m", # model_id from hf.co/models
}

# create HuggingFaceModel with the image uri
emb_model = HuggingFaceModel(
  role=role,
  image_uri=get_image_uri(instance_type),
  env=config
)
```

After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the `deploy` method. We will deploy the model with the `ml.c6i.2xlarge` instance type.

```python
# Deploy model to an endpoint
# https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.model.Model.deploy
emb = emb_model.deploy(
  initial_instance_count=1,
  instance_type=instance_type,
)
```

SageMaker will now create our endpoint and deploy the model to it. This can take ~5 minutes.

## 4. Run and evaluate Inference performance

After our endpoint is deployed we can run inference on it. We will use the `predict` method from the `predictor` to run inference on our endpoint.


```python
data = {
  "inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
 
res = emb.predict(data=data)
 
 
# print some results
print(f"length of embeddings: {len(res[0])}")
print(f"first 10 elements of embeddings: {res[0][:10]}")
```

Awesome, we can now generate embeddings with our model. Let's test its performance.

We will send 3,900 requests to our endpoint using threading with 10 concurrent threads. We will measure the average latency and throughput of our endpoint. We are going to send an input of 256 tokens per request, for a total of ~1 million tokens. We decided to use 256 tokens as the input length to find a balance between shorter and longer inputs.

Note: When running the load test, the requests are sent from Europe while the endpoint is deployed in us-east-1, which adds network overhead.

```python
import threading
import time
number_of_threads = 10
number_of_requests = int(3900 // number_of_threads)
print(f"number of threads: {number_of_threads}")
print(f"number of requests per thread: {number_of_requests}")
 
def send_requests():
    for _ in range(number_of_requests):
        # input counted at https://huggingface.co/spaces/Xenova/the-tokenizer-playground for 100 tokens
        emb.predict(data={"inputs": "Hugging Face is a company and a popular platform in the field of natural language processing (NLP) and machine learning. They are known for their contributions to the development of state-of-the-art models for various NLP tasks and for providing a platform that facilitates the sharing and usage of pre-trained models. One of the key offerings from Hugging Face is the Transformers library, which is an open-source library for working with a variety of pre-trained transformer models, including those for text generation, translation, summarization, question answering, and more. The library is widely used in the research and development of NLP applications and is supported by a large and active community. Hugging Face also provides a model hub where users can discover, share, and download pre-trained models. Additionally, they offer tools and frameworks to make it easier for developers to integrate and use these models in their own projects. The company has played a significant role in advancing the field of NLP and making cutting-edge models more accessible to the broader community. Hugging Face also provides a model hub where users can discover, share, and download pre-trained models. Additionally, they offer tools and frameworks to make it easier for developers and ma"})
 
# Create multiple threads
threads = [threading.Thread(target=send_requests) for _ in range(number_of_threads)]
# start all threads
start = time.time()
[t.start() for t in threads]
# wait for all threads to finish
[t.join() for t in threads]
print(f"total time: {round(time.time() - start)} seconds")
```

Sending 3,900 requests, or embedding 1 million tokens, took around 841 seconds. This means we can run around ~5 requests per second, keeping in mind that this includes the network latency from Europe to us-east-1. When we inspect the endpoint latency through CloudWatch, we can see that the latency for our embedding model is 2s at 10 concurrent requests. This is very impressive for a small CPU instance, which costs ~$150 per month. You can deploy the model to a GPU instance to get faster inference times.

_Note: We ran the same test on a `ml.g5.xlarge` with 1x NVIDIA A10G GPU. Embedding 1 million tokens took around 30 seconds. This means we can run around ~130 requests per second. The latency for the endpoint is 4ms at 10 concurrent requests. The `ml.g5.xlarge` costs around $1.408 per hour on Amazon SageMaker._

GPU instances are much faster than CPU instances, but they are also more expensive. If you want to bulk-process embeddings, you can use a GPU instance. If you want to run a small endpoint with low costs, you can use a CPU instance. We plan to work on a dedicated benchmark for the Hugging Face Embedding DLC in the future.

```python
print(f"https://console.aws.amazon.com/cloudwatch/home?region={sess.boto_region_name}#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ModelLatency~'EndpointName~'{emb.endpoint_name}~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~region~'{sess.boto_region_name}~start~'-PT5M~end~'P0D~stat~'Average~period~30);query=~'*7bAWS*2fSageMaker*2cEndpointName*2cVariantName*7d*20{emb.endpoint_name}")
```

![cw](https://raw.githubusercontent.com/huggingface/hub-docs/refs/heads/main/docs/sagemaker/notebooks/sagemaker-sdk/deploy-embedding-models/assets/cw.png)

## 5. Delete model and endpoint

To clean up, we can delete the model and endpoint.

```python
emb.delete_model()
emb.delete_endpoint()
```

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/hub-docs/tree/main/notebooks/sagemaker-sdk/deploy-embedding-models/sagemaker-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/examples/sagemaker-sdk-deploy-embedding-models.mdx" />

### Evaluate LLMs with Hugging Face Lighteval on Amazon SageMaker
https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-evaluate-llm-lighteval.md

# Evaluate LLMs with Hugging Face Lighteval on Amazon SageMaker

In this SageMaker example, we are going to learn how to evaluate LLMs using Hugging Face [lighteval](https://github.com/huggingface/lighteval/tree/main). LightEval is a lightweight LLM evaluation suite that powers the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).


Evaluating LLMs is crucial for understanding their capabilities and limitations, yet it poses significant challenges due to their complex and opaque nature. LightEval facilitates this evaluation process by enabling LLMs to be assessed on academic benchmarks like MMLU or IFEval, providing a structured approach to gauge their performance across diverse tasks.


In detail, you will learn how to:
1. Setup Development Environment
2. Prepare the evaluation configuration
3. Evaluate Zephyr 7B on TruthfulQA on Amazon SageMaker

## 1. Setup Development Environment

```python
!pip install sagemaker --upgrade --quiet
```

If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).



```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

## 2. Prepare the evaluation configuration

[LightEval](https://github.com/huggingface/lighteval/tree/main) includes scripts to evaluate LLMs on common benchmarks like MMLU, TruthfulQA, IFEval, and more. It is used to evaluate models on the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). LightEval is built on top of the great [Eleuther AI Harness](https://github.com/EleutherAI/lm-evaluation-harness) with some additional features and improvements.

You can find all available benchmarks [here](https://github.com/huggingface/lighteval/blob/main/examples/tasks/all_tasks.txt). 

We are going to use Amazon SageMaker Managed Training to evaluate the model. Therefore, we will leverage the [run_evals_accelerate.py](https://github.com/huggingface/lighteval/blob/main/run_evals_accelerate.py) script available in lighteval. The Hugging Face DLC does not have lighteval installed, which means we need to provide a `requirements.txt` file to install the required dependencies.

First, let's load the `run_evals_accelerate.py` script and create a `requirements.txt` file with the required dependencies.

```python
import os 
import requests as r

lighteval_version = "0.2.0"

# create scripts directory if not exists
os.makedirs("scripts", exist_ok=True)

# load custom scripts from git
raw_github_url = f"https://raw.githubusercontent.com/huggingface/lighteval/v{lighteval_version}/run_evals_accelerate.py"
res = r.get(raw_github_url)
with open("scripts/run_evals_accelerate.py", "w") as f:
    f.write(res.text)
    
# write requirements.txt    
with open("scripts/requirements.txt", "w") as f:
    f.write(f"lighteval=={lighteval_version}")
```

In lighteval, the evaluation is done by running the `run_evals_accelerate.py` script. The script takes a `task` argument which is defined as `suite|task|num_few_shot|{0 or 1 to automatically reduce num_few_shot if prompt is too long}`. Alternatively, you can also provide a path to a txt file with the tasks you want to evaluate the model on, which we are going to do. This makes it easier for you to extend the evaluation to other benchmarks.

We are going to evaluate the model on the TruthfulQA benchmark with 0 few-shot examples. [TruthfulQA](https://paperswithcode.com/dataset/truthfulqa) is a benchmark designed to measure whether a language model generates truthful answers to questions, encompassing 817 questions across 38 categories including health, law, finance, and politics.

```python
with open("scripts/tasks.txt", "w") as f:
    f.write(f"lighteval|truthfulqa:mc|0|0")
```

To evaluate a model on all the benchmarks of the Open LLM Leaderboard, you can copy this [file](https://github.com/huggingface/lighteval/blob/v0.2.0/tasks_examples/open_llm_leaderboard_tasks.txt).

## 3. Evaluate Zephyr 7B on TruthfulQA on Amazon SageMaker

In this example we are going to evaluate [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the TruthfulQA benchmark, which is part of the Open LLM Leaderboard.

In addition to the `task` argument we need to define: 
* `model_args`: Hugging Face Model ID or path, defined as `pretrained=HuggingFaceH4/zephyr-7b-beta`
* `model_dtype`: The model data type, defined as `bfloat16`, `float16` or `float32`
* `output_dir`: The directory where the evaluation results will be saved, e.g. `/opt/ml/model` 

Lighteval can also evaluate PEFT models or use `chat_templates`; you can find more about it [here](https://github.com/huggingface/lighteval/blob/v0.2.0/run_evals_accelerate.py).

```python
from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters = {
  'model_args': "pretrained=HuggingFaceH4/zephyr-7b-beta", # Hugging Face Model ID
  'task': 'tasks.txt',  # 'lighteval|truthfulqa:mc|0|0', 
  'model_dtype': 'bfloat16', # Torch dtype to load model weights
  'output_dir': '/opt/ml/model' # Directory, which sagemaker uploads to s3 after training
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point          = 'run_evals_accelerate.py',      # train script
    source_dir           = 'scripts',         # directory which includes all the files needed for training
    instance_type        = 'ml.g5.4xlarge',   # instances type used for the training job
    instance_count       = 1,                 # the number of instances used for training
    base_job_name        = "lighteval",          # the name of the training job
    role                 = role,              # IAM role used in training job to access AWS resources, e.g. S3
    volume_size          = 300,               # the size of the EBS volume in GB
    transformers_version = '4.36',            # the transformers version used in the training job
    pytorch_version      = '2.1',            # the pytorch_version version used in the training job
    py_version           = 'py310',            # the python version used in the training job
    hyperparameters      =  hyperparameters,
    environment          = { 
                            "HUGGINGFACE_HUB_CACHE": "/tmp/.cache",
                            # "HF_TOKEN": "REPALCE_WITH_YOUR_TOKEN" # needed for private models
                            }, # set env variable to cache models in /tmp
)
```

We can now start our evaluation job with the `.fit()` method.

```python
# starting the evaluation job
huggingface_estimator.fit()
```

After the evaluation job is finished, we can download the evaluation results from the S3 bucket. Lighteval will save the results and generations in the `output_dir`. The results are saved as JSON and include detailed information about each task and the model's performance. The results are available in the `results` key.

```python
import tarfile
import json
import io
import os
from sagemaker.s3 import S3Downloader


# download results from s3
results_tar = S3Downloader.read_bytes(huggingface_estimator.model_data)
model_id = hyperparameters["model_args"].split("=")[1]
result={}

# Use tarfile to open the tar content directly from bytes
with tarfile.open(fileobj=io.BytesIO(results_tar), mode="r:gz") as tar:
    # Iterate over items in tar archive to find your json file by its path
    for member in tar.getmembers():
        # get path of results based on model id used to evaluate
        if os.path.join("details", model_id) in member.name and member.name.endswith('.json'):
            # Extract the file content
            f = tar.extractfile(member)
            if f is not None:
                content = f.read()
                result = json.loads(content)
                break
            
# print results
print(result["results"])
# {'lighteval|truthfulqa:mc|0': {'truthfulqa_mc1': 0.40636474908200737, 'truthfulqa_mc1_stderr': 0.017193835812093897, 'truthfulqa_mc2': 0.5747003398184238, 'truthfulqa_mc2_stderr': 0.015742356478301463}}
```

In our test we achieved an `mc1` score of 40.6% and an `mc2` score of 57.47%. The `mc2` score is the one used in the Open LLM Leaderboard, where Zephyr 7B also achieves 57.47% on the TruthfulQA benchmark, identical to our result.
The evaluation on TruthfulQA took `999 seconds`. The ml.g5.4xlarge instance we used costs `$2.03 per hour` for on-demand usage. As a result, the total cost for evaluating Zephyr 7B on TruthfulQA was `$0.56`.



---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/hub-docs/tree/main/notebooks/sagemaker-sdk/evaluate-llm-lighteval/sagemaker-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/examples/sagemaker-sdk-evaluate-llm-lighteval.mdx" />

### Fine-tune and deploy embedding models with Amazon SageMaker
https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-fine-tune-embedding-models.md

# Fine-tune and deploy embedding models with Amazon SageMaker

Embedding models are crucial for successful RAG applications, but they're often trained on general knowledge, which limits their effectiveness for company- or domain-specific use cases. Customizing embeddings for your domain-specific data can significantly boost the retrieval performance of your RAG application. With the new release of [Sentence Transformers 3](https://huggingface.co/blog/train-sentence-transformers) and the [Hugging Face Embedding Container](https://huggingface.co/blog/sagemaker-huggingface-embedding), it's easier than ever to fine-tune and deploy embedding models.

In this example, we'll show you how to fine-tune and deploy a custom embedding model on Amazon SageMaker using the new Hugging Face Embedding Container. We'll use the [Sentence Transformers 3](https://huggingface.co/blog/train-sentence-transformers) library to fine-tune a model on a custom dataset and deploy it on Amazon SageMaker for inference. We will fine-tune [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) for financial RAG applications using a synthetic dataset from the [2023_10 NVIDIA SEC Filing](https://stocklight.com/stocks/us/nasdaq-nvda/nvidia/annual-reports/nasdaq-nvda-2023-10K-23668751.pdf). 

1. [Setup development environment](#1-setup-development-environment)
2. [Create and prepare the dataset](#2-create-and-prepare-the-dataset)
3. [Fine-tune Embedding model on Amazon SageMaker](#3-fine-tune-embedding-model-on-amazon-sagemaker)
4. [Deploy & Test fine-tuned Embedding Model on Amazon SageMaker](#4-deploy--test-fine-tuned-embedding-model-on-amazon-sagemaker)

**What is new with Sentence Transformers 3?**

Sentence Transformers v3 introduces a new trainer that makes it easier to fine-tune and train embedding models. This update includes enhanced components like diverse datasets, updated loss functions, and a streamlined training process, improving the efficiency and flexibility of model development.


**What is the Hugging Face Embedding Container?**

The Hugging Face Embedding Container is a new purpose-built Inference Container to easily deploy Embedding Models in a secure and managed environment. The DLC is powered by [Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference), a blazing-fast and memory-efficient solution for deploying and serving Embedding Models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5, and implements the features described in the previous example (no model graph compilation step, small Docker images with fast boot times, token-based dynamic batching, optimized inference code, Safetensors weight loading, and production-ready observability).

_Note: This blog was created and validated on `ml.g5.xlarge` for training and `ml.c6i.2xlarge` for inference instance._


## 1. Setup Development Environment

Our first step is to install the Hugging Face libraries we need on the client to correctly prepare our dataset and start our training/evaluation jobs.

```python
!pip install transformers "datasets[s3]==2.18.0" "sagemaker>=2.190.0" "huggingface_hub[cli]" --upgrade --quiet
```

If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).



```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

## 2. Create and prepare the dataset

An embedding dataset typically consists of text pairs (question, answer/context) or triplets that represent relationships or similarities between sentences. The dataset format you choose or have available will also impact the loss function you can use. Common formats for embedding datasets:

- **Positive Pair**: Text Pairs of related sentences (query, context | query, answer), suitable for tasks like similarity or semantic search, example datasets: `sentence-transformers/sentence-compression`, `sentence-transformers/natural-questions`.
- **Triplets**: Text triplets consisting of (anchor, positive, negative), example datasets `sentence-transformers/quora-duplicates`, `nirantk/triplets`.
- **Pair with Similarity Score**: Sentence pairs with a similarity score indicating how related they are, example datasets: `sentence-transformers/stsb`, `PhilipMay/stsb_multi_mt`

Learn more at [Dataset Overview](https://sbert.net/docs/sentence_transformer/dataset_overview.html).

We are going to use [philschmid/finanical-rag-embedding-dataset](https://huggingface.co/datasets/philschmid/finanical-rag-embedding-dataset), which includes 7,000 positive text pairs of questions and corresponding context from the [2023_10 NVIDIA SEC Filing](https://stocklight.com/stocks/us/nasdaq-nvda/nvidia/annual-reports/nasdaq-nvda-2023-10K-23668751.pdf).

The dataset has the following format:
```json
{"question": "<question>", "context": "<relevant context to answer>"}
{"question": "<question>", "context": "<relevant context to answer>"}
{"question": "<question>", "context": "<relevant context to answer>"}
```

We are going to use the [FileSystem integration](https://huggingface.co/docs/datasets/filesystems) to upload our dataset to S3. We are using the `sess.default_bucket()`, adjust this if you want to store the dataset in a different S3 bucket. We will use the S3 path later in our training script.

```python
from datasets import load_dataset

# Load dataset from the hub
dataset = load_dataset("philschmid/finanical-rag-embedding-dataset", split="train")
input_path = f's3://{sess.default_bucket()}/datasets/rag-embedding'

# rename columns
dataset = dataset.rename_column("question", "anchor")
dataset = dataset.rename_column("context", "positive")

# Add an id column to the dataset
dataset = dataset.add_column("id", range(len(dataset)))

# split dataset into a 10% test set
dataset = dataset.train_test_split(test_size=0.1)

# save train_dataset to s3 using our SageMaker session

# save datasets to s3
dataset["train"].to_json(f"{input_path}/train/dataset.json", orient="records")
train_dataset_s3_path = f"{input_path}/train/dataset.json"
dataset["test"].to_json(f"{input_path}/test/dataset.json", orient="records")
test_dataset_s3_path = f"{input_path}/test/dataset.json"

print(f"Training data uploaded to:")
print(train_dataset_s3_path)
print(test_dataset_s3_path)
print(f"https://s3.console.aws.amazon.com/s3/buckets/{sess.default_bucket()}/?region={sess.boto_region_name}&prefix={input_path.split('/', 3)[-1]}/")
```

## 3. Fine-tune Embedding model on Amazon SageMaker

We are now ready to fine-tune our model. We will use the [SentenceTransformerTrainer](https://www.sbert.net/docs/package_reference/sentence_transformer/trainer.html) from `sentence-transformers` to fine-tune our model. The `SentenceTransformerTrainer` makes it straightforward to supervised fine-tune open embedding models, as it is a subclass of the `Trainer` from `transformers`. We prepared a script [run_mnr.py](assets/run_mnr.py) which loads the dataset from disk, prepares the model and tokenizer, and starts the training (a sketch of its core pieces follows the list below). The `SentenceTransformerTrainer` supports:
- **Integrated Components**: Combines datasets, loss functions, and evaluators into a unified training framework.
- **Flexible Data Handling**: Supports various data formats and easy integration with Hugging Face datasets.
- **Versatile Loss Functions**: Offers multiple loss functions for different training tasks.
- **Multi-Dataset Training**: Facilitates simultaneous training with multiple datasets and different loss functions.
- **Seamless Integration**: Easy saving, loading, and sharing of models within the Hugging Face ecosystem.
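
The full [run_mnr.py](assets/run_mnr.py) script is not reproduced here, but a minimal sketch of its core pieces might look like the following. The paths and Matryoshka dimensions mirror this example; the actual script also sets training arguments and an evaluator.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# load the (anchor, positive) pairs that SageMaker copied into the container
train_dataset = load_dataset(
    "json", data_files="/opt/ml/input/data/train/dataset.json", split="train"
)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# in-batch negatives loss for positive pairs ...
base_loss = MultipleNegativesRankingLoss(model)
# ... wrapped so the model also learns useful truncated (Matryoshka) embeddings
train_loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset.select_columns(["anchor", "positive"]),
    loss=train_loss,
)
trainer.train()
model.save("/opt/ml/model")  # SageMaker uploads this directory to S3
```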

In order to create a SageMaker training job we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks and manages the infrastructure used. Amazon SageMaker takes care of starting and managing all the required EC2 instances for us, provides the correct Hugging Face container, uploads the provided scripts, and downloads the data from our S3 bucket into the container at `/opt/ml/input/data`. Then, it starts the training job by running the entry point script.

Note: Make sure that you include the `requirements.txt` in the `source_dir` if you are using a custom training script. We recommend just cloning the whole repository.

Let's first define our training parameters. These are passed as CLI arguments to our training script. We are going to use the `BAAI/bge-base-en-v1.5` model, which is pre-trained on a large corpus of English text. We will use the `MultipleNegativesRankingLoss` in combination with the `MatryoshkaLoss`. This approach allows us to leverage the efficiency and flexibility of Matryoshka embeddings, enabling different embedding dimensions to be used without significant performance trade-offs. The `MultipleNegativesRankingLoss` is a great loss function if you only have positive pairs, as it adds in-batch negative samples to the loss function, giving each sample n-1 negatives.

```python
from sagemaker.huggingface import HuggingFace

# define Training Job Name 
job_name = f'bge-base-exp1'

# define hyperparameters, which are passed into the training job
training_arguments = {
  "model_id": "BAAI/bge-base-en-v1.5", # model id from the hub
  "train_dataset_path": "/opt/ml/input/data/train/", # path inside the container where the training data is stored
  "test_dataset_path": "/opt/ml/input/data/test/", # path inside the container where the test data is stored
  "num_train_epochs": 3, # number of training epochs
  "learning_rate": 2e-5, # learning rate
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point          = 'run_mnr.py',      # train script
    source_dir           = 'scripts',         # directory which includes all the files needed for training
    instance_type        = 'ml.g5.xlarge',    # instances type used for the training job
    instance_count       = 1,                 # the number of instances used for training
    max_run              = 2*24*60*60,        # maximum runtime in seconds (days * hours * minutes * seconds)
    base_job_name        = job_name,          # the name of the training job
    role                 = role,              # IAM role used in training job to access AWS resources, e.g. S3
    transformers_version = '4.36.0',          # the transformers version used in the training job
    pytorch_version      = '2.1.0',           # the pytorch_version version used in the training job
    py_version           = 'py310',           # the python version used in the training job
    hyperparameters      =  training_arguments,
    disable_output_compression = True,        # not compress output to save training time and cost
    environment  = {
        "HUGGINGFACE_HUB_CACHE": "/tmp/.cache", # set env variable to cache models in /tmp
    }, 
)
```

We can now start our training job, with the `.fit()` method passing our S3 path to the training script.

```python
# define a data input dictionary with our uploaded s3 uris
data = {
  'train': train_dataset_s3_path,
  'test': test_dataset_s3_path,
  }

# starting the train job with our uploaded datasets as input
huggingface_estimator.fit(data, wait=True)
```

In our example, training BGE Base with Flash Attention 2 (SDPA) for 3 epochs with a dataset of 6.3k train samples and 700 eval samples took 645 seconds (~10 minutes) on an `ml.g5.xlarge` ($1.2575/h), or about $0.23.

## 4. Deploy & Test fine-tuned Embedding Model on Amazon SageMaker

We are going to use the [Hugging Face Embedding Container](https://huggingface.co/blog/sagemaker-huggingface-embedding#what-is-the-hugging-face-embedding-container), a purpose-built Inference Container to easily deploy Embedding Models in a secure and managed environment. The DLC is powered by Text Embeddings Inference (TEI), a blazing-fast and memory-efficient solution for deploying and serving Embedding Models.

To retrieve the new Hugging Face Embedding Container in Amazon SageMaker, we can use the `get_huggingface_llm_image_uri` method provided by the sagemaker SDK. This method allows us to retrieve the URI for the desired Hugging Face Embedding Container. Note that TEI ships two different images, one for CPU and one for GPU, so we create a helper function to retrieve the correct image URI based on the instance type.

```python
from sagemaker.huggingface import get_huggingface_llm_image_uri

# retrieve the image uri based on instance type
def get_image_uri(instance_type):
  key = "huggingface-tei" if instance_type.startswith("ml.g") or instance_type.startswith("ml.p") else "huggingface-tei-cpu"
  return get_huggingface_llm_image_uri(key, version="1.4.0")
```

We can now create a `HuggingFaceModel` using the container uri and the S3 path to our model. We also need to set our TEI configuration.

```python
from sagemaker.huggingface import HuggingFaceModel

# sagemaker config
instance_type = "ml.c6i.2xlarge"

# create HuggingFaceModel with the image uri
emb_model = HuggingFaceModel(
  role=role,
  image_uri=get_image_uri(instance_type),
  model_data=huggingface_estimator.model_data,
  env={'HF_MODEL_ID': "/opt/ml/model"}     # Path to the model in the container
)
```

After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the deploy method. We will deploy the model with the `ml.c6i.2xlarge` instance type.

```python
# Deploy model to an endpoint
emb = emb_model.deploy(
  initial_instance_count=1,
  instance_type=instance_type,
)
```

SageMaker will now create our endpoint and deploy the model to it. This can take ~5 minutes. After our endpoint is deployed we can run inference on it. We will use the `predict` method from the predictor to run inference on our endpoint.

```python
data = {
  "inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
 
res = emb.predict(data=data)
 
 
# print some results
print(f"length of embeddings: {len(res[0])}")
print(f"first 10 elements of embeddings: {res[0][:10]}")
```

We trained our model with the Matryoshka loss, which means that the semantic meaning is front-loaded. To use the different Matryoshka dimensions, we need to truncate our embeddings manually. Below is an example of how you would truncate the embeddings to 256 dimensions, which is 1/3 of the original size. If we check our training logs, we can see that the NDCG metric for 768 dimensions is `0.823` and for 256 dimensions `0.818`, meaning we preserve > 99% accuracy.

```python
data = {
  "inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}
 
res = emb.predict(data=data)
 
# truncate embeddings to matryoshka dimensions
dim = 256
res = res[0][0:dim]
 
# print some results
print(f"length of embeddings: {len(res)}")
```

Awesome! 🚀 Now we can generate embeddings and integrate the endpoint into your RAG application.

To clean up, we can delete the model and endpoint.

```python
emb.delete_model()
emb.delete_endpoint()
```

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/hub-docs/tree/main/notebooks/sagemaker-sdk/fine-tune-embedding-models/sagemaker-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/examples/sagemaker-sdk-fine-tune-embedding-models.mdx" />

### Deploy Llama 3.3 70B on AWS Inferentia2
https://huggingface.co/docs/sagemaker/examples/sagemaker-sdk-deploy-llama-3-3-70b-inferentia2.md

# Deploy Llama 3.3 70B on AWS Inferentia2

In this tutorial you will learn how to deploy the [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) model on AWS Inferentia2 with Hugging Face Optimum on Amazon SageMaker. We are going to use the Hugging Face TGI Neuron Container, a purpose-built Inference Container to easily deploy LLMs on AWS Inferentia2, powered by [Text Generation Inference](https://huggingface.co/docs/text-generation-inference/index) and [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index).


We will cover how to:
1. [Setup development environment](#1-setup-development-environment)
2. [Retrieve the latest Hugging Face TGI Neuron DLC](#2-retrieve-the-latest-hugging-face-tgi-neuron-dlc)
3. [Deploy Llama 3.3 70B to inferentia2](#3-deploy-llama-33-70b-to-inferentia2)
4. [Clean up](#4-clean-up)

Let's get started! 🚀

[AWS Inferentia2 (Inf2)](https://aws.amazon.com/ec2/instance-types/inf2/) instances are purpose-built EC2 instances for deep learning (DL) inference workloads. Here are the different instance sizes of the Inferentia2 family.

| instance size | accelerators | Neuron Cores | accelerator memory (GB) | vCPU | CPU memory (GiB) | on-demand price ($/h) |
| ------------- | ------------ | ------------ | ----------------------- | ---- | ---------------- | --------------------- |
| inf2.xlarge   | 1            | 2            | 32                 | 4    | 16         | 0.76                  |
| inf2.8xlarge  | 1            | 2            | 32                 | 32   | 128        | 1.97                  |
| inf2.24xlarge | 6            | 12           | 192                | 96   | 384        | 6.49                  |
| inf2.48xlarge | 12           | 24           | 384                | 192  | 768        | 12.98                 |


## 1. Setup development environment

For this tutorial, we are going to use a Notebook Instance in Amazon SageMaker with the Python 3 (ipykernel) kernel and the `sagemaker` Python SDK to deploy Llama 3.3 70B to a SageMaker inference endpoint.

Make sure you have the latest version of the SageMaker SDK installed.

```python
!pip install sagemaker --upgrade --quiet
```

Then, instantiate the sagemaker role and session.

```python
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

## 2. Retrieve the new Hugging Face TGI Neuron DLC

The latest Hugging Face TGI Neuron DLCs can be used to run inference on AWS Inferentia2. You can use the `get_huggingface_llm_image_uri` method of the `sagemaker` SDK to retrieve the appropriate Hugging Face TGI Neuron DLC URI based on your desired `backend`, `session`, `region`, and `version`. You can find the latest available container versions [here](https://huggingface.co/docs/optimum-neuron/containers) in case they have not yet been added to the SageMaker SDK.

At the time of writing, the latest version of the container has not yet been added to the SageMaker SDK, so we will not use `get_huggingface_llm_image_uri` and instead specify the image URI manually.

```python
# pulled from https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/image_uri_config/huggingface-llm-neuronx.json
account_id_dict= {
    "ap-northeast-1": "763104351884",
    "ap-south-1": "763104351884",
    "ap-south-2": "772153158452",
    "ap-southeast-1": "763104351884",
    "ap-southeast-2": "763104351884",
    "ap-southeast-4": "457447274322",
    "ap-southeast-5": "550225433462",
    "ap-southeast-7": "590183813437",
    "cn-north-1": "727897471807",
    "cn-northwest-1": "727897471807",
    "eu-central-1": "763104351884",
    "eu-central-2": "380420809688",
    "eu-south-2": "503227376785",
    "eu-west-1": "763104351884",
    "eu-west-3": "763104351884",
    "il-central-1": "780543022126",
    "mx-central-1":"637423239942",
    "sa-east-1": "763104351884",
    "us-east-1": "763104351884",
    "us-east-2": "763104351884",
    "us-gov-east-1": "446045086412",
    "us-gov-west-1": "442386744353",
    "us-west-2": "763104351884",
    "ca-west-1": "204538143572"
}

region = boto3.Session().region_name
llm_image = f"{account_id_dict[region]}.dkr.ecr.{region}.amazonaws.com/huggingface-pytorch-tgi-inference:2.1.2-optimum0.0.28-neuronx-py310-ubuntu22.04"
```

## 3. Deploy Llama 3.3 70B to Inferentia2

At the time of writing, [AWS Inferentia2 does not support dynamic shapes for inference](https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/general/arch/neuron-features/dynamic-shapes.html#neuron-dynamic-shapes), which means that we need to specify our sequence length and batch size ahead of time.
To make it easier for customers to utilize the full power of Inferentia2, we created a [neuron model cache](https://huggingface.co/docs/optimum-neuron/guides/cache_system), which contains pre-compiled configurations for the most popular LLMs, including Llama 3.3 70B. 

This means we don't need to compile the model ourselves, but we can use the pre-compiled model from the cache. You can find compiled/cached configurations on the [Hugging Face Hub](https://huggingface.co/aws-neuron/optimum-neuron-cache/tree/main/inference-cache-config). If your desired configuration is not yet cached, you can compile it yourself using the [Optimum CLI](https://huggingface.co/docs/optimum-neuron/guides/export_model) or open a request at the [Cache repository](https://huggingface.co/aws-neuron/optimum-neuron-cache/discussions).

**Deploying Llama 3.3 70B to a SageMaker Endpoint**  

Before deploying the model to Amazon SageMaker, we must define the TGI Neuron endpoint configuration. We need to make sure the following additional parameters are defined: 

- `HF_NUM_CORES`: Number of Neuron Cores used for the compilation.
- `HF_BATCH_SIZE`: The batch size that was used to compile the model.
- `HF_SEQUENCE_LENGTH`: The sequence length that was used to compile the model.
- `HF_AUTO_CAST_TYPE`: The auto cast type that was used to compile the model.

We also need to define the traditional TGI parameters:

- `HF_MODEL_ID`: The Hugging Face model ID.
- `HF_TOKEN`: The Hugging Face API token to access gated models.
- `MAX_BATCH_SIZE`: The maximum batch size that the model can handle, equal to the batch size used for compilation.
- `MAX_INPUT_TOKENS`: The maximum input length that the model can handle.
- `MAX_TOTAL_TOKENS`: The maximum total tokens the model can generate, equal to the sequence length used for compilation.

Optionally, you can configure the endpoint to support chat templates:
- `MESSAGES_API_ENABLED`: Enable the Messages API, making the endpoint compatible with the OpenAI Chat Completion API.

**Select the right instance type**

Llama 3.3 70B is a large model and requires a lot of memory. We are going to use the `inf2.48xlarge` instance type, which has 192 vCPUs and 384 GB of accelerator memory. The `inf2.48xlarge` instance comes with 12 Inferentia2 accelerators that include 24 Neuron Cores. If you want to find the cached configurations for Llama 3.3 70B, you can find them [here](https://huggingface.co/aws-neuron/optimum-neuron-cache/blob/main/inference-cache-config/llama3-70b.json#L16). In our case we will use a batch size of 4 and a sequence length of 4096. 
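
If you prefer to inspect the cached configurations programmatically instead of browsing the JSON file on the Hub, a small sketch like the following downloads the file linked above with `huggingface_hub` and prints the pre-compiled combinations. It assumes the file keeps its current layout of model IDs mapped to lists of configurations.

```python
import json
from huggingface_hub import hf_hub_download

# file name taken from the cache-repository link above
config_path = hf_hub_download(
    repo_id="aws-neuron/optimum-neuron-cache",
    filename="inference-cache-config/llama3-70b.json",
)

with open(config_path) as f:
    cached_configs = json.load(f)

# print the pre-compiled batch size / sequence length / core combinations per model
for model_id, configs in cached_configs.items():
    print(model_id)
    for cfg in configs:
        print("  ", cfg)
```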


Before we can deploy Llama 3.3 70B to Inferentia2, we need to make sure we have the necessary permissions to access the model. You can request access to the model [here](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and create a User access token following this [guide](https://huggingface.co/docs/hub/en/security-tokens).


After that we can create our endpoint configuration and deploy the model to Amazon SageMaker. We will deploy the endpoint with the Messages API enabled, so that it is fully compatible with the OpenAI Chat Completion API.

```python
from sagemaker.huggingface import HuggingFaceModel

# sagemaker config
instance_type = "ml.inf2.48xlarge"
health_check_timeout=3600 # additional time to load the model
volume_size=512 # size in GB of the EBS volume

# Define Model and Endpoint configuration parameter
config = {
    "HF_MODEL_ID": "meta-llama/Meta-Llama-3-70B-Instruct",
    "HF_NUM_CORES": "24", # number of neuron cores
    "HF_AUTO_CAST_TYPE": "bf16",  # dtype of the model
    "MAX_BATCH_SIZE": "4", # max batch size for the model
    "MAX_INPUT_TOKENS": "4000", # max length of input text
    "MAX_TOTAL_TOKENS": "4096", # max length of generated text
    "MESSAGES_API_ENABLED": "true", # Enable the messages API
    "HF_TOKEN": "<REPLACE WITH YOUR TOKEN>",
}

assert config["HF_TOKEN"] != "<REPLACE WITH YOUR TOKEN>", "Please replace '<REPLACE WITH YOUR TOKEN>' with your Hugging Face Hub API token"


# create HuggingFaceModel with the image uri
llm_model = HuggingFaceModel(
  role=role,
  image_uri=llm_image,
  env=config
)
```

After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the `deploy` method. We will deploy the model with the `ml.inf2.48xlarge` instance type. TGI will automatically distribute and shard the model across all Inferentia devices.

```python
# deactivate warning since model is compiled
llm_model._is_compiled_model = True

llm = llm_model.deploy(
  initial_instance_count=1,
  instance_type=instance_type,
  container_startup_health_check_timeout=health_check_timeout,
  volume_size=volume_size
)
```

SageMaker will now create our endpoint and deploy the model to it. It takes around 30 minutes for deployment.

After our endpoint is deployed we can run inference on it. We will use the `predict` method from the `predictor` to run inference on our endpoint. 

The endpoint supports the Messages API, which is fully compatible with the OpenAI Chat Completion API. The Messages API allows us to interact with the model in a conversational way. We can define the role of the message and the content. The role can be either `system`, `assistant`, or `user`. The `system` role is used to provide context to the model and the `user` role is used to ask questions or provide input to the model.

Parameters can be defined in the `parameters` attribute of the payload. Check out the chat completion [documentation](https://platform.openai.com/docs/api-reference/chat/create) to find supported parameters.

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is deep learning?" }
  ]
}
```

```python
# Prompt to generate
messages=[
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is deep learning in one sentence?" }
]

# Generation arguments https://platform.openai.com/docs/api-reference/chat/create
parameters = {
    "max_tokens":100,
}
```

Okay, let's test it.

```python
chat = llm.predict({"messages": messages, **parameters})

print(chat["choices"][0]["message"]["content"].strip())
```

## 4. Clean up

To clean up, we can delete the model and endpoint.

```python
llm.delete_model()
llm.delete_endpoint()
```

---
<Tip>

📍 Find the complete example on GitHub [here](https://github.com/huggingface/hub-docs/tree/main/notebooks/sagemaker-sdk/deploy-llama-3-3-70b-inferentia2/sagemaker-notebook.ipynb)!

</Tip>

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/examples/sagemaker-sdk-deploy-llama-3-3-70b-inferentia2.mdx" />

### How to
https://huggingface.co/docs/sagemaker/tutorials/index.md

# How to

Take a look at our tutorials about using Hugging Face models on AWS.

## Sagemaker SDK

- [Sagemaker SDK Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/sagemaker-sdk/sagemaker-sdk-quickstart)
- [Train models with Sagemaker SDK](https://huggingface.co/docs/sagemaker/main/en/tutorials/sagemaker-sdk/training-sagemaker-sdk)
- [Deploy models with Sagemaker SDK](https://huggingface.co/docs/sagemaker/main/en/tutorials/sagemaker-sdk/deploy-sagemaker-sdk)

## Jumpstart

- [Sagemaker Jumpstart Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/jumpstart/jumpstart-quickstart)

## Bedrock

- [AWS Bedrock Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/bedrock/bedrock-quickstart)

## EC2, ECS and EKS

- [EC2, ECS and EKS Quickstart](https://huggingface.co/docs/sagemaker/main/en/tutorials/compute-services/compute-services-quickstart)


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/index.md" />

### Train and deploy a Hugging Face model on Amazon SageMaker with the SDK
https://huggingface.co/docs/sagemaker/tutorials/sagemaker-sdk/sagemaker-sdk-quickstart.md

# Train and deploy a Hugging Face model on Amazon SageMaker with the SDK

This getting started guide will show you how to quickly use Hugging Face on Amazon SageMaker with the SDK. Learn how to fine-tune and deploy a pretrained 🤗 Transformers model on SageMaker for a binary text classification task.

<iframe width="560" height="315" src="https://www.youtube.com/embed/pYqjCzoyWyo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

📓 Open the [sagemaker-notebook.ipynb file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/sagemaker-notebook.ipynb) to follow along!

## Installation and setup

Get started by installing the necessary Hugging Face libraries and SageMaker. You will also need to install [PyTorch](https://pytorch.org/get-started/locally/) if you don't already have it installed. If you run this example in SageMaker Studio, it is already installed in the notebook kernel!

```python
pip install "sagemaker>=2.140.0" "transformers==4.26.1" "datasets[s3]==2.10.1" --upgrade
```

If you want to run this example in [SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/studio.html), upgrade [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/) for the 🤗 Datasets library and restart the kernel:

```python
%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
IPython.Application.instance().kernel.do_shutdown(True)
```

Next, you should set up your environment: a SageMaker session and an S3 bucket. The S3 bucket will store data, models, and logs. You will need access to an [IAM execution role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) with the required permissions.

If you are planning on using SageMaker in a local environment, you need to provide the `role` yourself. Learn more about how to set this up [here](https://huggingface.co/docs/sagemaker/train#installation-and-setup).

⚠️ The execution role is only available when you run a notebook within SageMaker. If you try to run `get_execution_role` in a notebook not on SageMaker, you will get a region error.

```python
import sagemaker

sess = sagemaker.Session()
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    sagemaker_session_bucket = sess.default_bucket()

role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
```
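
If you are working in a local environment instead, a minimal sketch (assuming you have AWS credentials configured and an existing execution role, here named `sagemaker_execution_role` for illustration) looks up the role by name with `boto3` rather than calling `get_execution_role`:

```python
import boto3
import sagemaker

# look up the ARN of an existing SageMaker execution role by name
iam = boto3.client("iam")
role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

sess = sagemaker.Session()
```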

## Preprocess

The 🤗 Datasets library makes it easy to download and preprocess a dataset for training. Download and tokenize the [IMDb](https://huggingface.co/datasets/imdb) dataset:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# load dataset
train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# create tokenization function
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

# tokenize train and test datasets
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)

# set dataset format for PyTorch
train_dataset =  train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
```

## Upload dataset to S3 bucket

Next, upload the preprocessed dataset to your S3 session bucket with 🤗 Datasets S3 [filesystem](https://huggingface.co/docs/datasets/filesystems.html) implementation:

```python
# s3 key prefix for the data
s3_prefix = 'samples/datasets/imdb'

# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path)

# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path)
```

## Start a training job

Create a Hugging Face Estimator to handle end-to-end SageMaker training and deployment. The most important parameters to pay attention to are:

* `entry_point` refers to the fine-tuning script which you can find in [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py).
* `instance_type` refers to the SageMaker instance that will be launched. Take a look [here](https://aws.amazon.com/sagemaker/pricing/) for a complete list of instance types.
* `hyperparameters` refers to the training hyperparameters the model will be fine-tuned with.

```python
from sagemaker.huggingface import HuggingFace

hyperparameters={
    "epochs": 1,                                       # number of training epochs
    "train_batch_size": 32,                            # training batch size
    "model_name":"distilbert/distilbert-base-uncased"  # name of pretrained model
}

huggingface_estimator = HuggingFace(
    entry_point="train.py",                 # fine-tuning script to use in training job
    source_dir="./scripts",                 # directory where fine-tuning script is stored
    instance_type="ml.p3.2xlarge",          # instance type
    instance_count=1,                       # number of instances
    role=role,                              # IAM role used in training job to access AWS resources (S3)
    transformers_version="4.36",             # Transformers version
    pytorch_version="2.1.0",                  # PyTorch version
    py_version="py310",                      # Python version
    hyperparameters=hyperparameters         # hyperparameters to use in training job
)
```

Begin training with one line of code:

```python
huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})
```

## Deploy model

Once the training job is complete, deploy your fine-tuned model by calling `deploy()` with the number of instances and instance type:

```python
predictor = huggingface_estimator.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
```

Call `predict()` on your data:

```python
sentiment_input = {"inputs": "It feels like a curtain closing...there was an elegance in the way they moved toward conclusion. No fan is going to watch and feel short-changed."}

predictor.predict(sentiment_input)
```

After running your request, delete the endpoint:

```python
predictor.delete_endpoint()
```

## What's next?

Congratulations, you've just fine-tuned and deployed a pretrained 🤗 Transformers model on SageMaker for binary text classification! 🎉


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/sagemaker-sdk/sagemaker-sdk-quickstart.md" />

### Deploy models to Amazon SageMaker
https://huggingface.co/docs/sagemaker/tutorials/sagemaker-sdk/deploy-sagemaker-sdk.md

# Deploy models to Amazon SageMaker

Deploying a 🤗 Transformers model in SageMaker for inference is as easy as:

```python
from sagemaker.huggingface import HuggingFaceModel

# create Hugging Face Model Class and deploy it as SageMaker endpoint
huggingface_model = HuggingFaceModel(...).deploy()
```

This guide will show you how to deploy models with zero-code using the [Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit). The Inference Toolkit builds on top of the [`pipeline` feature](https://huggingface.co/docs/transformers/main_classes/pipelines) from 🤗 Transformers. Learn how to:

- [Install and setup the Inference Toolkit](#installation-and-setup).
- [Deploy a 🤗 Transformers model trained in SageMaker](#deploy-a-transformer-model-trained-in-sagemaker).
- [Deploy a 🤗 Transformers model from the Hugging Face Model Hub](#deploy-a-model-from-the-hub).
- [Run a Batch Transform Job using 🤗 Transformers and Amazon SageMaker](#run-batch-transform-with-transformers-and-sagemaker).
- [Create a custom inference module](#user-defined-code-and-modules).

## Installation and setup

Before deploying a 🤗 Transformers model to SageMaker, you need to sign up for an AWS account. If you don't have an AWS account yet, learn more [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html).

Once you have an AWS account, get started using one of the following:

- [SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html)
- [SageMaker notebook instance](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html)
- Local environment

To deploy from a local environment, you need to set up an appropriate [IAM role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).

Upgrade to the latest `sagemaker` version.

```bash
pip install sagemaker --upgrade
```

**SageMaker environment**

Setup your SageMaker environment as shown below:

```python
import sagemaker
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
```

_Note: The execution role is only available when running a notebook within SageMaker. If you run `get_execution_role` in a notebook not on SageMaker, expect a `region` error._

**Local environment**

Setup your local environment as shown below:

```python
import sagemaker
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='role-name-of-your-iam-role-with-right-permissions')['Role']['Arn']
sess = sagemaker.Session()
```

## Deploy a 🤗 Transformers model trained in SageMaker

<iframe width="700" height="394" src="https://www.youtube.com/embed/pfBGgSGnYLs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

There are two ways to deploy your Hugging Face model trained in SageMaker:

- Deploy it after your training has finished. 
- Deploy your saved model at a later time from S3 with the `model_data`.

📓 Open the [deploy_transformer_model_from_s3.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) for an example of how to deploy a model from S3 to SageMaker for inference.

### Deploy after training

To deploy your model directly after training, ensure all required files are saved in your training script, including the tokenizer and the model.

If you use the Hugging Face `Trainer`, you can pass your tokenizer as an argument to the `Trainer`. It will be automatically saved when you call `trainer.save_model()`.

```python
from sagemaker.huggingface import HuggingFace

############ pseudo code start ############

# create Hugging Face Estimator for training
huggingface_estimator = HuggingFace(....)

# start the train job with our uploaded datasets as input
huggingface_estimator.fit(...)

############ pseudo code end ############

# deploy model to SageMaker Inference
predictor = huggingface_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# example request: you always need to define "inputs"
data = {
   "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 from landline. Delivery within 28 days."
}

# request
predictor.predict(data)
```

After you run your request you can delete the endpoint as shown:

```python
# delete endpoint
predictor.delete_endpoint()
```

### Deploy with `model_data`

If you've already trained your model and want to deploy it at a later time, use the `model_data` argument to specify the location of your tokenizer and model weights.

```python
from sagemaker.huggingface.model import HuggingFaceModel

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
   model_data="s3://models/my-bert-model/model.tar.gz",  # path to your trained SageMaker model
   role=role,                                            # IAM role with permissions to create an endpoint
   transformers_version="4.26",                           # Transformers version used
   pytorch_version="1.13",                                # PyTorch version used
   py_version='py39',                                    # Python version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
   initial_instance_count=1,
   instance_type="ml.m5.xlarge"
)

# example request: you always need to define "inputs"
data = {
   "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 from landline. Delivery within 28 days."
}

# request
predictor.predict(data)
```

After you run your request, you can delete the endpoint with:

```python
# delete endpoint
predictor.delete_endpoint()
```

### Create a model artifact for deployment

For later deployment, you can create a `model.tar.gz` file that contains all the required files, such as:

- `pytorch_model.bin`
- `tf_model.h5`
- `tokenizer.json`
- `tokenizer_config.json`

For example, your file should look like this:

```bash
model.tar.gz/
|- pytorch_model.bin
|- vocab.txt
|- tokenizer_config.json
|- config.json
|- special_tokens_map.json
```

Create your own `model.tar.gz` from a model from the 🤗 Hub:

1. Download a model:

```bash
git lfs install
git clone git@hf.co:{repository}
```

2. Create a `tar` file:

```bash
cd {repository}
tar zcvf model.tar.gz *
```

3. Upload `model.tar.gz` to S3:

```bash
aws s3 cp model.tar.gz <s3://{my-s3-path}>
```

Now you can provide the S3 URI to the `model_data` argument to deploy your model later.

## Deploy a model from the 🤗 Hub

<iframe width="700" height="394" src="https://www.youtube.com/embed/l9QZuazbzWM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

To deploy a model directly from the 🤗 Hub to SageMaker, define two environment variables when you create a `HuggingFaceModel`:

- `HF_MODEL_ID` defines the model ID, which is automatically loaded from [huggingface.co/models](http://huggingface.co/models) when you create a SageMaker endpoint. Access 10,000+ models on the 🤗 Hub through this environment variable.
- `HF_TASK` defines the task for the 🤗 Transformers `pipeline`. A complete list of tasks can be found [here](https://huggingface.co/docs/transformers/main_classes/pipelines).

> ⚠️ Pipelines are not optimized for parallelism (multi-threading) and tend to consume a lot of RAM. For example, on a GPU-based instance, the pipeline operates on a single vCPU. When this vCPU becomes saturated with the preprocessing of inference requests, it can create a bottleneck, preventing the GPU from being fully utilized for model inference. Learn more [here](https://huggingface.co/docs/transformers/en/pipeline_webserver#using-pipelines-for-a-webserver).

```python
from sagemaker.huggingface.model import HuggingFaceModel

# Hub model configuration <https://huggingface.co/models>
hub = {
  'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models
  'HF_TASK':'question-answering'                           # NLP task you want to use for predictions
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
   env=hub,                                                # configuration for loading model from Hub
   role=role,                                              # IAM role with permissions to create an endpoint
   transformers_version="4.26",                             # Transformers version used
   pytorch_version="1.13",                                  # PyTorch version used
   py_version='py39',                                      # Python version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
   initial_instance_count=1,
   instance_type="ml.m5.xlarge"
)

# example request: you always need to define "inputs"
data = {
   "inputs": {
      "question": "What is used for inference?",
      "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
   }
}

# request
predictor.predict(data)
```

After you run your request, you can delete the endpoint with:

```python
# delete endpoint
predictor.delete_endpoint()
```

📓 Open the [deploy_transformer_model_from_hf_hub.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) for an example of how to deploy a model from the 🤗 Hub to SageMaker for inference.

## Run batch transform with 🤗 Transformers and SageMaker

<iframe width="700" height="394" src="https://www.youtube.com/embed/lnTixz0tUBg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

After training a model, you can use [SageMaker batch transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) to perform inference with the model. Batch transform accepts your inference data as an S3 URI  and then SageMaker will take care of downloading the data, running the prediction, and uploading the results to S3. For more details about batch transform, take a look [here](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html).

⚠️ The Hugging Face Inference DLC currently only supports `.jsonl` for batch transform due to the complex structure of textual data.

_Note: Make sure your `inputs` fit the `max_length` of the model during preprocessing._

If you trained a model using the Hugging Face Estimator, call the `transformer()` method to create a transform job for a model based on the training job (see [here](https://sagemaker.readthedocs.io/en/stable/overview.html#sagemaker-batch-transform) for more details):

```python
batch_job = huggingface_estimator.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord')


batch_job.transform(
    data='s3://s3-uri-to-batch-data',
    content_type='application/json',    
    split_type='Line')
```

If you want to run your batch transform job later or with a model from the 🤗 Hub, create a `HuggingFaceModel` instance and then call the `transformer()` method:

```python
from sagemaker.huggingface.model import HuggingFaceModel

# Hub model configuration <https://huggingface.co/models>
hub = {
	'HF_MODEL_ID':'distilbert/distilbert-base-uncased-finetuned-sst-2-english',
	'HF_TASK':'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
   env=hub,                                                # configuration for loading model from Hub
   role=role,                                              # IAM role with permissions to create an endpoint
   transformers_version="4.26",                             # Transformers version used
   pytorch_version="1.13",                                  # PyTorch version used
   py_version='py39',                                      # Python version used
)

# create transformer to run a batch job
batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord'
)

# starts batch transform job and uses S3 data as input
batch_job.transform(
    data='s3://sagemaker-s3-demo-test/samples/input.jsonl',
    content_type='application/json',    
    split_type='Line'
)
```

The `input.jsonl` looks like this:

```jsonl
{"inputs":"this movie is terrible"}
{"inputs":"this movie is amazing"}
{"inputs":"SageMaker is pretty cool"}
{"inputs":"SageMaker is pretty cool"}
{"inputs":"this movie is terrible"}
{"inputs":"this movie is amazing"}
```
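
If you still need to produce such a file, a small sketch like the following (the example texts and S3 prefix are illustrative) writes the JSON Lines file locally and uploads it with `S3Uploader` so it can be passed to `batch_job.transform(data=...)`:

```python
import json
from sagemaker.s3 import S3Uploader

texts = [
    "this movie is terrible",
    "this movie is amazing",
    "SageMaker is pretty cool",
]

# one JSON object per line, each with the "inputs" key expected by the Inference DLC
with open("input.jsonl", "w") as f:
    for text in texts:
        f.write(json.dumps({"inputs": text}) + "\n")

# upload the file to the session bucket; the returned S3 URI can be passed to transform()
input_s3_uri = S3Uploader.upload(
    local_path="input.jsonl",
    desired_s3_uri=f"s3://{sess.default_bucket()}/batch-transform/input",
)
print(input_s3_uri)
```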

📓 Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb) for an example of how to run a batch transform job for inference.

## Deploy an LLM to SageMaker using TGI

If you are interested in using a high-performance serving container for LLMs, you can use the Hugging Face TGI container. This utilizes the [Text Generation Inference](https://github.com/huggingface/text-generation-inference) library. A list of compatible models can be found [here](https://huggingface.co/docs/text-generation-inference/supported_models#supported-models).

First, make sure that the latest version of SageMaker SDK is installed:

```bash
pip install "sagemaker>=2.231.0"
```

Then, we import the SageMaker Python SDK and instantiate a sagemaker_session to find the current region and execution role.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
import time

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()
```

Next, we retrieve the LLM image URI. We use the helper function `get_huggingface_llm_image_uri()` to generate the appropriate image URI for Hugging Face Large Language Model (LLM) inference. The function takes a required `backend` parameter and several optional parameters. The `backend` specifies the type of backend to use for the model: `"huggingface"` refers to the Hugging Face TGI backend.

```python
image_uri = get_huggingface_llm_image_uri(
  backend="huggingface",
  region=region
)
```

Now that we have the image URI, the next step is to configure the model object. We specify a unique name, the `image_uri` for the managed TGI container, and the execution role for the endpoint. Additionally, we specify a number of environment variables, including `HF_MODEL_ID`, which corresponds to the model from the Hugging Face Hub that will be deployed, and `HUGGING_FACE_HUB_TOKEN`, which is required to access gated models such as Llama 3.1.

You should also define `SM_NUM_GPUS`, which specifies the tensor parallelism degree of the model. Tensor parallelism can be used to split the model across multiple GPUs, which is necessary when working with LLMs that are too big for a single GPU. To learn more about tensor parallelism with inference, see our previous blog post. Here, you should set `SM_NUM_GPUS` to the number of available GPUs on your selected instance type. In this tutorial, we set `SM_NUM_GPUS` to 1 because our selected instance type `ml.g5.2xlarge` has a single GPU.

Note that you can optionally reduce the memory and computational footprint of the model by setting the `HF_MODEL_QUANTIZE` environment variable to `true`, but this lower weight precision could affect the quality of the output for some models.

```python
model_name = "llama-3-1-8b-instruct" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

hub = {
    'HF_MODEL_ID':'meta-llama/Llama-3.1-8B-Instruct',
    'SM_NUM_GPUS':'1',
    'HUGGING_FACE_HUB_TOKEN': '<REPLACE WITH YOUR TOKEN>',
}

assert hub['HUGGING_FACE_HUB_TOKEN'] != '<REPLACE WITH YOUR TOKEN>', "You have to provide a token."


model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri
)
```

Next, we invoke the deploy method to deploy the model.

```python
predictor = model.deploy(
  initial_instance_count=1,
  instance_type="ml.g5.2xlarge",
  endpoint_name=model_name
)
```

Once the model is deployed, we can invoke it to generate text. We pass an input prompt and run the predict method to generate a text response from the LLM running in the TGI container.

```python
input_data = {
  "inputs": "The diamondback terrapin was the first reptile to",
  "parameters": {
    "do_sample": True,
    "max_new_tokens": 100,
    "temperature": 0.7,
    "watermark": True
  }
}

predictor.predict(input_data)
```

We receive the following auto-generated text response:
```python
[{'generated_text': 'The diamondback terrapin was the first reptile to make the list, followed by the American alligator, the American crocodile, and the American box turtle. The polecat, a ferret-like animal, and the skunk rounded out the list, both having gained their slots because they have proven to be particularly dangerous to humans.\n\nCalifornians also seemed to appreciate the new list, judging by the comments left after the election.\n\n“This is fantastic,” one commenter declared.\n\n“California is a very'}]
```

Once we are done experimenting, we delete the endpoint and the model resources.

```python
predictor.delete_model()
predictor.delete_endpoint()
```

## User defined code and modules

The Hugging Face Inference Toolkit allows the user to override the default methods of the `HuggingFaceHandlerService`. You will need to create a folder named `code/` with an `inference.py` file in it. See [here](#create-a-model-artifact-for-deployment) for more details on how to archive your model artifacts. For example:  

```bash
model.tar.gz/
|- pytorch_model.bin
|- ....
|- code/
  |- inference.py
  |- requirements.txt 
```

The `inference.py` file contains your custom inference module, and the `requirements.txt` file contains additional dependencies that should be added. The custom module can override the following methods:  

* `model_fn(model_dir)` overrides the default method for loading a model. The return value `model` will be used in `predict` for predictions. `model_fn` receives the argument `model_dir`, the path to your unzipped `model.tar.gz`.
* `transform_fn(model, data, content_type, accept_type)` overrides the default transform function with your custom implementation. You will need to implement your own `preprocess`, `predict` and `postprocess` steps in the `transform_fn`. This method can't be combined with `input_fn`, `predict_fn` or `output_fn` mentioned below.
* `input_fn(input_data, content_type)` overrides the default method for preprocessing. The return value `data` will be used in `predict` for predictions. The inputs are:
  - `input_data` is the raw body of your request.
  - `content_type` is the content type from the request header.
* `predict_fn(processed_data, model)` overrides the default method for predictions. The return value `predictions` will be used in `postprocess`. The input is `processed_data`, the result from `preprocess`.
* `output_fn(prediction, accept)` overrides the default method for postprocessing. The return value `result` will be the response to your request (e.g. `JSON`). The inputs are:
  - `prediction` is the result from `predict`.
  - `accept` is the accept type from the HTTP request header, e.g. `application/json`.

Here is an example of a custom inference module with `model_fn`, `input_fn`, `predict_fn`, and `output_fn`:  

```python
from sagemaker_huggingface_inference_toolkit import decoder_encoder

def model_fn(model_dir):
    # implement custom code to load the model
    loaded_model = ...
    
    return loaded_model 

def input_fn(input_data, content_type):
    # decode the input data  (e.g. JSON string -> dict)
    data = decoder_encoder.decode(input_data, content_type)
    return data

def predict_fn(data, model):
    # call your custom model with the data
    predictions = model(data, ...)
    return predictions

def output_fn(prediction, accept):
    # convert the model output to the desired output format (e.g. dict -> JSON string)
    response = decoder_encoder.encode(prediction, accept)
    return response
```

Customize your inference module with only `model_fn` and `transform_fn`:   

```python
from sagemaker_huggingface_inference_toolkit import decoder_encoder

def model_fn(model_dir):
    # implement custom code to load the model
    loaded_model = ...
    
    return loaded_model 

def transform_fn(model, input_data, content_type, accept):
    # decode the input data (e.g. JSON string -> dict)
    data = decoder_encoder.decode(input_data, content_type)

    # call your custom model with the data
    outputs = model(data, ...)

    # convert the model output to the desired output format (e.g. dict -> JSON string)
    response = decoder_encoder.encode(outputs, accept)

    return response
```


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/sagemaker-sdk/deploy-sagemaker-sdk.md" />

### Run training on Amazon SageMaker
https://huggingface.co/docs/sagemaker/tutorials/sagemaker-sdk/training-sagemaker-sdk.md

# Run training on Amazon SageMaker

<iframe width="700" height="394" src="https://www.youtube.com/embed/ok3hetb42gU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>


This guide will show you how to train a 🤗 Transformers model with the `HuggingFace` SageMaker Python SDK. Learn how to:

- [Install and setup your training environment](#installation-and-setup).
- [Prepare a training script](#prepare-a-transformers-fine-tuning-script).
- [Create a Hugging Face Estimator](#create-a-hugging-face-estimator).
- [Run training with the `fit` method](#execute-training).
- [Access your trained model](#access-trained-model).
- [Perform distributed training](#distributed-training).
- [Create a spot instance](#spot-instances).
- [Load a training script from a GitHub repository](#git-repository).
- [Collect training metrics](#sagemaker-metrics).

## Installation and setup

Before you can train a 🤗 Transformers model with SageMaker, you need to sign up for an AWS account. If you don't have an AWS account yet, learn more [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-set-up.html).

Once you have an AWS account, get started using one of the following:

- [SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html)
- [SageMaker notebook instance](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html)
- Local environment

To start training locally, you need to set up an appropriate [IAM role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).

Upgrade to the latest `sagemaker` version:

```bash
pip install sagemaker --upgrade
```

**SageMaker environment**

Setup your SageMaker environment as shown below:

```python
import sagemaker
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
```

_Note: The execution role is only available when running a notebook within SageMaker. If you run `get_execution_role` in a notebook not on SageMaker, expect a `region` error._

**Local environment**

Setup your local environment as shown below:

```python
import sagemaker
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='role-name-of-your-iam-role-with-right-permissions')['Role']['Arn']
sess = sagemaker.Session()
```

## Prepare a 🤗 Transformers fine-tuning script

Our training script is very similar to a training script you might run outside of SageMaker. However, you can access useful properties about the training environment through various environment variables (see [here](https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md) for a complete list), such as:

- `SM_MODEL_DIR`: A string representing the path to which the training job writes the model artifacts. After training, artifacts in this directory are uploaded to S3 for model hosting. `SM_MODEL_DIR` is always set to `/opt/ml/model`.

- `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.

- `SM_CHANNEL_XXXX:` A string representing the path to the directory that contains the input data for the specified channel. For example, when you specify `train` and `test` in the Hugging Face Estimator `fit` method, the environment variables are set to `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST`.

The `hyperparameters` defined in the [Hugging Face Estimator](#create-a-hugging-face-estimator) are passed as named arguments and processed by `ArgumentParser()`.

```python
import transformers
import datasets
import argparse
import os

if __name__ == "__main__":

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script
    parser.add_argument("--epochs", type=int, default=3)
    parser.add_argument("--per_device_train_batch_size", type=int, default=32)
    parser.add_argument("--model_name_or_path", type=str)

    # data, model, and output directories
    parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])
    parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"])
    parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"])
```

_Note that SageMaker doesn’t support argparse actions. For example, if you want to use a boolean hyperparameter, specify `type` as `bool` in your script and provide an explicit `True` or `False` value._

Look at the [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py) for a complete example of a 🤗 Transformers training script.

## Training Output Management

If `output_dir` in the `TrainingArguments` is set to `/opt/ml/model`, the Trainer saves all training artifacts, including logs, checkpoints, and models, to that directory. Amazon SageMaker archives the whole `/opt/ml/model` directory as `model.tar.gz` and uploads it to Amazon S3 at the end of the training job. Depending on your hyperparameters and `TrainingArguments`, this can lead to a large artifact (> 5GB), which can slow down deployment for Amazon SageMaker Inference.
You can control how checkpoints, logs, and artifacts are saved by customizing the [TrainingArguments](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments). For example, by providing `save_total_limit` as a `TrainingArgument`, you can limit the total number of checkpoints that are kept: older checkpoints in `output_dir` are deleted when new ones are saved and the maximum limit is reached.

In addition to the options mentioned above, you can also save training artifacts during the training session. Amazon SageMaker supports [Checkpointing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html), which allows you to continuously save your artifacts during training to Amazon S3 rather than only at the end of your training. To enable [Checkpointing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html), you need to provide the `checkpoint_s3_uri` parameter pointing to an Amazon S3 location in the `HuggingFace` estimator and set `output_dir` to `/opt/ml/checkpoints`.
_Note: If you set `output_dir` to `/opt/ml/checkpoints`, make sure to call `trainer.save_model("/opt/ml/model")` or `model.save_pretrained("/opt/ml/model")`/`tokenizer.save_pretrained("/opt/ml/model")` at the end of your training to be able to deploy your model seamlessly to Amazon SageMaker for Inference._
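
To make this concrete, here is a minimal sketch of the `TrainingArguments` side of such a setup inside your training script; the values are illustrative and assume the estimator was created with `checkpoint_s3_uri` as described above.

```python
from transformers import TrainingArguments

# write checkpoints to the directory that SageMaker Checkpointing syncs to S3,
# and keep only the two most recent checkpoints to limit the artifact size
training_args = TrainingArguments(
    output_dir="/opt/ml/checkpoints",
    save_strategy="epoch",
    save_total_limit=2,
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

# ... build your Trainer with training_args and call trainer.train() ...

# at the end of training, save the final model and tokenizer to /opt/ml/model so
# that SageMaker archives them as model.tar.gz for deployment:
# trainer.save_model("/opt/ml/model")
# tokenizer.save_pretrained("/opt/ml/model")
```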

## Create a Hugging Face Estimator

Run 🤗 Transformers training scripts on SageMaker by creating a [Hugging Face Estimator](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html#huggingface-estimator). The Estimator handles end-to-end SageMaker training. There are several parameters you should define in the Estimator:

1. `entry_point` specifies which fine-tuning script to use.
2. `instance_type` specifies an Amazon instance to launch. Refer [here](https://aws.amazon.com/sagemaker/pricing/) for a complete list of instance types.
3. `hyperparameters` specifies training hyperparameters. View additional available hyperparameters in [train.py file](https://github.com/huggingface/notebooks/blob/main/sagemaker/01_getting_started_pytorch/scripts/train.py).

The following code sample shows how to train with a custom script `train.py` with three hyperparameters (`epochs`, `per_device_train_batch_size`, and `model_name_or_path`):

```python
from sagemaker.huggingface import HuggingFace


# hyperparameters which are passed to the training job
hyperparameters={'epochs': 1,
                 'per_device_train_batch_size': 32,
                 'model_name_or_path': 'distilbert-base-uncased'
                 }

# create the Estimator
huggingface_estimator = HuggingFace(
        entry_point='train.py',
        source_dir='./scripts',
        instance_type='ml.g6.12xlarge',
        instance_count=1,
        role=role,
        transformers_version='4.26',
        pytorch_version='1.13',
        py_version='py39',
        hyperparameters = hyperparameters
)
```

If you are running a `TrainingJob` locally, define `instance_type='local'` or `instance_type='local_gpu'` for GPU usage. Note that this will not work with SageMaker Studio.

## Execute training

Start your `TrainingJob` by calling `fit` on a Hugging Face Estimator. Specify your input training data in `fit`. The input training data can be a:

- S3 URI such as `s3://my-bucket/my-training-data`.
- `FileSystemInput` for Amazon Elastic File System or FSx for Lustre, as sketched below. See [here](https://sagemaker.readthedocs.io/en/stable/overview.html?highlight=FileSystemInput#use-file-systems-as-training-inputs) for more details about using these file systems as input.
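
Here is a sketch of the file-system case, with an illustrative FSx for Lustre file-system ID and directory path:

```python
from sagemaker.inputs import FileSystemInput

# illustrative FSx for Lustre file system; replace the ID and path with your own
train_fs = FileSystemInput(
    file_system_id="fs-0123456789abcdef0",
    file_system_type="FSxLustre",
    directory_path="/fsx/imdb/train",
    file_system_access_mode="ro",
)

# pass it to fit just like an S3 URI, e.g. huggingface_estimator.fit({"train": train_fs})
```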

Call `fit` to begin training:

```python
huggingface_estimator.fit(
  {'train': 's3://sagemaker-us-east-1-558105141721/samples/datasets/imdb/train',
   'test': 's3://sagemaker-us-east-1-558105141721/samples/datasets/imdb/test'}
)
```

SageMaker starts and manages all the required EC2 instances and initiates the `TrainingJob` by running:

```bash
/opt/conda/bin/python train.py --epochs 1 --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32
```

## Access trained model

Once training is complete, you can access your model through the [AWS console](https://console.aws.amazon.com/console/home?nc2=h_ct&src=header-signin) or download it directly from S3.

```python
from sagemaker.s3 import S3Downloader

S3Downloader.download(
    s3_uri=huggingface_estimator.model_data, # S3 URI where the trained model is located
    local_path='.',                          # local path where *.tar.gz is saved
    sagemaker_session=sess                   # SageMaker session used for training the model
)
```

## Distributed training

SageMaker provides two strategies for distributed training: data parallelism and model parallelism. Data parallelism splits a training set across several GPUs, while model parallelism splits a model across several GPUs.

### Data parallelism

The Hugging Face [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports SageMaker's data parallelism library. If your training script uses the Trainer API, you only need to define the distribution parameter in the Hugging Face Estimator:

```python
# configuration for running training on smdistributed data parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}

# create the Estimator
huggingface_estimator = HuggingFace(
        entry_point='train.py',
        source_dir='./scripts',
        instance_type='ml.p3dn.24xlarge',
        instance_count=2,
        role=role,
        transformers_version='4.26.0',
        pytorch_version='1.13.1',
        py_version='py39',
        hyperparameters = hyperparameters,
        distribution = distribution
)
```

📓 Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) for an example of how to run the data parallelism library with TensorFlow.

### Model parallelism

The Hugging Face [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) also supports SageMaker's model parallelism library. If your training script uses the Trainer API, you only need to define the distribution parameter in the Hugging Face Estimator (see [here](https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_general.html?highlight=modelparallel#required-sagemaker-python-sdk-parameters) for more detailed information about using model parallelism):

```python
# configuration for running training on smdistributed model parallel
mpi_options = {
    "enabled" : True,
    "processes_per_host" : 8
}

smp_options = {
    "enabled":True,
    "parameters": {
        "microbatches": 4,
        "placement_strategy": "spread",
        "pipeline": "interleaved",
        "optimize": "speed",
        "partitions": 4,
        "ddp": True,
    }
}

distribution={
    "smdistributed": {"modelparallel": smp_options},
    "mpi": mpi_options
}

 # create the Estimator
huggingface_estimator = HuggingFace(
        entry_point='train.py',
        source_dir='./scripts',
        instance_type='ml.p3dn.24xlarge',
        instance_count=2,
        role=role,
        transformers_version='4.26.0',
        pytorch_version='1.13.1',
        py_version='py39',
        hyperparameters = hyperparameters,
        distribution = distribution
)
```

📓 Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) for an example of how to run the model parallelism library.

## Spot instances

The Hugging Face extension for the SageMaker Python SDK means we can benefit from [fully-managed EC2 spot instances](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html). This can help you save up to 90% of training costs!

_Note: Unless your training job completes quickly, we recommend you use [checkpointing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html) with managed spot training. In this case, you need to define the `checkpoint_s3_uri`._

Set `use_spot_instances=True` and define your `max_wait` and `max_run` time in the Estimator to use spot instances:

```python
# hyperparameters which are passed to the training job
hyperparameters={'epochs': 1,
                 'train_batch_size': 32,
                 'model_name':'distilbert-base-uncased',
                 'output_dir':'/opt/ml/checkpoints'
                 }

# create the Estimator
huggingface_estimator = HuggingFace(
        entry_point='train.py',
        source_dir='./scripts',
        instance_type='ml.g6.12xlarge',
        instance_count=1,
        checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints',
        use_spot_instances=True,
        # max_wait should be equal to or greater than max_run in seconds
        max_wait=3600,
        max_run=1000,
        role=role,
        transformers_version='4.26',
        pytorch_version='1.13',
        py_version='py39',
        hyperparameters = hyperparameters
)

# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%
```

📓 Open the [sagemaker-notebook.ipynb notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) for an example of how to use spot instances.

## Git repository

The Hugging Face Estimator can load a training script [stored in a GitHub repository](https://sagemaker.readthedocs.io/en/stable/overview.html#use-scripts-stored-in-a-git-repository). Provide the relative path to the training script in `entry_point` and the relative path to the directory in `source_dir`.

If you are using `git_config` to run the [🤗 Transformers example scripts](https://github.com/huggingface/transformers/tree/main/examples), you need to configure the correct `'branch'` in `git_config` to match your `transformers_version` (e.g. if you use `transformers_version='4.4.2'`, you have to use `'branch': 'v4.4.2'`).

_Tip: Save your model to S3 by setting `output_dir=/opt/ml/model` in the hyperparameter of your training script._

```python
# configure git settings
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'} # v4.4.2 refers to the transformers_version you use in the estimator

 # create the Estimator
huggingface_estimator = HuggingFace(
        entry_point='run_glue.py',
        source_dir='./examples/pytorch/text-classification',
        git_config=git_config,
        instance_type='ml.g6.12xlarge',
        instance_count=1,
        role=role,
        transformers_version='4.26',
        pytorch_version='1.13',
        py_version='py39',
        hyperparameters=hyperparameters
)
```
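
For reference, the `hyperparameters` passed to the estimator above must match the arguments of `run_glue.py` rather than the custom `train.py` used earlier; a sketch with illustrative values could look like this (note that `output_dir` is set to `/opt/ml/model`, following the tip above):

```python
# illustrative hyperparameters for the run_glue.py example script
hyperparameters = {
    "model_name_or_path": "distilbert-base-uncased",
    "task_name": "sst2",
    "do_train": True,
    "do_eval": True,
    "per_device_train_batch_size": 32,
    "num_train_epochs": 1,
    # save the final model to /opt/ml/model so SageMaker archives and uploads it to S3
    "output_dir": "/opt/ml/model",
}
```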

## SageMaker metrics

[SageMaker metrics](https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html#define-train-metrics) automatically parses training job logs for metrics and sends them to CloudWatch. If you want SageMaker to parse the logs, you must specify the metric's name and a regular expression for SageMaker to use to find the metric.

```python
# define metrics definitions
metric_definitions = [
    {"Name": "train_runtime", "Regex": "train_runtime.*=\D*(.*?)$"},
    {"Name": "eval_accuracy", "Regex": "eval_accuracy.*=\D*(.*?)$"},
    {"Name": "eval_loss", "Regex": "eval_loss.*=\D*(.*?)$"},
]

# create the Estimator
huggingface_estimator = HuggingFace(
        entry_point='train.py',
        source_dir='./scripts',
        instance_type='ml.g6.12xlarge',
        instance_count=1,
        role=role,
        transformers_version='4.26',
        pytorch_version='1.13',
        py_version='py39',
        metric_definitions=metric_definitions,
        hyperparameters = hyperparameters)
```
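
Once the training job has finished, you can read the captured metrics back from CloudWatch with the SDK's `TrainingJobAnalytics` helper, for example as a pandas DataFrame (a minimal sketch; it assumes the estimator above has already run `fit()`):

```python
from sagemaker import TrainingJobAnalytics

# captured metrics for the last job started by the estimator above
df = TrainingJobAnalytics(
    training_job_name=huggingface_estimator.latest_training_job.name,
    metric_names=["train_runtime", "eval_accuracy", "eval_loss"],
).dataframe()

print(df.head())
```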

📓 Open the [notebook](https://github.com/huggingface/notebooks/blob/main/sagemaker/06_sagemaker_metrics/sagemaker-notebook.ipynb) for an example of how to capture metrics in SageMaker.


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/sagemaker-sdk/training-sagemaker-sdk.md" />

### Quickstart - Deploy Hugging Face Models with SageMaker Jumpstart
https://huggingface.co/docs/sagemaker/tutorials/jumpstart/jumpstart-quickstart.md

# Quickstart - Deploy Hugging Face Models with SageMaker Jumpstart

## Why use SageMaker JumpStart for Hugging Face models?

Amazon SageMaker JumpStart lets you deploy the most popular open Hugging Face models with one click, inside your own AWS account. JumpStart offers a curated [selection](https://aws.amazon.com/sagemaker-ai/jumpstart/getting-started/?sagemaker-jumpstart-cards.sort-by=item.additionalFields.model-name&sagemaker-jumpstart-cards.sort-order=asc&awsf.sagemaker-jumpstart-filter-product-type=*all&awsf.sagemaker-jumpstart-filter-text=*all&awsf.sagemaker-jumpstart-filter-vision=*all&awsf.sagemaker-jumpstart-filter-tabular=*all&awsf.sagemaker-jumpstart-filter-audio-tasks=*all&awsf.sagemaker-jumpstart-filter-multimodal=*all&awsf.sagemaker-jumpstart-filter-RL=*all&awsm.page-sagemaker-jumpstart-cards=1&sagemaker-jumpstart-cards.q=qwen&sagemaker-jumpstart-cards.q_operator=AND) of model checkpoints for various tasks, including text generation, embeddings, vision, audio, and more. Most models are deployed using the official [Hugging Face Deep Learning Containers](https://huggingface.co/docs/sagemaker/main/en/dlcs/introduction) with a sensible default instance type, so you can move from idea to production in minutes.

In this quickstart guide, we will deploy [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).

## 1. Prerequisites

|   | Requirement |
|---|-------------|
| AWS account with SageMaker enabled | An AWS account that will contain all your AWS resources. |
| An IAM role to access SageMaker AI | Learn more about how IAM works with SageMaker AI in this [guide](https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam.html). |
| SageMaker Studio domain and user profile | We recommend using SageMaker Studio for straightforward deployment and inference. Follow this [guide](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html). |
| Service quotas | Most LLMs need GPU instances (e.g. ml.g5). Verify you have quota for `ml.g5.24xlarge` or [request it](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-requesting-quota-increases.html); see the sketch after this table for a programmatic check. |
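
If you want to verify the quota programmatically rather than in the console, you can list the SageMaker quotas with the Service Quotas API. A minimal sketch (the quota-name matching is an assumption; adjust it to the instance type you need):

```python
import boto3

# list SageMaker quotas and print the ones matching the instance type we need
client = boto3.client("service-quotas", region_name="us-east-1")
paginator = client.get_paginator("list_service_quotas")

for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        # assumed naming, e.g. "ml.g5.24xlarge for endpoint usage"
        if "ml.g5.24xlarge" in quota["QuotaName"]:
            print(quota["QuotaName"], "->", quota["Value"])
```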

## 2. Endpoint deployment

Here is how to deploy a Hugging Face model to SageMaker by browsing the JumpStart catalog:
1. Open SageMaker → JumpStart.  
2. Filter “Hugging Face” or search for your model (e.g. Qwen2.5-14B).  
3. Click Deploy → (optional) adjust instance size / count → Deploy.  
4. Wait until Endpoints shows In service.  
5. Copy the Endpoint name (or ARN) for later use.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sagemaker/jumpstart-deployment.gif"
     alt="JumpStart deployment demo"
     width="500">

Alternatively, you can also browse through the Hugging Face Model Hub:
1. Open the model page → Click Deploy → SageMaker → JumpStart tab (if the model is available).
2. Copy the code snippet and use it from a SageMaker Notebook instance.


<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sagemaker/hf-jumpstart-deployment.gif"
     alt="JumpStart deployment demo"
     width="500">

```python
# SageMaker JumpStart provides APIs as part of SageMaker SDK that allow you to deploy and fine-tune models in network isolation using scripts that SageMaker maintains.

from sagemaker.jumpstart.model import JumpStartModel


model = JumpStartModel(model_id="huggingface-llm-qwen2-5-14b-instruct")
example_payloads = model.retrieve_all_examples()

predictor = model.deploy()

for payload in example_payloads:
    response = predictor.predict(payload.body)
    print("Input:\n", payload.body[payload.prompt_key])
    print("Output:\n", response[0]["generated_text"], "\n\n===============\n")
```

The endpoint creation can take several minutes, depending on the size of the model.

## 3. Test interactively

If you deployed through the console, grab the endpoint name and reuse it in your code.
```python
from sagemaker.predictor import retrieve_default
endpoint_name = "MY ENDPOINT NAME"
predictor = retrieve_default(endpoint_name)
payload = {
    "messages": [
        {
            "role": "system",
            "content": "You are a passionate data scientist."
        },
        {
            "role": "user",
            "content": "what is machine learning?"
        }
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": False
}

response = predictor.predict(payload)
print(response)
```

The endpoint supports the OpenAI API specification.
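
If you prefer not to use the SageMaker Python SDK, you can send the same payload with the low-level `sagemaker-runtime` client. This is a minimal sketch reusing the `payload` defined above (the endpoint name is a placeholder):

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="MY ENDPOINT NAME",   # same endpoint name as above
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))
```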

## 4. Clean-up

To avoid incurring unnecessary costs, delete the SageMaker endpoints when you’re done, either in the Deployments → Endpoints console or with the following code snippet:
```python
predictor.delete_model()
predictor.delete_endpoint()
```

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/jumpstart/jumpstart-quickstart.md" />

### EC2, ECS and EKS Quickstart
https://huggingface.co/docs/sagemaker/tutorials/compute-services/compute-services-quickstart.md

# EC2, ECS and EKS Quickstart

This page is under construction, bear with us!

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/compute-services/compute-services-quickstart.md" />

### Quickstart — Using Hugging Face Models with Amazon Bedrock Marketplace
https://huggingface.co/docs/sagemaker/tutorials/bedrock/bedrock-quickstart.md

# Quickstart — Using Hugging Face Models with Amazon Bedrock Marketplace

## Why use Bedrock Marketplace for Hugging Face models?
Amazon Bedrock now exposes Hugging Face open-weight models—including Gemma, Llama 3, Mistral, and more—through a single catalog. You invoke them with the same Bedrock APIs you already use for Titan, Anthropic, Cohere, etc. Under the hood, Bedrock Marketplace model endpoints are managed by Amazon SageMaker AI. With Bedrock Marketplace, you can now combine the ease of use of SageMaker JumpStart with the fully managed infrastructure of Amazon Bedrock, including compatibility with high-level APIs such as Agents, Knowledge Bases, Guardrails and Model Evaluations.

## 1. Prerequisites

| Requirement | Notes |
|-------------|-------|
| AWS account in a Bedrock Region | Marketplace is regional; switch the console to one of the 14 supported Regions first, for example `us-east-1`. |
| Permissions | For a quick trial, attach `AmazonBedrockFullAccess` and `AmazonSageMakerFullAccess`.|
| Service quotas | The SageMaker endpoint uses GPU instances (for example ml.g5). Verify you have quota or request it. |
| JumpStart-only | If you choose path B, create a SageMaker Studio domain and user profile first (Console ▸ SageMaker ▸ Domains). Open Studio before continuing. |

When you register SageMaker JumpStart endpoints in Amazon Bedrock, you only pay for the underlying SageMaker compute resources; the regular Amazon Bedrock API prices apply.

## 2. Endpoint deployment

There are two equivalent paths to use a Hugging Face model with Amazon Bedrock Marketplace.

Path A is from the Bedrock *Model Catalog*:
1. Console → Amazon Bedrock → Foundation Models → Model catalog  
2. Filter Provider → “Hugging Face”, then pick your model (e.g., Gemma 2 27B Instruct)  
3. If you see Subscribe, review pricing & terms, click Subscribe, then continue  
4. Click Deploy → name the endpoint → keep the recommended instance → accept the EULA → Deploy  
5. Wait for Foundation Models → Marketplace deployments to show status In service (takes a few minutes)  
6. Click the deployment name and copy the SageMaker endpoint ARN — you’ll need it for API calls

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sagemaker/bedrock-marketplace-deployment.gif"
     alt="Bedrock deployment demo"
     width="500">

Path B is from SageMaker JumpStart, for models that show “Use with Bedrock”:
1. In SageMaker Studio, open JumpStart  
2. Filter Bedrock Ready models → select the model card (e.g., Gemma 2 9B Instruct)  
3. Click Deploy, accept the EULA, keep defaults, Deploy  
4. Studio → Deployments → Endpoints → wait for status In service  
5. Click the endpoint, choose Use with Bedrock
6. In the Bedrock console, review and Register → a new entry appears under Marketplace deployments  
7. Open that entry and copy the SageMaker endpoint ARN for code samples  

## 3. Test interactively

To test the model interactively in the console, select the model under Marketplace deployments, open it in the playground, and send a prompt in Chat/Text mode to verify the model's response.

Alternatively, you can programmatically access your endpoint.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Paste the endpoint ARN you copied above
endpoint_arn = "arn:aws:sagemaker:<region>:<account-id>:endpoint/<name>"

inference_cfg = {"maxTokens": 256, "temperature": 0.1, "topP": 0.95}
extra = {"parameters": {"repetition_penalty": 1.05}}

response = bedrock.converse(
    modelId=endpoint_arn,                  # <- SageMaker endpoint ARN
    messages=[{
        "role": "user",
        "content": [{"text": "Give me three taglines for a serverless AI startup"}]
    }],
    inferenceConfig=inference_cfg,
    additionalModelRequestFields=extra,
)

print(response["output"]["message"]["content"][0]["text"])
```

*Heads-up*: the same `modelId=endpoint_arn` works with **InvokeModel**, **Knowledge Bases (RetrieveAndGenerate)**, **Agents**, and **Guardrails**, with no code changes.
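
The Converse API also has a streaming counterpart. If your deployment supports streaming, you can reuse the same client, endpoint ARN, and inference configuration from the snippet above (a minimal sketch):

```python
# stream the response token by token with the same endpoint ARN
stream_response = bedrock.converse_stream(
    modelId=endpoint_arn,
    messages=[{
        "role": "user",
        "content": [{"text": "Give me three taglines for a serverless AI startup"}]
    }],
    inferenceConfig=inference_cfg,
)

for event in stream_response["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"].get("text", ""), end="", flush=True)
print()
```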

## 4. Clean-up (stop charges)

| Resource | How to delete |
|----------|---------------|
| SageMaker endpoint | Console → Marketplace deployments → select → Delete (also de-registers it), *or* `boto3.client("sagemaker").delete_endpoint(...)` (see the sketch below) |
| Optional extras | Delete Knowledge Base, Guardrail, or S3 vectors if you created them. |
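
If you prefer to clean up from code, here is a minimal sketch using the low-level SageMaker client; the endpoint and endpoint-config names are placeholders you can copy from the Marketplace deployment details:

```python
import boto3

sm = boto3.client("sagemaker")

# delete the endpoint backing the Marketplace deployment, then its configuration
sm.delete_endpoint(EndpointName="<endpoint-name>")
sm.delete_endpoint_config(EndpointConfigName="<endpoint-config-name>")
```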

For more information, refer to the [Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html).


<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/tutorials/bedrock/bedrock-quickstart.md" />

### Introduction
https://huggingface.co/docs/sagemaker/dlcs/introduction.md

# Introduction

Hugging Face built Deep Learning Containers (DLCs) for Amazon Web Services customers to run any of their machine learning workloads in an optimized environment, with no configuration or maintenance on their part. These are Docker images pre-installed with deep learning frameworks and libraries such as 🤗 Transformers, 🤗 Datasets, and 🤗 Tokenizers. The DLCs allow you to directly serve and train any model, skipping the complicated process of building and optimizing your serving and training environments from scratch.

The containers are publicly maintained, updated, and released periodically by Hugging Face and the AWS team, and are available to all AWS customers within AWS's Elastic Container Registry. They can be used from any AWS service, such as:
* **Amazon Sagemaker AI**: Amazon SageMaker AI is a fully managed machine learning (ML) platform for data scientists and developers to quickly and confidently build, train, and deploy ML models into a production-ready hosted environment.
* **Amazon Bedrock**: Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API to build generative AI applications.
* **Amazon Elastic Kubernetes Service (EKS)**: Amazon EKS is the premier platform for running Kubernetes clusters in the AWS cloud.
* **Amazon Elastic Container Service (ECS)**: Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
* **Amazon Elastic Compute Cloud (EC2)**: Amazon EC2 provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud.

Hugging Face DLCs are open source and licensed under Apache 2.0. Feel free to reach out on our [community forum](https://discuss.huggingface.co/c/sagemaker/17) if you have any questions.

## Features & benefits

The Hugging Face DLCs provide ready-to-use, tested environments to train and deploy Hugging Face models.

### One command is all you need

With the new Hugging Face DLCs, train and deploy cutting-edge Transformers-based NLP models in a single line of code. The Hugging Face PyTorch DLCs for training come with all the libraries installed to run a single command, e.g. via the [TRL CLI](https://huggingface.co/docs/trl/en/clis), to fine-tune LLMs in any setting: single-GPU, single-node multi-GPU, and more.

### Accelerate machine learning from science to production

In addition to Hugging Face DLCs, we created a first-class Hugging Face library for inference, [`sagemaker-huggingface-inference-toolkit`](https://github.com/aws/sagemaker-huggingface-inference-toolkit/tree/main/src/sagemaker_huggingface_inference_toolkit), that comes with the Hugging Face PyTorch DLCs for inference, with full support on serving any PyTorch model on AWS.

Deploy your trained models for inference with just one more line of code, or select any of the ever-growing number of publicly available models from the Hugging Face Hub.

### High-performance text generation and embedding

Besides the PyTorch-oriented DLCs, Hugging Face also provides high-performance inference for text generation and embedding models via the Hugging Face DLCs for Text Generation Inference (TGI) and Text Embeddings Inference (TEI), respectively.

The Hugging Face DLC for TGI enables you to deploy any of the 225,000+ text generation models supported by TGI on the Hugging Face Hub, or any custom model as long as its architecture is supported within TGI.

The Hugging Face DLC for TEI enables you to deploy any of the 12,000+ embedding, re-ranking, or sequence classification models supported by TEI on the Hugging Face Hub, or any custom model as long as its architecture is supported within TEI.

Additionally, these DLCs come with full support for AWS, meaning that deploying models from Amazon Simple Storage Service (S3) is also straightforward and requires no configuration.

### Built-in performance

Hugging Face DLCs feature built-in performance optimizations for PyTorch to train models faster. The DLCs also give you the flexibility to choose a training infrastructure that best aligns with the price/performance ratio for your workload.

Hugging Face Inference DLCs provide you with production-ready endpoints that scale quickly with your AWS environment, built-in monitoring, and a ton of enterprise features.

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/dlcs/introduction.md" />

### Available DLCs on AWS
https://huggingface.co/docs/sagemaker/dlcs/available.md

# Available DLCs on AWS

Below you can find a listing of all the Deep Learning Containers (DLCs) available on AWS.

For each supported combination of use case (training, inference), accelerator type (CPU, GPU, Neuron), and framework (PyTorch, TGI, TEI), containers are created.

Neuron DLCs for training and inference on AWS Trainium and AWS Inferentia instances can be found in the [Optimum Neuron documentation](https://huggingface.co/docs/optimum-neuron/en/containers).

## Training

PyTorch Training DLC: for training, our DLCs are available for PyTorch via Transformers. They include support for training on GPUs and AWS AI chips with libraries such as TRL, Sentence Transformers, or Diffusers.

You can also keep track of the latest PyTorch Training DLC releases [here](https://github.com/aws/deep-learning-containers/releases?q=huggingface-training+AND+NOT+neuronx&expanded=true).

| Container URI                                                                                                                    | Accelerator |
| -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:2.8.0-transformers4.56.2-gpu-py312-cu129-ubuntu22.04 | GPU         |
| 763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-training-neuronx:2.7.0-transformers4.51.0-neuronx-py310-sdk2.24.1-ubuntu22.04 | Neuron         |
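
To pin one of the training images above explicitly, you can pass its URI to the Hugging Face Estimator via `image_uri` instead of the `transformers_version`/`pytorch_version`/`py_version` trio. A minimal sketch (the entry point, script directory, and role are placeholders):

```python
from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./scripts",
    instance_type="ml.g6.12xlarge",
    instance_count=1,
    role=role,  # your SageMaker execution role
    # GPU training DLC from the table above (us-east-1)
    image_uri=(
        "763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:"
        "2.8.0-transformers4.56.2-gpu-py312-cu129-ubuntu22.04"
    ),
)
```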

## Inference

### PyTorch Inference DLC

For inference, we have a general-purpose PyTorch Inference DLC for serving models trained with any of the frameworks mentioned above on CPU, GPU, and AWS AI chips.

You can also keep track of the latest PyTorch Inference DLC releases [here](https://github.com/aws/deep-learning-containers/releases?q=huggingface-inference+AND+NOT+tgi+AND+NOT+neuronx&expanded=true).

| Container URI                                                                                                                    | Accelerator |
| -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-inference:2.6.0-transformers4.51.3-cpu-py312-ubuntu22.04 | CPU         |
| 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-inference:2.6.0-transformers4.51.3-gpu-py312-cu124-ubuntu22.04 | GPU         |
| 763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-inference-neuronx:2.7.1-transformers4.51.3-neuronx-py310-sdk2.24.1-ubuntu22.04 | Neuron         |

### LLM TGI

There is also the LLM Text Generation Inference (TGI) DLC for high-performance text generation of LLMs on GPU and AWS AI chips.

You can also keep track of the latest LLM TGI DLC releases [here](https://github.com/aws/deep-learning-containers/releases?q=tgi+AND+gpu&expanded=true).

| Container URI                                                                                                                    | Accelerator |
| -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.7.0-tgi3.3.6-gpu-py311-cu124-ubuntu22.04 | GPU         |
| 763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-tgi-inference:2.7.0-optimum3.3.6-neuronx-py310-ubuntu22.04 | Neuron         |

### Text Embedding Inference

Finally, there is a Text Embeddings Inference (TEI) DLC for high-performance serving of embedding models on CPU and GPU.

| Container URI                                                                                                                    | Accelerator |
| -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| 683313688378.dkr.ecr.us-east-1.amazonaws.com/tei-cpu:2.0.1-tei1.8.2-cpu-py310-ubuntu22.04 | CPU         |
| 683313688378.dkr.ecr.us-east-1.amazonaws.com/tei:2.0.1-tei1.8.2-gpu-py310-cu122-ubuntu22.04 | GPU         |

## FAQ

**How to choose the right inference container for my use case?**

![inference-dlc-decision-tree](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sagemaker/inference-dlc-decision-tree.png)

*Note:* See [here](https://huggingface.co/docs/sagemaker/main/en/reference/inference-toolkit) for the list of supported tasks in the inference toolkit.

*Note:* Browse through the Hub to see if your model is tagged ["text-generation-inference"](https://huggingface.co/models?other=text-generation-inference) or ["text-embeddings-inference"](https://huggingface.co/models?other=text-embeddings-inference).

**How to find the URI of my container?**

The URI is built from an AWS account ID and an AWS region; these two values need to be replaced depending on your use case. Let's say you want to use the training DLC for GPUs. You need to fill in two values (see the sketch below):
- `dlc-aws-account-id`: the AWS account ID of the account that owns the ECR repository. You can find the account IDs per region [here](https://github.com/aws/sagemaker-python-sdk/blob/e0b9d38e1e3b48647a02af23c4be54980e53dc61/src/sagemaker/image_uri_config/huggingface.json#L21).
- `region`: the AWS region where you want to use it.
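
As a concrete example, here is a minimal sketch that fills in the two placeholders for the GPU training DLC listed above (the account ID and tag are taken from the training table; adjust them for your region and use case):

```python
# fill in the two placeholders described above
dlc_aws_account_id = "763104351884"   # account owning the ECR repository in us-east-1
region = "us-east-1"                  # region where you run your workload
tag = "2.8.0-transformers4.56.2-gpu-py312-cu129-ubuntu22.04"  # GPU training DLC tag

image_uri = (
    f"{dlc_aws_account_id}.dkr.ecr.{region}.amazonaws.com/"
    f"huggingface-pytorch-training:{tag}"
)
print(image_uri)
```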

**Is there a simpler way to find the URI of my container?**

The Python SageMaker SDK utility functions are not always up to date, but they are much simpler than reconstructing the image URI yourself.

```python
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

print(f"TGI GPU: {get_huggingface_llm_image_uri('huggingface')}")
print(f"TEI GPU: {get_huggingface_llm_image_uri('huggingface-tei')}")
print(f"TEI CPU: {get_huggingface_llm_image_uri('huggingface-tei-cpu')}")
print(f"TGI Neuron: {get_huggingface_llm_image_uri('huggingface-neuronx')}")
```

For PyTorch Training and PyTorch Inference DLCs, there is no such utility.

<EditOnGithub source="https://github.com/huggingface/hub-docs/blob/main/docs/sagemaker/source/dlcs/available.md" />
