# Optimum.Neuron

## Docs

- [EC2 Setup](https://huggingface.co/docs/optimum.neuron/v0.4.0/ec2-setup.md)
- [Supported architectures](https://huggingface.co/docs/optimum.neuron/v0.4.0/supported_architectures.md)
- [Optimum Neuron Container](https://huggingface.co/docs/optimum.neuron/v0.4.0/containers.md)
- [Quickstart](https://huggingface.co/docs/optimum.neuron/v0.4.0/quickstart.md)
- [🤗 Optimum Neuron](https://huggingface.co/docs/optimum.neuron/v0.4.0/index.md)
- [Llama-3.3-70b performance on AWS Inferentia2 (Latency & Throughput)](https://huggingface.co/docs/optimum.neuron/v0.4.0/benchmarks/inferentia-llama3.3-70b.md)
- [Llama-3.1-8b performance on AWS Inferentia2 (Latency & Throughput)](https://huggingface.co/docs/optimum.neuron/v0.4.0/benchmarks/inferentia-llama3.1-8b.md)
- [Model Weight Transformation Specs](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/transformations.md)
- [LoRA for Neuron](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/lora.md)
- [Neuron TRL Trainers](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/trl_trainers.md)
- [NeuronTrainer](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/trainer.md)
- [Setting up your development environment](https://huggingface.co/docs/optimum.neuron/v0.4.0/contribute/dev_environment.md)
- [Contributing Custom Models for Training](https://huggingface.co/docs/optimum.neuron/v0.4.0/contribute/contribute_for_training.md)
- [Adding support for new architectures](https://huggingface.co/docs/optimum.neuron/v0.4.0/contribute/contribute_for_inference.md)
- [Export a model to Neuron](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/export_model.md)
- [Introduction](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/benchmark.md)
- [Neuron Model Cache](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/cache_system.md)
- [optimum-neuron plugin for vLLM](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/vllm_plugin.md)
- [Inference pipelines with AWS Neuron (Inf2/Trn1)](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/pipelines.md)
- [Distributed Training with `optimum-neuron`](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/distributed_training.md)
- [NeuronX Text-generation-inference for AWS inferentia2](https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/neuronx_tgi.md)
- [🚀  Tutorials: How To Fine-tune & Run LLMs](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/finetune_llms_overview.md)
- [Getting started with AWS Trainium and Hugging Face Transformers](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/fine_tune_bert.md)
- [🚀 Continuous Pretraining of Llama 3.2 1B on SageMaker Hyperpod with Pre-built Containers](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/pretraining_hyperpod_llm.md)
- [🚀 Instruction Fine-Tuning of Llama 3.1 8B with LoRA](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/finetune_llama.md)
- [🚀 Fine-Tune Qwen3 8B with LoRA](https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/finetune_qwen3.md)
- [Models](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/modeling_auto.md)
- [IP-Adapter](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/ip_adapter.md)
- [PixArt-Σ](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/pixart_sigma.md)
- [Load adapters](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/lora.md)
- [PixArt-α](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/pixart_alpha.md)
- [Latent Consistency Models](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/lcm.md)
- [InstructPix2Pix](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/pix2pix.md)
- [Flux](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/flux.md)
- [Stable Diffusion XL Turbo](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/sdxl_turbo.md)
- [Stable Diffusion](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/stable_diffusion.md)
- [ControlNet](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/controlnet.md)
- [Stable Diffusion XL](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/stable_diffusion_xl.md)
- [YOLOS](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/yolos.md)
- [BERT](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/bert.md)
- [Whisper](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/whisper.md)
- [CLIP](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/clip.md)
- [Sentence Transformers 🤗](https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/sentence_transformers/overview.md)
- [Create your own chatbot with llama-2-13B on AWS Inferentia](https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/llama2-13b-chatbot.md)
- [Deploy Llama 3.3 70B on AWS Inferentia2](https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/deploy-llama-3-3-70b.md)
- [Sentence Transformers on AWS Inferentia with Optimum Neuron](https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/sentence_transformers.md)
- [Notebooks](https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/notebooks.md)
- [Deploy Mixtral 8x7B on AWS Inferentia2](https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/deploy-mixtral-8x7b.md)

### EC2 Setup
https://huggingface.co/docs/optimum.neuron/v0.4.0/ec2-setup.md

# EC2 Setup

This guide will help you get Optimum Neuron up and running. There are two main approaches:

1. **🚀 Recommended: AWS EC2 with Deep Learning AMI** - The simplest way to get started, with a pre-configured environment
2. **⚙️ Manual Installation** - Install Optimum Neuron on existing infrastructure

## Recommended: AWS EC2 with Deep Learning AMI

The simplest way to work with AWS Trainium or Inferentia and Optimum Neuron on Amazon EC2 is the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) (DLAMI). The DLAMI comes with all required libraries pre-packaged, including Optimum Neuron, the Neuron drivers, Transformers, Datasets, and Accelerate. The HF DLAMI is provided at no additional charge to Amazon EC2 users.

Optimum Neuron supports Inf1, Inf2, Trn1 and Trn2, all accessible on Amazon EC2. You can find all the specifications of the Trn and Inf instances [here](https://aws.amazon.com/ec2/instance-types/), in the "Accelerated Computing" section.

In this section, we will show you:
1. [How to create an AWS Trainium or Inferentia instance on Amazon EC2 with the HF DLAMI](#create-an-aws-trainium-or-inferentia-instance-on-amazon-ec2-with-the-hf-dlami)
    1. [Find a supported region](#find-a-supported-region)
    2. [Increase service quota](#increase-service-quota)
    3. [Launch the Amazon EC2 instance with the HF DLAMI](#launch-the-amazon-ec2-instance-with-the-hf-dlami)
    4. [Connect through SSH](#connect-through-ssh)
2. [How to set up your remote development environment](#set-up-your-remote-development-environment)
    1. [Access through Jupyter Notebook](#access-through-jupyter-notebook)
    2. [Access through VS Code remote server](#access-through-vs-code-remote-server)

### Create an AWS Trainium or Inferentia instance on Amazon EC2 with the HF DLAMI

Before creating the EC2 instance, make sure you are in a supported region for the instance you selected and that you have quota in your AWS account.

#### Find a supported region

Here is the list of regions that support at least one type of Trainium or Inferentia2 instance, as of February 2025:
- us-east-1: US East (N. Virginia)
- us-east-2: US East (Ohio)
- us-west-2: US West (Oregon)
- ap-south-1: Asia Pacific (Mumbai)
- ap-northeast-1: Asia Pacific (Tokyo)
- ap-southeast-1: Asia Pacific (Singapore)
- ap-southeast-2: Asia Pacific (Sydney)
- ap-southeast-4: Asia Pacific (Melbourne)
- eu-north-1: Europe (Stockholm)
- eu-west-3: Europe (Paris)
- eu-west-2: Europe (London)
- eu-west-1: Europe (Ireland)
- eu-central-1: Europe (Frankfurt)
- sa-east-1: South America (Sao Paulo)

Here is a Python script that lists the supported instance types in each region you have enabled:

```python
import boto3
from datetime import datetime

ec2 = boto3.client('ec2')

regions = [region['RegionName'] for region in ec2.describe_regions()['Regions']]

# Edit this line to change the instance types displayed
instance_types = ['trn1.32xlarge', 'trn1.2xlarge', 'inf2.48xlarge', 'inf2.24xlarge', 'inf2.8xlarge', 'inf2.xlarge', 'trn2.48xlarge']

supported_regions = {}

# Check each enabled region for offerings of the selected instance types
for region in regions:
    ec2_region = boto3.client('ec2', region_name=region)
    response = ec2_region.describe_instance_type_offerings(
        Filters=[
            {'Name': 'instance-type', 'Values': instance_types},
        ]
    )
    if response['InstanceTypeOfferings']:
        supported_regions[region] = [offer['InstanceType'] for offer in response['InstanceTypeOfferings']]

print('# Supported Regions as of', datetime.now().strftime('%B %d, %Y'))
print('================')

# Resolve human-readable region names through SSM public parameters
ssm = boto3.client('ssm')

for region, region_instance_types in supported_regions.items():
    try:
        response = ssm.get_parameter(Name=f'/aws/service/global-infrastructure/regions/{region}/longName')
        region_long_name = response['Parameter']['Value']
    except (ssm.exceptions.ParameterNotFound, KeyError):
        region_long_name = region
    print(f' * {region}: {region_long_name}')
    for instance_type in region_instance_types:
        print(f'  - {instance_type}')
    print()
```

#### Increase service quota

Once you have selected your region and switched to it, you can request a Service Quotas increase through the AWS Console: navigate to Service Quotas, select "AWS services" in the left panel, search for Amazon EC2, then search for "trn" or "inf". You can request quota increases for On-Demand and Spot instances separately.

By default, all quotas are 0 for Inferentia and Trainium. There is no charge for increased quotas. There are separate quotas for Inferentia and Trainium, and separate quotas for Spot and On-Demand instances. Quotas refer to the maximum TOTAL number of vCPUs assigned to each instance type.

For example, a quota of 192 vCPUs lets you run a single inf2.48xlarge, two inf2.24xlarge, six inf2.8xlarge, or forty-eight inf2.xlarge instances. It also lets you run inf1 instance types. Similarly for Trainium, a quota of 128 vCPUs lets you run a single trn1n.32xlarge or trn1.32xlarge, but also sixteen trn1.2xlarge instances.
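
To sanity-check a quota request, here is a minimal sketch that converts a vCPU quota into the number of instances it allows. The vCPU counts are assumptions taken from the public instance specifications and match the arithmetic above:

```python
# Minimal sketch: how many instances of each type fit into a vCPU quota.
# The vCPU counts below are assumptions based on the public instance specs.
VCPUS = {
    "inf2.xlarge": 4,
    "inf2.8xlarge": 32,
    "inf2.24xlarge": 96,
    "inf2.48xlarge": 192,
    "trn1.2xlarge": 8,
    "trn1.32xlarge": 128,
}

def instances_allowed(quota_vcpus: int) -> dict:
    """Maximum number of instances of each type a given vCPU quota allows."""
    return {name: quota_vcpus // vcpus for name, vcpus in VCPUS.items()}

print(instances_allowed(192))  # e.g. 48 x inf2.xlarge, 6 x inf2.8xlarge, 1 x inf2.48xlarge
```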

#### Launch the Amazon EC2 instance with the HF DLAMI

Let's deploy a trn1.2xlarge instance in the us-east-1 region (North Virginia) through the EC2 console.

First, click on **Launch instance** and define a name for the instance (`trainium-huggingface-demo`).

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/01-name-instance.png"
  alt="name instance"
/>

Next, search the Amazon Marketplace for Hugging Face AMIs: enter "Hugging Face" in the search bar for "Application and OS Images" and hit "enter".

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/02-search-ami.png"
  alt="search ami"
/>

This should open the "Choose an Amazon Machine Image" view with the search results. Navigate to "AWS Marketplace AMIs", find the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2), and click "Select".

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/03-select-ami.png"
  alt="select ami"
/>

_You will be asked to subscribe if you aren't already. The AMI is completely free of charge, and you will only pay for the EC2 compute._

Then you need to define a key pair, which will be used to connect to the instance via `ssh`. If you don't have one, you can create a key pair in place.

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/04-select-key.png"
  alt="select ssh key"
/>

After that, create or select a [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) which allows `ssh` traffic.

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/05-select-sg.png"
  alt="select security group"
/>

You are now ready to launch the instance: click "Launch Instance" on the right side.

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/06-launch-instance.png"
  alt="select ssh key"
/>

AWS will now provision the instance using the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2).

#### Connect through SSH

Once the instance is ready, you can view and copy the public IPv4 address to `ssh` into the machine.

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/setup_aws_instance/07-copy-dns.png"
  alt="select public dns"
/>

Replace the empty strings `""` in the snippet below with the IP address of your instance and the path to the key pair you created or selected when launching the instance.

```bash
PUBLIC_DNS="" # IP address
KEY_PATH="" # local path to key pair

ssh -i $KEY_PATH ubuntu@$PUBLIC_DNS
```

Once you are connected, you can run `neuron-ls` to make sure you have access to the Trainium accelerators. You should see output similar to the one below.

```bash
ubuntu@ip-172-31-79-164:~$ neuron-ls
instance-type: trn1.2xlarge
instance-id: i-0570615e41700a481
+--------+--------+--------+---------+
| NEURON | NEURON | NEURON |   PCI   |
| DEVICE | CORES  | MEMORY |   BDF   |
+--------+--------+--------+---------+
| 0      | 2      | 32 GB  | 00:1e.0 |
+--------+--------+--------+---------+
```

### Set up your remote development environment

We will walk through setting up Jupyter Notebooks or VS Code remote server on the Amazon EC2 instance.

Both methods require an SSH connection. These instructions were written for macOS but should work on a Linux system as well; on Windows you may need to use PuTTY.

You should have the `.pem` key file that you created when you deployed your instance (or one from a previous deployment). You can connect to your instance using:
```bash
ssh -i "/path/to/sshkey.pem" ubuntu@instance_ip_address
```

#### Access through Jupyter Notebook

This method involves running the Jupyter notebook server on the Neuron instance, mapping a port locally, then using the browser on your desktop to access the notebook server.

Start by mapping a port on your local machine to the Neuron instance. From a terminal on your system, run
```bash
ssh -i "/path/to/sshkey.pem" -N -f -L localhost:8888:localhost:8888 ubuntu@instance_ip_address
```

Then connect to your Amazon EC2 instance using SSH from your computer. Once connected, from the command prompt, run
```bash
nohup jupyter notebook --no-browser --port=8888 &
```

After a few seconds, check the `nohup.out` file to find your server's token:
```bash
cat nohup.out | grep localhost
```

Copy the connection string and paste it into your browser. After a few seconds, you should see the Jupyter Notebook file browser. The URL should look like http://localhost:8888/tree?token=337fc8de2aenot_a_real_tokene952c43946e4fb57131

This works because you have mapped port 8888 on your local machine to port 8888 on the Neuron instance, so when you connect to localhost:8888 you reach the Jupyter server running on the Neuron instance.

If you have problems, make sure the initial port mapping was successful. If something is already running on port 8888 on your machine, the mapping will fail. You can always switch to a different port (e.g. 8885) in all the instructions if you need to.
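
For example, a minimal sketch of the same setup on port 8885 (the port number is arbitrary):

```bash
# Map local port 8885 to port 8885 on the instance (run on your local machine)
ssh -i "/path/to/sshkey.pem" -N -f -L localhost:8885:localhost:8885 ubuntu@instance_ip_address

# Start Jupyter on the same port (run on the instance)
nohup jupyter notebook --no-browser --port=8885 &
```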

#### Access through VS Code remote server

With Visual Studio Code installed on your local machine, you can use the Remote-SSH extension to edit and run files stored on a Neuron instance. See the VS Code Remote-SSH documentation for additional details.

1. Select Remote-SSH: Connect to Host... from the Command Palette (F1, ⇧⌘P)
2. Enter the full connection string from the SSH section above: `ssh -i "/path/to/sshkey.pem" ubuntu@instance_ip_address`
3. VS Code should connect and automatically set up the VS Code server.
4. Eventually, you should be prompted for a base directory. You can browse to a directory on the Neuron instance.
5. If some menu commands appear greyed out but the keyboard shortcuts still work (⌘S to save, ^⇧` for the terminal), you may need to restart VS Code.

## Alternative: Manual Installation

Manual installation is useful in several scenarios:

- **Using a newer version**: Install the latest Optimum Neuron version that may not yet be available in the DLAMI
- **Custom AMI requirements**: Working with your organization's standard AMI or security-hardened images
- **Existing infrastructure**: Adding Neuron support to pre-configured environments or Docker containers
- **Development setup**: Installing pre-release or development versions for testing
- **Minimal installations**: Creating lightweight environments with only required dependencies

If you choose manual installation, make sure the Neuron driver and tools are properly installed first; see the [detailed setup guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/torch-neuronx.html#setup-torch-neuronx) before installing `optimum-neuron`.

### Adding pip packages URL

Pointing to the AWS Neuron repository:

```bash
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
```

### Installing `optimum-neuron` for AWS Trainium (`trn1`) or AWS Inferentia2 (`inf2`)

```bash
python -m pip install optimum-neuron[neuronx]
```

### Installing `optimum-neuron` for AWS Inferentia (`inf1`)

```bash
python -m pip install optimum-neuron[neuron]
```
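
As a quick sanity check after a manual installation, you can verify that the package imports and that the Neuron devices are visible (`neuron-ls` ships with the Neuron tools, not with `optimum-neuron`):

```bash
# Confirm the package is importable
python -c "import optimum.neuron"
# Confirm the Neuron devices are visible
neuron-ls
```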

## What's Next?

Now that you have Optimum Neuron set up, check out the **[Quickstart Guide](./quickstart)** to learn the basics of training and inference with Optimum Neuron.

### Supported architectures
https://huggingface.co/docs/optimum.neuron/v0.4.0/supported_architectures.md

# Supported architectures

## Training

Training on AWS Trainium instances (Trn1) enables large-scale model training with distributed parallelism strategies.

**Requirements:**
- The model must be compatible with the Neuron SDK. If it is small enough to fit within 16GB, training is supported for any architecture that can be successfully compiled.
- **Memory constraint:** Each accelerator has 16GB of memory for model weights, gradients, optimizer states, and activations (see the rough estimate sketched below).
- **For large models:** A custom modeling implementation with tensor parallelism and/or pipeline parallelism support is required.
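
To get a feel for the memory constraint above, here is a minimal back-of-the-envelope sketch. The byte counts are assumptions (bf16 weights, fp32 gradients and Adam moments) and activation memory is ignored, so treat the result as a lower bound:

```python
def rough_training_memory_gb(num_params: float) -> float:
    """Rough per-accelerator memory estimate for full fine-tuning.

    Assumptions (not an official formula): bf16 weights (2 bytes/param),
    fp32 gradients (4 bytes/param), fp32 Adam moments (8 bytes/param);
    activation memory is ignored.
    """
    bytes_per_param = 2 + 4 + 8
    return num_params * bytes_per_param / 1e9

# A 1B-parameter model already needs ~14 GB before activations, which is why
# larger models require tensor and/or pipeline parallelism.
print(f"{rough_training_memory_gb(1e9):.1f} GB")
```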

The following architectures have custom modeling implementations with distributed training support:

| Architecture             | Task            | Tensor Parallelism | Pipeline Parallelism |
|--------------------------|-----------------|--------------------|----------------------|
| Llama, Llama 2, Llama 3  | text-generation | ✓                  | ✓                    |
| Qwen3                    | text-generation | ✓                  | ✓                    |
| Granite                  | text-generation | ✓                  | ✗                    |


<Tip>

If you need to add support for a custom model not listed above, check out our [contribute for training guide](./contribute/contribute_for_training) to learn how to implement custom modeling with distributed training support. You can also open an issue in the [Optimum Neuron GitHub repository](https://github.com/huggingface/optimum-neuron/issues) to request support for it.

</Tip>


## Inference

The following table lists the architectures and tasks that Optimum Neuron supports for inference on Amazon EC2 Inf2 instances.

<Tip>

If an LLM is listed, e.g. a model with a `text-generation` task, it means that there is also [TGI](https://huggingface.co/docs/text-generation-inference/en/index) support for it.

</Tip>

### Transformers

| Architecture              | Task                                                                                                                                          |
|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| ALBERT                    | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| AST                       | feature-extraction, audio-classification                                                                                                      |
| BERT                      | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| Beit                      | feature-extraction, image-classification                                                                                                      |
| CamemBERT                 | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| CLIP                      | feature-extraction, image-classification                                                                                                      |
| ConvBERT                  | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| ConvNext                  | feature-extraction, image-classification                                                                                                      |
| ConvNextV2                | feature-extraction, image-classification                                                                                                      |
| CvT                       | feature-extraction, image-classification                                                                                                      |
| DeBERTa (INF2 only)       | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| DeBERTa-v2 (INF2 only)    | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| Deit                      | feature-extraction, image-classification                                                                                                      |
| DistilBERT                | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| DonutSwin                 | feature-extraction                                                                                                                            |
| Dpt                       | feature-extraction                                                                                                                            |
| ELECTRA                   | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| ESM                       | feature-extraction, fill-mask, text-classification, token-classification                                                                      |
| FlauBERT                  | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| Granite                   | text-generation                                                                                                                               |
| Hubert                    | feature-extraction, automatic-speech-recognition, audio-classification                                                                        |
| Levit                     | feature-extraction, image-classification                                                                                                      |
| Llama, Llama 2, Llama 3   | text-generation                                                                                                                               |
| Llama 4                   | text-generation                                                                                                                               |
| Mixtral                   | text-generation                                                                                                                               |
| MobileBERT                | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| MobileNetV2               | feature-extraction, image-classification, semantic-segmentation                                                                               |
| MobileViT                 | feature-extraction, image-classification, semantic-segmentation                                                                               |
| ModernBERT                | feature-extraction, fill-mask, text-classification, token-classification                                                                      |
| MPNet                     | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| Phi3                      | text-generation                                                                                 |
| Phi                       | feature-extraction, text-classification, token-classification                                                                                 |
| Qwen2, Qwen3, Qwen3Moe    | text-generation                                                                                 |
| RoBERTa                   | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| RoFormer                  | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| SmolLM3                   | text-generation                                                                                                                               |
| Swin                      | feature-extraction, image-classification                                                                                                      |
| T5                        | text2text-generation                                                                                                                          |
| UniSpeech                 | feature-extraction, automatic-speech-recognition, audio-classification                                                                        |
| UniSpeech-SAT             | feature-extraction, automatic-speech-recognition, audio-classification, audio-frame-classification, audio-xvector                             |
| ViT                       | feature-extraction, image-classification                                                                                                      |
| Wav2Vec2                  | feature-extraction, automatic-speech-recognition, audio-classification, audio-frame-classification, audio-xvector                             |
| WavLM                     | feature-extraction, automatic-speech-recognition, audio-classification, audio-frame-classification, audio-xvector                             |
| Whisper                   | automatic-speech-recognition                                                                                                                  |
| XLM                       | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| XLM-RoBERTa               | feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification                                 |
| Yolos                     | feature-extraction, object-detection                                                                                                          |


### Diffusers

| Architecture                  | Task                                                                                                                                   |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| Stable Diffusion              | text-to-image, image-to-image, inpaint                                                                                                 |
| Stable Diffusion XL Base      | text-to-image, image-to-image, inpaint                                                                                                 |
| Stable Diffusion XL Refiner   | image-to-image, inpaint                                                                                                                |
| SDXL Turbo                    | text-to-image, image-to-image, inpaint                                                                                                 |
| LCM                           | text-to-image                                                                                                                          |
| PixArt-α                      | text-to-image                                                                                                                          |
| PixArt-Σ                      | text-to-image                                                                                                                          |
| Flux                          | text-to-image, inpaint                                                                                                                 |
| Flux Kontext                  | text-to-image, image-to-image                                                                                                          |

### Sentence Transformers

| Architecture                  | Task                                                                                                                                   |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| Transformer                   | feature-extraction, sentence-similarity                                                                                                |
| CLIP                          | feature-extraction, zero-shot-image-classification                                                                                     |


<Tip>

 To learn how to export a model for inference, you can check this [guide](https://huggingface.co/docs/optimum-neuron/guides/export_model#selecting-a-task).

</Tip>
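
For instance, a minimal sketch of exporting one of the encoder models listed above with the Optimum CLI (the model name, task, and shapes are illustrative; see the Quickstart for the full set of options):

```bash
optimum-cli export neuron \
  --model bert-base-uncased \
  --task text-classification \
  --batch_size 1 \
  --sequence_length 128 \
  bert_base_uncased_neuron/
```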

### Optimum Neuron Container
https://huggingface.co/docs/optimum.neuron/v0.4.0/containers.md

# Optimum Neuron Container

We provide pre-built Optimum Neuron containers for Amazon SageMaker. These containers come with all of the Hugging Face libraries and dependencies pre-installed, so you can start using them right away.
We have containers for training and inference, as well as optimized text generation containers with TGI. The table below is kept up to date and only includes the latest version of each container. You can find older versions in the [Deep Learning Container Release Notes](https://github.com/aws/deep-learning-containers/releases?q=hf-neuronx&expanded=true).

We provide the `image_uri` function to retrieve the image URI for the container you want to use. The result is equivalent to what the `sagemaker` Python SDK returns, but the retrieved image URI can be newer than the one reported by the SDK.
```python
from optimum.neuron.utils import ecr

# retrieve the llm image uri
llm_image = ecr.image_uri("tgi")

print(f"llm image uri: {llm_image}")

```

## Available Optimum Neuron Containers

| Type                       | Optimum Version | Image URI                                   |
|-----------------------------|-----------------|---------------------------------------------|
| Training  | 0.0.25           | `763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-training-neuronx:2.1.2-transformers4.48.1-neuronx-py310-sdk2.20.0-ubuntu20.04`   |
| Inference      | 0.0.25           | `763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-inference-neuronx:2.1.2-transformers4.43.2-neuronx-py310-sdk2.20.0-ubuntu20.04`      |
| Text Generation Inference        | 0.2.0           | `763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-tgi-inference:2.5.1-optimum3.3.4-neuronx-py310-ubuntu22.04`        |


Please replace `763104351884` with the [AWS account ID](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/image_uri_config/huggingface-neuronx.json) that matches your region, and `us-west-2` with the AWS region you are working in.
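
As an illustration, here is a minimal sketch that builds the URI for another region. The account ID shown is illustrative; the authoritative per-region mapping is in the JSON file linked above:

```python
# Minimal sketch: assemble an ECR image URI for a given region.
# NOTE: the registry account ID can differ per region; look it up in the
# huggingface-neuronx.json file linked above.
def container_uri(account_id: str, region: str, repository: str, tag: str) -> str:
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

print(container_uri(
    account_id="763104351884",  # replace with the account ID for your region
    region="us-east-1",         # replace with your region
    repository="huggingface-pytorch-training-neuronx",
    tag="2.1.2-transformers4.48.1-neuronx-py310-sdk2.20.0-ubuntu20.04",
))
```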

### Quickstart
https://huggingface.co/docs/optimum.neuron/v0.4.0/quickstart.md

# Quickstart

🤗 Optimum Neuron makes AWS accelerator adoption seamless for Hugging Face users with **drop-in replacements** for standard training and inference components.

*🚀 **Need to set up your environment first?** Check out our [EC2 Setup](ec2-setup) page for complete installation and AWS setup instructions.*

**Key Features:**
- 🔄 **Drop-in replacement** for standard Transformers training and inference
- ⚡ **Distributed training** support with minimal code changes  
- 🎯 **Optimized models** for AWS accelerators
- 📈 **Production-ready** inference with compiled models

## Training

Training on AWS Trainium requires minimal changes to your existing code - just swap in Optimum Neuron's drop-in replacements:

```python
import torch
import torch_xla.runtime as xr

from datasets import load_dataset
from transformers import AutoTokenizer

# Optimum Neuron's drop-in replacements for standard training components
from optimum.neuron import NeuronSFTConfig, NeuronSFTTrainer, NeuronTrainingArguments
from optimum.neuron.models.training import NeuronModelForCausalLM


def format_dolly_dataset(example):
    """Format Dolly dataset into instruction-following format."""
    instruction = f"### Instruction\n{example['instruction']}"
    context = f"### Context\n{example['context']}" if example["context"] else None
    response = f"### Answer\n{example['response']}"
    
    # Combine all parts with double newlines
    parts = [instruction, context, response]
    return "\n\n".join(part for part in parts if part)


def main():
    # Load instruction-following dataset
    dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
    
    # Model configuration
    model_id = "Qwen/Qwen3-1.7B"
    output_dir = "qwen3-1.7b-finetuned"
    
    # Setup tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    
    # Configure training for Trainium
    training_args = NeuronTrainingArguments(
        learning_rate=1e-4,
        tensor_parallel_size=8,  # Split model across 8 accelerators
        per_device_train_batch_size=1,  # Batch size per device
        gradient_accumulation_steps=8,
        logging_steps=1,
        output_dir=output_dir,
    )
    
    # Load model optimized for Trainium
    model = NeuronModelForCausalLM.from_pretrained(
        model_id,
        training_args.trn_config,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",  # Enable flash attention
    )
    
    # Setup supervised fine-tuning
    sft_config = NeuronSFTConfig(
        max_seq_length=2048,
        packing=True,  # Pack multiple samples for efficiency
        **training_args.to_dict(),
    )
    
    # Initialize trainer and start training
    trainer = NeuronSFTTrainer(
        model=model,
        args=sft_config,
        tokenizer=tokenizer,
        train_dataset=dataset,
        formatting_func=format_dolly_dataset,
    )
    
    trainer.train()
    
    # Share your model with the community
    trainer.push_to_hub(
        commit_message="Fine-tuned on Databricks Dolly dataset",
        blocking=True,
        model_name=output_dir,
    )
    
    if xr.local_ordinal() == 0:
        print(f"Training complete! Model saved to {output_dir}")


if __name__ == "__main__":
    main()
```

This example demonstrates supervised fine-tuning on the [Databricks Dolly dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) using `NeuronSFTTrainer` and `NeuronModelForCausalLM` - the Trainium-optimized versions of standard Transformers components.

### Running Training

**Compilation** (optional, recommended before the first run):
```bash
NEURON_CC_FLAGS="--model-type transformer" neuron_parallel_compile torchrun --nproc_per_node 32 sft_finetune_qwen3.py
```

**Training:**
```bash
NEURON_CC_FLAGS="--model-type transformer" torchrun --nproc_per_node 32 sft_finetune_qwen3.py
```

## Inference

Optimized inference requires two steps: **export** your model to Neuron format, then **run** it with `NeuronModelForXXX` classes.

### 1. Export Your Model

```bash
optimum-cli export neuron \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  --batch_size 1 \
  --sequence_length 32 \
  --auto_cast matmul \
  --auto_cast_type bf16 \
  distilbert_base_uncased_finetuned_sst2_english_neuron/
```

This exports the model with optimized settings: static shapes (`batch_size=1`, `sequence_length=32`) and BF16 precision for `matmul` operations. Check out the [exporter guide](https://huggingface.co/docs/optimum-neuron/guides/export_model) for more compilation options.
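
Alternatively, the export can also be done from Python by passing `export=True`. Here is a minimal sketch, assuming the keyword arguments mirror the CLI flags above:

```python
from optimum.neuron import NeuronModelForSequenceClassification

# Compile the model to Neuron format from Python (assumption: the keyword
# arguments mirror the CLI flags shown above).
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,
    batch_size=1,
    sequence_length=32,
    auto_cast="matmul",
    auto_cast_type="bf16",
)
model.save_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron/")
```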

### 2. Run Inference

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

# Load the compiled Neuron model
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert_base_uncased_finetuned_sst2_english_neuron"
)

# Setup tokenizer (same as original model)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

# Run inference
inputs = tokenizer("Hamilton is considered to be the best musical of past years.", return_tensors="pt")
logits = model(**inputs).logits

print(model.config.id2label[logits.argmax().item()])
# 'POSITIVE'
```

The `NeuronModelForXXX` classes work as drop-in replacements for their `AutoModelForXXX` counterparts, making migration seamless.
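
If you prefer the `pipeline` API, Optimum Neuron also provides a Neuron-aware variant (see the [pipelines guide](guides/pipelines)). A minimal sketch reusing the compiled model from above:

```python
from optimum.neuron import pipeline

# Neuron-aware pipeline; assumes the compiled model directory from the export step above
classifier = pipeline(
    "text-classification",
    model="distilbert_base_uncased_finetuned_sst2_english_neuron",
)
print(classifier("Hamilton is considered to be the best musical of past years."))
```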

## Next Steps

Ready to dive deeper? Check out our comprehensive guides:

- 📚 **[EC2 Setup](ec2-setup)** - Complete setup and installation
- 🏋️ **[Training Tutorials](training_tutorials/notebooks)** - End-to-end training examples
- 🔧 **[Export Guide](guides/export_model)** - Advanced model compilation options

### 🤗 Optimum Neuron
https://huggingface.co/docs/optimum.neuron/v0.4.0/index.md

# 🤗 Optimum Neuron

🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS Accelerators including [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/?nc1=h_ls) and [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls).
It provides a set of tools enabling easy model loading, training and inference on single- and multi-Accelerator settings for different downstream tasks.
The list of officially validated models and tasks is available [here](https://huggingface.co/docs/optimum-neuron/package_reference/configuration#supported-architectures).

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a
      class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
      href="./tutorials/fine_tune_bert"
    >
      <div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Tutorials
      </div>
      <p class="text-gray-700">
        Learn the basics and become familiar with training & deploying transformers on AWS Trainium and AWS Inferentia.
        Start here if you are using 🤗 Optimum Neuron for the first time!
      </p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guides/setup_aws_instance">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        How-to guides
      </div>
      <p class="text-gray-700">
        Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum
        Neuron to solve real-world problems.
      </p>
    </a>
    <a
      class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
      href="./package_reference/trainer"
    >
      <div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        Reference
      </div>
      <p class="text-gray-700">Technical descriptions of how the classes and methods of 🤗 Optimum Neuron work.</p>
    </a>
    <a
      class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" 
      onclick="event.preventDefault(); window.open('https://huggingface.co/datasets/sanhitsa/AWS-White-Papers-and-Blogs/resolve/main/Whitepaper-Scale-production-AI-with-Hugging-Face-Optimum-Neuron-and-AWS-Trainium-and-Inferentia.pdf', '_blank');"
    >
      <div 
        class="w-full text-center bg-gradient-to-br from-red-600 to-red-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        White Paper 
      </div>
      <p 
        class="text-gray-700">To learn more about how Optimum Neuron and AWS Inferentia and Trainium are being used by companies, read the White Paper.
      </p>
    </a>
  </div>
</div>

### Llama-3.3-70b performance on AWS Inferentia2 (Latency & Throughput)
https://huggingface.co/docs/optimum.neuron/v0.4.0/benchmarks/inferentia-llama3.3-70b.md

# Llama-3.3-70b performance on AWS Inferentia2 (Latency & Throughput)

How fast is Llama-3.3-70b on Inferentia2? Let's find out!

For this benchmark we will use the following configurations:

| Model type        | batch_size | sequence_length |
|-------------------|------------|-----------------|
| Llama3.3 70b BS1  | 1          | 4096            |
| Llama3.3 70b BS4  | 4          | 4096            |
| Llama3.3 70b BS8  | 8          | 4096            |

*Note: all models are compiled to use 12 devices corresponding to 24 cores on the `inf2.48xlarge` instance.*

*Note: please refer to the [inferentia2 product page](https://aws.amazon.com/ec2/instance-types/inf2/) for details on the available instances.*

## Time to first token

The time to first token is the time required to process the input tokens and generate the first output token.
It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens.

We test the time to first token for increasing context sizes, from a typical Q/A usage, to heavy Retrieval Augmented Generation (RAG) use-cases.

Time to first token is expressed in **seconds**.

![Llama3.3 70b inferentia2 TTFT](https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/benchmarks/inferentia-llama3.3-70b/ttft.png "Time to first token")

## Inter-token Latency

The inter-token latency corresponds to the average time elapsed between two generated tokens.

It is expressed in **milliseconds**.

![Llama3.3 70b inferentia2 inter-token latency](https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/benchmarks/inferentia-llama3.3-70b/latency.png "Inter-token latency")

## Throughput

Unlike some other benchmarks, we evaluate the throughput using generated tokens only, by dividing their number
by the end-to-end latency.
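
In other words (a minimal sketch of the metric, not of the benchmark harness itself):

```python
def throughput_tokens_per_second(generated_tokens: int, end_to_end_latency_s: float) -> float:
    """Throughput counts generated tokens only, divided by end-to-end latency."""
    return generated_tokens / end_to_end_latency_s
```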

Throughput is expressed in **tokens/second**.

![Llama3.3 70b inferentia2 throughput](https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/benchmarks/inferentia-llama3.3-70b/throughput.png "Throughput")

### Llama-3.1-8b performance on AWS Inferentia2 (Latency & Throughput)
https://huggingface.co/docs/optimum.neuron/v0.4.0/benchmarks/inferentia-llama3.1-8b.md

# Llama-3.1-8b performance on AWS Inferentia2 (Latency & Throughput)

How fast is Llama-3.1-8b on Inferentia2? Let's find out!

For this benchmark we will use the following configurations:

| Model type       | batch_size | sequence_length |
|------------------|------------|-----------------|
| Llama3.1 8b BS1  | 1          | 4096            |
| Llama3.1 8b BS4  | 4          | 4096            |
| Llama3.1 8b BS8  | 8          | 4096            |
| Llama3.1 8b BS16 | 16         | 4096            |
| Llama3.1 8b BS32 | 32         | 4096            |
| Llama3.1 8b BS48 | 48         | 4096            |

*Note: all models are compiled to use 4 devices corresponding to 8 cores on the `inf2.48xlarge` instance.*

*Note: please refer to the [inferentia2 product page](https://aws.amazon.com/ec2/instance-types/inf2/) for details on the available instances.*

## Time to first token

The time to first token is the time required to process the input tokens and generate the first output token.
It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens.

We test the time to first token for increasing context sizes, from a typical Q/A usage, to heavy Retrieval Augmented Generation (RAG) use-cases.

Time to first token is expressed in **seconds**.

![Llama3.1 8b inferentia2 TTFT](https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/benchmarks/inferentia-llama3.1-8b/ttft.png "Time to first token")

## Inter-token Latency

The inter-token latency corresponds to the average time elapsed between two generated tokens.

It is expressed in **milliseconds**.

![Llama3.1 8b inferentia2 inter-token latency](https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/benchmarks/inferentia-llama3.1-8b/latency.png "Inter-token latency")

## Throughput

Unlike some other benchmarks, we evaluate the throughput using generated tokens only, by dividing their number
by the end-to-end latency.

Throughput is expressed in **tokens/second**.

![Llama3.1 8b inferentia2 throughput](https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/benchmarks/inferentia-llama3.1-8b/throughput.png "Throughput")

### Model Weight Transformation Specs
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/transformations.md

# Model Weight Transformation Specs

The transformation specs API defines how model weights are transformed between the original Transformers implementation and the custom implementation optimized for Neuron devices. This enables automatic weight conversion during model loading and checkpoint consolidation.
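
Conceptually, each spec describes, for one module, how to map weights from the original checkpoint layout to the Neuron-optimized layout and back. As a purely illustrative sketch (not the actual API), the kind of fusion that `FusedLinearsSpec` below represents amounts to concatenating the individual projection weights and splitting them again on the way back (the tensor shapes here are arbitrary):

```python
import torch

# Illustrative only: fuse two projection weights into one linear weight along
# the output dimension (original -> custom layout), then split them back
# (custom -> original layout). The real specs also handle tensor-parallel sharding.
gate_proj = torch.randn(256, 128)
up_proj = torch.randn(256, 128)

fused = torch.cat([gate_proj, up_proj], dim=0)
gate_back, up_back = torch.split(fused, [256, 256], dim=0)

assert torch.equal(gate_proj, gate_back) and torch.equal(up_proj, up_back)
```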

## Base Classes

### ModelWeightTransformationSpec[[optimum.neuron.models.training.ModelWeightTransformationSpec]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.models.training.ModelWeightTransformationSpec</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L91</source><parameters>[]</parameters></docstring>

This class defines the interface for transforming model weights between the original Transformers implementation
and the custom implementation for Neuron.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>adapt_peft_config</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec.adapt_peft_config</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L121</source><parameters>[{"name": "peft_config", "val": ": PeftConfig"}, {"name": "inplace", "val": ": bool = False"}]</parameters></docstring>

Adapts the PEFT config to match the custom modeling implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>adapt_state_dict</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec.adapt_state_dict</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L157</source><parameters>[{"name": "module_fully_qualified_name", "val": ": str"}, {"name": "named_parameters", "val": ": dict[str, torch.nn.parameter.Parameter]"}, {"name": "orig_state_dict", "val": ": dict[str, torch.Tensor]"}, {"name": "upstanding_sharded_params", "val": ": dict[str, torch.Tensor]"}, {"name": "inplace", "val": ": bool = False"}]</parameters></docstring>

Transforms the state dict from the original Transformers model to match the custom modeling implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_relevant_parameter_names</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec.get_relevant_parameter_names</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L107</source><parameters>[{"name": "module_fully_qualified_name", "val": ": str"}]</parameters></docstring>

Returns the set of parameter names that this spec would affect.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>guess_peft_type</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec.guess_peft_type</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L114</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "module_fully_qualified_name", "val": ": str"}]</parameters></docstring>

Guesses the PEFT type of the module associated to the spec.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_original_peft_config</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec.to_original_peft_config</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L128</source><parameters>[{"name": "peft_config", "val": ": PeftConfig"}, {"name": "inplace", "val": ": bool = False"}]</parameters></docstring>

Restores the PEFT config to the original one that matches the original Transformers implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_original_weights</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpec.to_original_weights</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L207</source><parameters>[{"name": "module_fully_qualified_name", "val": ": str"}, {"name": "sharded_state_dicts", "val": ": dict[str, list[torch.Tensor]]"}, {"name": "parameters_metadata", "val": ": dict[str, dict[str, typing.Any]]"}]</parameters><paramsdesc>- **sharded_state_dicts** (dict[str, list[torch.Tensor]]) -- The sharded state dicts from the custom modeling
  implementation.
- **parameters_metadata** (dict[str, dict[str, Any]]) -- Metadata about the parameters in the original model.</paramsdesc><paramgroups>0</paramgroups><rettype>tuple[dict[str, torch.Tensor], list[str]]</rettype><retdesc>A tuple containing the transformed weights and a list of the
names of the parameters to remove from the final state dict.</retdesc></docstring>

Produces the weights associated to this transformation spec from the custom model to match the original
Transformers weights.








</div></div>

### ModelWeightTransformationSpecs[[optimum.neuron.models.training.ModelWeightTransformationSpecs]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.models.training.ModelWeightTransformationSpecs</name><anchor>optimum.neuron.models.training.ModelWeightTransformationSpecs</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L239</source><parameters>[{"name": "module_fully_qualified_name", "val": ": str | None = None"}, {"name": "specs", "val": ": optimum.neuron.models.training.transformations_utils.ModelWeightTransformationSpec | list[optimum.neuron.models.training.transformations_utils.ModelWeightTransformationSpec] = <factory>"}]</parameters></docstring>

Defines a list of transformation specs for a given module of the model.


</div>

### CustomModule[[optimum.neuron.models.training.CustomModule]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.models.training.CustomModule</name><anchor>optimum.neuron.models.training.CustomModule</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L342</source><parameters>[]</parameters></docstring>

This class is used to mark a module as a custom module. It is used to identify the modules that contain weights
that need to be transformed when loading and saving the model.


</div>

## Transformation Specifications

### FusedLinearsSpec[[optimum.neuron.models.training.FusedLinearsSpec]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.models.training.FusedLinearsSpec</name><anchor>optimum.neuron.models.training.FusedLinearsSpec</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L365</source><parameters>[{"name": "fused_linear_name", "val": ": str"}, {"name": "linear_names", "val": ": list[str]"}, {"name": "bias", "val": ": bool"}, {"name": "fuse_axis", "val": ": typing.Union[typing.Literal[0], typing.Literal[1], typing.Literal['column'], typing.Literal['row']]"}, {"name": "original_dims", "val": ": list[int]"}, {"name": "tp_size", "val": ": int = <factory>"}]</parameters></docstring>

Represents a transformation where multiple linear layers are fused into a single linear layer.
It can handle the case where the fused linear layer is sharded across multiple tensor parallel ranks.


</div>

### GQAQKVColumnParallelLinearSpec[[optimum.neuron.models.training.GQAQKVColumnParallelLinearSpec]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.models.training.GQAQKVColumnParallelLinearSpec</name><anchor>optimum.neuron.models.training.GQAQKVColumnParallelLinearSpec</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L736</source><parameters>[{"name": "gqa_qkv_projection_name", "val": ": str"}, {"name": "query_projection_name", "val": ": str"}, {"name": "key_projection_name", "val": ": str"}, {"name": "value_projection_name", "val": ": str"}, {"name": "output_projection_name", "val": ": str"}, {"name": "num_attention_heads", "val": ": int"}, {"name": "num_key_value_heads", "val": ": int"}, {"name": "kv_size_multiplier", "val": ": int"}, {"name": "q_output_size_per_partition", "val": ": int"}, {"name": "kv_output_size_per_partition", "val": ": int"}, {"name": "fuse_qkv", "val": ": bool"}, {"name": "bias", "val": ": bool"}, {"name": "tp_size", "val": ": int = <factory>"}]</parameters></docstring>

Represents the transformation of separate query, key, and value projections into a single GQAQKVColumnParallelLinear
projection.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_query_indices_for_rank</name><anchor>optimum.neuron.models.training.GQAQKVColumnParallelLinearSpec.compute_query_indices_for_rank</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L808</source><parameters>[{"name": "tp_size", "val": ": int"}, {"name": "tp_rank", "val": ": int"}, {"name": "num_attention_heads", "val": ": int"}, {"name": "num_key_value_heads", "val": ": int"}, {"name": "kv_size_multiplier", "val": ": int"}]</parameters></docstring>

Computes the permutation for the query weight for a given TP rank.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_kv_proj_local_weight_from_regular_weight</name><anchor>optimum.neuron.models.training.GQAQKVColumnParallelLinearSpec.create_kv_proj_local_weight_from_regular_weight</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L851</source><parameters>[{"name": "weight_data", "val": ": Tensor"}, {"name": "kv_size_multiplier", "val": ": int"}, {"name": "output_size_per_partition", "val": ": int"}]</parameters></docstring>

Creates the local version of the key or value projections weight for the given TP rank.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_query_or_output_projection_local_weight_from_regular_weight</name><anchor>optimum.neuron.models.training.GQAQKVColumnParallelLinearSpec.create_query_or_output_projection_local_weight_from_regular_weight</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L866</source><parameters>[{"name": "weight_data", "val": ": Tensor"}, {"name": "num_attention_heads", "val": ": int"}, {"name": "num_key_value_heads", "val": ": int"}, {"name": "kv_size_multiplier", "val": ": int"}, {"name": "query_or_output_proj", "val": ": typing.Union[typing.Literal['query'], typing.Literal['output']]"}]</parameters></docstring>

Creates the local version of the query or output projections weight for the given TP rank.


</div></div>
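
The sketch below builds such a spec with illustrative values for a hypothetical Llama-like attention module (32 query heads, 8 key-value heads, a head dimension of 128) sharded across 32 tensor parallel ranks; the actual values depend on the model and parallel configuration:

```python
from optimum.neuron.models.training import GQAQKVColumnParallelLinearSpec

spec = GQAQKVColumnParallelLinearSpec(
    gqa_qkv_projection_name="qkv_proj",
    query_projection_name="q_proj",
    key_projection_name="k_proj",
    value_projection_name="v_proj",
    output_projection_name="o_proj",
    num_attention_heads=32,
    num_key_value_heads=8,
    kv_size_multiplier=4,              # illustrative: KV heads replicated to cover 32 ranks
    q_output_size_per_partition=128,   # 32 heads * 128 head_dim / 32 ranks
    kv_output_size_per_partition=128,  # 8 heads * 128 head_dim * 4 / 32 ranks
    fuse_qkv=False,
    bias=False,
)
```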

## Utility Functions

### Weight Creation Functions[[optimum.neuron.models.training.transformations_utils.create_local_weight_with_padding]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.transformations_utils.create_local_weight_with_padding</name><anchor>optimum.neuron.models.training.transformations_utils.create_local_weight_with_padding</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L54</source><parameters>[{"name": "full_weight", "val": ": Tensor"}, {"name": "partition_dim", "val": ": int"}, {"name": "stride", "val": ": int"}, {"name": "out_weight", "val": ": torch.Tensor | None = None"}]</parameters></docstring>

Shards a tensor along a given axis and returns the slice corresponding to the rank.
If the tensor needs to be padded, its size is rounded up to the next multiple before sharding.


</div>
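
A minimal sketch of a call, assuming the tensor parallel group has already been initialized so that the current rank can be resolved; the shapes are illustrative:

```python
import torch

from optimum.neuron.models.training.transformations_utils import create_local_weight_with_padding

# Shard a [out_features, in_features] weight along its output dimension (dim 0)
# for the current tensor parallel rank.
full_weight = torch.randn(4096, 4096)
local_weight = create_local_weight_with_padding(full_weight, partition_dim=0, stride=1)
```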

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.transformations_utils.create_local_fused_weight</name><anchor>optimum.neuron.models.training.transformations_utils.create_local_fused_weight</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L73</source><parameters>[{"name": "tp_rank", "val": ""}, {"name": "tp_size", "val": ""}, {"name": "individual_weights", "val": ""}, {"name": "partition_dim", "val": ""}, {"name": "fuse_axis", "val": ""}, {"name": "out_weight", "val": " = None"}]</parameters></docstring>

Shards individual weights across the tensor parallel ranks and fuses them into a single weight.


</div>

### Model-level Functions[[optimum.neuron.models.training.specialize_transformation_specs_for_model]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.specialize_transformation_specs_for_model</name><anchor>optimum.neuron.models.training.specialize_transformation_specs_for_model</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1458</source><parameters>[{"name": "model", "val": ": Module"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.adapt_peft_config_for_model</name><anchor>optimum.neuron.models.training.adapt_peft_config_for_model</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1467</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "peft_config", "val": ": peft.config.PeftConfig | dict[str, peft.config.PeftConfig]"}, {"name": "inplace", "val": ": bool = False"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.to_original_peft_config_for_model</name><anchor>optimum.neuron.models.training.to_original_peft_config_for_model</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1484</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "inplace", "val": ": bool = False"}]</parameters></docstring>


</div>

### State Dict Functions[[optimum.neuron.models.training.adapt_state_dict]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.adapt_state_dict</name><anchor>optimum.neuron.models.training.adapt_state_dict</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1516</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "state_dict", "val": ": dict[str, torch.Tensor]"}, {"name": "upstanding_sharded_params", "val": ": dict[str, torch.Tensor]"}, {"name": "inplace", "val": ": bool = False"}, {"name": "**peft_kwargs", "val": ": Any"}]</parameters></docstring>

Transforms the state dict from the original Transformers model to match the custom modeling implementation.


</div>
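
A minimal sketch of the intended flow, assuming `custom_model` is a custom Neuron training model and `original_state_dict` comes from the corresponding Transformers checkpoint; passing an empty dict for `upstanding_sharded_params` is only an illustrative placeholder:

```python
from optimum.neuron.models.training import adapt_state_dict

# `custom_model` and `original_state_dict` are assumed to already exist.
adapted_state_dict = adapt_state_dict(
    custom_model,
    original_state_dict,
    upstanding_sharded_params={},  # hypothetical placeholder
    inplace=False,
)
```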

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.to_original_weights</name><anchor>optimum.neuron.models.training.to_original_weights</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1590</source><parameters>[{"name": "transformations_specs", "val": ": list[optimum.neuron.models.training.transformations_utils.ModelWeightTransformationSpecs]"}, {"name": "sharded_state_dicts", "val": ": dict[str, list[torch.Tensor]]"}, {"name": "parameters_metadata", "val": ": dict[str, dict[str, typing.Any]]"}, {"name": "**peft_kwargs", "val": ": Any"}]</parameters></docstring>

Consolidates the sharded state dicts produced by saving the custom model into a single state dict that matches the
original Transformers model weights.


</div>

### Metadata Functions[[optimum.neuron.models.training.create_parameter_metadata]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.create_parameter_metadata</name><anchor>optimum.neuron.models.training.create_parameter_metadata</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1659</source><parameters>[{"name": "model", "val": ""}]</parameters></docstring>

Creates the metadata to be saved with the model weights to be able to reconstruct the original weights when
consolidating the sharded state dicts.


</div>
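
As a short sketch, assuming `model` is a custom Neuron training model:

```python
from optimum.neuron.models.training import create_parameter_metadata

metadata = create_parameter_metadata(model)
# `metadata` can later be supplied as `parameters_metadata` to `to_original_weights`
# when consolidating the sharded checkpoints.
```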

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.transformations_utils.get_tensor_model_parallel_attributes</name><anchor>optimum.neuron.models.training.transformations_utils.get_tensor_model_parallel_attributes</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1645</source><parameters>[{"name": "tensor", "val": ": Tensor"}]</parameters></docstring>

Returns the tensor model parallel attributes of a tensor.


</div>

### Helper Functions[[optimum.neuron.models.training.transformations_utils.remove_adapter_name]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.transformations_utils.remove_adapter_name</name><anchor>optimum.neuron.models.training.transformations_utils.remove_adapter_name</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1501</source><parameters>[{"name": "name", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.transformations_utils.is_base_layer</name><anchor>optimum.neuron.models.training.transformations_utils.is_base_layer</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1505</source><parameters>[{"name": "name", "val": ": str"}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.models.training.transformations_utils.get_adapter_name</name><anchor>optimum.neuron.models.training.transformations_utils.get_adapter_name</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/training/transformations_utils.py#L1509</source><parameters>[{"name": "parameter_fully_qualified_name", "val": ": str"}]</parameters></docstring>


</div>

### LoRA for Neuron
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/lora.md

# LoRA for Neuron

LoRA (Low-Rank Adaptation) implementation optimized for distributed training on AWS Trainium devices. This module provides efficient parameter-efficient fine-tuning with tensor parallelism and sequence parallelism support.

## PEFT Model Classes

### NeuronPeftModel[[optimum.neuron.peft.NeuronPeftModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.NeuronPeftModel</name><anchor>optimum.neuron.peft.NeuronPeftModel</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/peft_model.py#L82</source><parameters>[{"name": "model", "val": ": PreTrainedModel"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "**kwargs", "val": ": Any"}]</parameters></docstring>


</div>

### NeuronPeftModelForCausalLM[[optimum.neuron.peft.NeuronPeftModelForCausalLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.NeuronPeftModelForCausalLM</name><anchor>optimum.neuron.peft.NeuronPeftModelForCausalLM</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/peft_model.py#L463</source><parameters>[{"name": "model", "val": ": PreTrainedModel"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "**kwargs", "val": ": Any"}]</parameters></docstring>


</div>

## LoRA Layer Implementations

### Base LoRA Layer[[optimum.neuron.peft.tuners.lora.layer.NeuronLoraLayer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.tuners.lora.layer.NeuronLoraLayer</name><anchor>optimum.neuron.peft.tuners.lora.layer.NeuronLoraLayer</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/tuners/lora/layer.py#L73</source><parameters>[{"name": "base_layer", "val": ": Module"}, {"name": "ephemeral_gpu_offload", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

### Parallel Linear LoRA[[optimum.neuron.peft.tuners.lora.layer.ParallelLinear]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.tuners.lora.layer.ParallelLinear</name><anchor>optimum.neuron.peft.tuners.lora.layer.ParallelLinear</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/tuners/lora/layer.py#L224</source><parameters>[{"name": "base_layer", "val": ""}, {"name": "adapter_name", "val": ": str"}, {"name": "r", "val": ": int = 0"}, {"name": "lora_alpha", "val": ": int = 1"}, {"name": "lora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "is_target_conv_1d_layer", "val": ": bool = False"}, {"name": "init_lora_weights", "val": ": bool | str = True"}, {"name": "use_rslora", "val": ": bool = False"}, {"name": "use_dora", "val": ": bool = False"}, {"name": "lora_bias", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

### GQA QKV Column Parallel LoRA[[optimum.neuron.peft.tuners.lora.layer.GQAQKVColumnParallelLinear]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.tuners.lora.layer.GQAQKVColumnParallelLinear</name><anchor>optimum.neuron.peft.tuners.lora.layer.GQAQKVColumnParallelLinear</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/tuners/lora/layer.py#L315</source><parameters>[{"name": "base_layer", "val": ""}, {"name": "adapter_name", "val": ": str"}, {"name": "r", "val": ": int = 0"}, {"name": "lora_alpha", "val": ": int = 1"}, {"name": "lora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "is_target_conv_1d_layer", "val": ": bool = False"}, {"name": "init_lora_weights", "val": ": bool | str = True"}, {"name": "use_rslora", "val": ": bool = False"}, {"name": "use_dora", "val": ": bool = False"}, {"name": "lora_bias", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

### Parallel Embedding LoRA[[optimum.neuron.peft.tuners.lora.layer.ParallelEmbedding]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.tuners.lora.layer.ParallelEmbedding</name><anchor>optimum.neuron.peft.tuners.lora.layer.ParallelEmbedding</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/tuners/lora/layer.py#L488</source><parameters>[{"name": "base_layer", "val": ": Module"}, {"name": "adapter_name", "val": ": str"}, {"name": "r", "val": ": int = 0"}, {"name": "lora_alpha", "val": ": int = 1"}, {"name": "lora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "init_lora_weights", "val": ": bool | str = True"}, {"name": "use_rslora", "val": ": bool = False"}, {"name": "use_dora", "val": ": bool = False"}, {"name": "lora_bias", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## LoRA Model

### NeuronLoraModel[[optimum.neuron.peft.tuners.NeuronLoraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.peft.tuners.NeuronLoraModel</name><anchor>optimum.neuron.peft.tuners.NeuronLoraModel</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/tuners/lora/model.py#L29</source><parameters>[{"name": "model", "val": ""}, {"name": "config", "val": ""}, {"name": "adapter_name", "val": ""}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters></docstring>


</div>

## Utility Functions

### get_peft_model[[optimum.neuron.peft.get_peft_model]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.neuron.peft.get_peft_model</name><anchor>optimum.neuron.peft.get_peft_model</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/peft/mapping_func.py#L43</source><parameters>[{"name": "model", "val": ": PreTrainedModel"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "mixed", "val": ": bool = False"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "revision", "val": ": str | None = None"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters></docstring>


</div>
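
A short usage sketch, assuming `model` is a custom Neuron training model and that the target module names match its attention projections; the LoRA hyperparameters are illustrative:

```python
from peft import LoraConfig

from optimum.neuron.peft import get_peft_model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative target modules
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
```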

## Architecture Support

The Neuron LoRA implementation supports the following parallel layer types:

- **ColumnParallelLinear**: For layers that split weights along the output dimension
- **RowParallelLinear**: For layers that split weights along the input dimension  
- **ParallelEmbedding**: For embedding layers distributed across ranks
- **GQAQKVColumnParallelLinear**: For grouped query attention projections when the tensor parallel size exceeds the number of key-value heads

Each layer type has a corresponding LoRA implementation that maintains the parallelization strategy while adding low-rank adaptation capabilities.

## Key Features

- **Distributed Training**: Full support for tensor parallelism and sequence parallelism
- **Checkpoint Consolidation**: Automatic conversion between sharded and consolidated checkpoints
- **Weight Transformation**: Seamless integration with model weight transformation specs
- **Compatibility**: Works with all supported custom modeling architectures in Optimum Neuron

### Neuron TRL Trainers
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/trl_trainers.md

# Neuron TRL Trainers

[TRL](https://huggingface.co/docs/trl/en/index)-compatible trainers for AWS Trainium accelerators.

## NeuronSFTTrainer

### NeuronSFTConfig[[optimum.neuron.NeuronSFTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronSFTConfig</name><anchor>optimum.neuron.NeuronSFTConfig</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/sft_config.py#L34</source><parameters>[{"name": "output_dir", "val": ": str | None = None"}, {"name": "overwrite_output_dir", "val": ": bool = False"}, {"name": "do_train", "val": ": bool = False"}, {"name": "do_eval", "val": ": bool = False"}, {"name": "eval_strategy", "val": ": transformers.trainer_utils.IntervalStrategy | str = 'no'"}, {"name": "per_device_train_batch_size", "val": ": int = 1"}, {"name": "per_device_eval_batch_size", "val": ": int = 1"}, {"name": "gradient_accumulation_steps", "val": ": int = 1"}, {"name": "learning_rate", "val": ": float = 5e-05"}, {"name": "weight_decay", "val": ": float = 0.0"}, {"name": "adam_beta1", "val": ": float = 0.9"}, {"name": "adam_beta2", "val": ": float = 0.999"}, {"name": "adam_epsilon", "val": ": float = 1e-08"}, {"name": "max_grad_norm", "val": ": float = 1.0"}, {"name": "num_train_epochs", "val": ": float = 3.0"}, {"name": "max_steps", "val": ": int = -1"}, {"name": "lr_scheduler_type", "val": ": transformers.trainer_utils.SchedulerType | str = 'linear'"}, {"name": "lr_scheduler_kwargs", "val": ": dict[str, typing.Any] | str | None = <factory>"}, {"name": "warmup_ratio", "val": ": float = 0.0"}, {"name": "warmup_steps", "val": ": int = 0"}, {"name": "log_level", "val": ": str = 'info'"}, {"name": "log_level_replica", "val": ": str = 'silent'"}, {"name": "logging_dir", "val": ": str | None = None"}, {"name": "logging_strategy", "val": ": transformers.trainer_utils.IntervalStrategy | str = 'steps'"}, {"name": "logging_first_step", "val": ": bool = False"}, {"name": "logging_steps", "val": ": float = 500"}, {"name": "save_strategy", "val": ": transformers.trainer_utils.SaveStrategy | str = 'steps'"}, {"name": "save_steps", "val": ": float = 500"}, {"name": "save_total_limit", "val": ": int | None = None"}, {"name": "save_only_model", "val": ": bool = False"}, {"name": "restore_callback_states_from_checkpoint", "val": ": bool = False"}, {"name": "seed", "val": ": int = 42"}, {"name": "bf16", "val": ": bool = False"}, {"name": "dataloader_drop_last", "val": ": bool = False"}, {"name": "eval_steps", "val": ": float | None = None"}, {"name": "dataloader_num_workers", "val": ": int = 0"}, {"name": "dataloader_prefetch_factor", "val": ": int | None = None"}, {"name": "run_name", "val": ": str | None = None"}, {"name": "disable_tqdm", "val": ": bool | None = None"}, {"name": "remove_unused_columns", "val": ": bool | None = True"}, {"name": "label_names", "val": ": list[str] | None = None"}, {"name": "accelerator_config", "val": ": dict | str | None = None"}, {"name": "label_smoothing_factor", "val": ": float = 0.0"}, {"name": "optim", "val": ": transformers.training_args.OptimizerNames | str = 'adamw_torch'"}, {"name": "optim_args", "val": ": str | None = None"}, {"name": "report_to", "val": ": None | str | list[str] = None"}, {"name": "resume_from_checkpoint", "val": ": str | None = None"}, {"name": "gradient_checkpointing", "val": ": bool = False"}, {"name": "gradient_checkpointing_kwargs", "val": ": dict[str, typing.Any] | str | None = None"}, {"name": "use_liger_kernel", "val": ": bool | None = False"}, {"name": "average_tokens_across_devices", "val": ": bool | None = False"}, {"name": "dataloader_prefetch_size", "val": ": int = None"}, {"name": "skip_cache_push", "val": ": bool = False"}, {"name": "use_autocast", "val": ": bool = False"}, {"name": 
"zero_1", "val": ": bool = True"}, {"name": "stochastic_rounding_enabled", "val": ": bool = True"}, {"name": "optimizer_save_master_weights_in_ckpt", "val": ": bool = False"}, {"name": "tensor_parallel_size", "val": ": int = 1"}, {"name": "disable_sequence_parallel", "val": ": bool = False"}, {"name": "pipeline_parallel_size", "val": ": int = 1"}, {"name": "pipeline_parallel_num_microbatches", "val": ": int = -1"}, {"name": "kv_size_multiplier", "val": ": int | None = None"}, {"name": "num_local_ranks_per_step", "val": ": int = 8"}, {"name": "use_xser", "val": ": bool = True"}, {"name": "async_save", "val": ": bool = False"}, {"name": "fuse_qkv", "val": ": bool = False"}, {"name": "recompute_causal_mask", "val": ": bool = True"}]</parameters></docstring>


</div>

### NeuronSFTTrainer[[optimum.neuron.NeuronSFTTrainer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronSFTTrainer</name><anchor>optimum.neuron.NeuronSFTTrainer</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/sft_trainer.py#L76</source><parameters>[{"name": "model", "val": ": transformers.modeling_utils.PreTrainedModel | torch.nn.modules.module.Module | str"}, {"name": "args", "val": ": optimum.neuron.trainers.sft_trainer.SFTConfig | None = None"}, {"name": "data_collator", "val": ": typing.Optional[transformers.data.data_collator.DataCollator] = None"}, {"name": "train_dataset", "val": ": Dataset | IterableDataset | datasets.Dataset | None = None"}, {"name": "eval_dataset", "val": ": Dataset | dict[str, Dataset] | datasets.Dataset | None = None"}, {"name": "processsing_class", "val": ": transformers.tokenization_utils_base.PreTrainedTokenizerBase | transformers.processing_utils.ProcessorMixin | None = None"}, {"name": "callbacks", "val": ": list[transformers.trainer_callback.TrainerCallback] | None = None"}, {"name": "optimizers", "val": ": tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None)"}, {"name": "optimizer_cls_and_kwargs", "val": ": tuple[type[torch.optim.optimizer.Optimizer], dict[str, typing.Any]] | None = None"}, {"name": "tokenizer", "val": ": transformers.tokenization_utils_base.PreTrainedTokenizerBase | None = None"}, {"name": "peft_config", "val": ": peft.config.PeftConfig | None = None"}, {"name": "formatting_func", "val": ": typing.Optional[typing.Callable] = None"}]</parameters></docstring>

`SFTTrainer` adapted for Neuron.

It differs from the original `SFTTrainer` by:
- Using `_TrainerForNeuron.__init__()` instead of `Trainer.__init__()`
- Using `_TrainerForNeuron.train()` instead of `Trainer.train()`
- Adapting `_prepare_non_packed_dataloader` to pad to max length. In the original `SFTTrainer`, examples are
  not padded, which is an issue here because varying input shapes trigger recompilation every time.


</div>
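
A minimal usage sketch, assuming `model`, `tokenizer`, and `train_dataset` are already loaded; the configuration values are illustrative:

```python
from optimum.neuron import NeuronSFTConfig, NeuronSFTTrainer

sft_config = NeuronSFTConfig(
    output_dir="sft-output",  # hypothetical output path
    bf16=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    tensor_parallel_size=8,
    max_steps=100,
)

trainer = NeuronSFTTrainer(
    model=model,
    args=sft_config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```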

### NeuronTrainer
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_api/trainer.md

# NeuronTrainer

Training classes for AWS Trainium accelerators.

## NeuronTrainingArguments[[optimum.neuron.NeuronTrainingArguments]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronTrainingArguments</name><anchor>optimum.neuron.NeuronTrainingArguments</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/training_args.py#L51</source><parameters>[{"name": "output_dir", "val": ": str | None = None"}, {"name": "overwrite_output_dir", "val": ": bool = False"}, {"name": "do_train", "val": ": bool = False"}, {"name": "do_eval", "val": ": bool = False"}, {"name": "eval_strategy", "val": ": transformers.trainer_utils.IntervalStrategy | str = 'no'"}, {"name": "per_device_train_batch_size", "val": ": int = 1"}, {"name": "per_device_eval_batch_size", "val": ": int = 1"}, {"name": "gradient_accumulation_steps", "val": ": int = 1"}, {"name": "learning_rate", "val": ": float = 5e-05"}, {"name": "weight_decay", "val": ": float = 0.0"}, {"name": "adam_beta1", "val": ": float = 0.9"}, {"name": "adam_beta2", "val": ": float = 0.999"}, {"name": "adam_epsilon", "val": ": float = 1e-08"}, {"name": "max_grad_norm", "val": ": float = 1.0"}, {"name": "num_train_epochs", "val": ": float = 3.0"}, {"name": "max_steps", "val": ": int = -1"}, {"name": "lr_scheduler_type", "val": ": transformers.trainer_utils.SchedulerType | str = 'linear'"}, {"name": "lr_scheduler_kwargs", "val": ": dict[str, typing.Any] | str | None = <factory>"}, {"name": "warmup_ratio", "val": ": float = 0.0"}, {"name": "warmup_steps", "val": ": int = 0"}, {"name": "log_level", "val": ": str = 'info'"}, {"name": "log_level_replica", "val": ": str = 'silent'"}, {"name": "logging_dir", "val": ": str | None = None"}, {"name": "logging_strategy", "val": ": transformers.trainer_utils.IntervalStrategy | str = 'steps'"}, {"name": "logging_first_step", "val": ": bool = False"}, {"name": "logging_steps", "val": ": float = 500"}, {"name": "save_strategy", "val": ": transformers.trainer_utils.SaveStrategy | str = 'steps'"}, {"name": "save_steps", "val": ": float = 500"}, {"name": "save_total_limit", "val": ": int | None = None"}, {"name": "save_only_model", "val": ": bool = False"}, {"name": "restore_callback_states_from_checkpoint", "val": ": bool = False"}, {"name": "seed", "val": ": int = 42"}, {"name": "bf16", "val": ": bool = False"}, {"name": "dataloader_drop_last", "val": ": bool = False"}, {"name": "eval_steps", "val": ": float | None = None"}, {"name": "dataloader_num_workers", "val": ": int = 0"}, {"name": "dataloader_prefetch_factor", "val": ": int | None = None"}, {"name": "run_name", "val": ": str | None = None"}, {"name": "disable_tqdm", "val": ": bool | None = None"}, {"name": "remove_unused_columns", "val": ": bool | None = True"}, {"name": "label_names", "val": ": list[str] | None = None"}, {"name": "accelerator_config", "val": ": dict | str | None = None"}, {"name": "label_smoothing_factor", "val": ": float = 0.0"}, {"name": "optim", "val": ": transformers.training_args.OptimizerNames | str = 'adamw_torch'"}, {"name": "optim_args", "val": ": str | None = None"}, {"name": "report_to", "val": ": None | str | list[str] = None"}, {"name": "resume_from_checkpoint", "val": ": str | None = None"}, {"name": "gradient_checkpointing", "val": ": bool = False"}, {"name": "gradient_checkpointing_kwargs", "val": ": dict[str, typing.Any] | str | None = None"}, {"name": "use_liger_kernel", "val": ": bool | None = False"}, {"name": "average_tokens_across_devices", "val": ": bool | None = False"}, {"name": "dataloader_prefetch_size", "val": ": int = None"}, {"name": "skip_cache_push", "val": ": bool = False"}, {"name": "use_autocast", "val": ": bool 
= False"}, {"name": "zero_1", "val": ": bool = True"}, {"name": "stochastic_rounding_enabled", "val": ": bool = True"}, {"name": "optimizer_save_master_weights_in_ckpt", "val": ": bool = False"}, {"name": "tensor_parallel_size", "val": ": int = 1"}, {"name": "disable_sequence_parallel", "val": ": bool = False"}, {"name": "pipeline_parallel_size", "val": ": int = 1"}, {"name": "pipeline_parallel_num_microbatches", "val": ": int = -1"}, {"name": "kv_size_multiplier", "val": ": int | None = None"}, {"name": "num_local_ranks_per_step", "val": ": int = 8"}, {"name": "use_xser", "val": ": bool = True"}, {"name": "async_save", "val": ": bool = False"}, {"name": "fuse_qkv", "val": ": bool = False"}, {"name": "recompute_causal_mask", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_process_log_level</name><anchor>optimum.neuron.NeuronTrainingArguments.get_process_log_level</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/training_args.py#L702</source><parameters>[]</parameters></docstring>

Returns the log level to be used depending on whether this process is the main process of node 0, main process
of node non-0, or a non-main process.

For the main process the log level defaults to the logging level set (`logging.WARNING` if you didn't do
anything) unless overridden by `log_level` argument.

For the replica processes the log level defaults to `logging.WARNING` unless overridden by `log_level_replica`
argument.

The choice between the main and replica process settings is made according to the return value of `should_log`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_warmup_steps</name><anchor>optimum.neuron.NeuronTrainingArguments.get_warmup_steps</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/training_args.py#L724</source><parameters>[{"name": "num_training_steps", "val": ": int"}]</parameters></docstring>

Get number of steps used for a linear warmup.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>optimum.neuron.NeuronTrainingArguments.to_dict</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/training_args.py#L744</source><parameters>[]</parameters></docstring>

Serializes this instance while replacing `Enum` members by their values (for JSON serialization support). It
obfuscates token values by removing them.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_json_string</name><anchor>optimum.neuron.NeuronTrainingArguments.to_json_string</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/training_args.py#L771</source><parameters>[]</parameters></docstring>

Serializes this instance to a JSON string.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_sanitized_dict</name><anchor>optimum.neuron.NeuronTrainingArguments.to_sanitized_dict</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/training_args.py#L777</source><parameters>[]</parameters></docstring>

Sanitized serialization to use with TensorBoard’s hparams


</div></div>

## NeuronTrainer[[optimum.neuron.NeuronTrainer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronTrainer</name><anchor>optimum.neuron.NeuronTrainer</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L114</source><parameters>[{"name": "model", "val": ": transformers.modeling_utils.PreTrainedModel | torch.nn.modules.module.Module"}, {"name": "args", "val": ": NeuronTrainingArguments"}, {"name": "data_collator", "val": ": typing.Optional[transformers.data.data_collator.DataCollator] = None"}, {"name": "train_dataset", "val": ": Dataset | IterableDataset | datasets.Dataset | None = None"}, {"name": "eval_dataset", "val": ": Dataset | dict[str, Dataset] | datasets.Dataset | None = None"}, {"name": "processing_class", "val": ": transformers.tokenization_utils_base.PreTrainedTokenizerBase | transformers.image_processing_utils.BaseImageProcessor | transformers.feature_extraction_utils.FeatureExtractionMixin | transformers.processing_utils.ProcessorMixin | None = None"}, {"name": "callbacks", "val": ": list[transformers.trainer_callback.TrainerCallback] | None = None"}, {"name": "optimizers", "val": ": tuple[torch.optim.optimizer.Optimizer | None, torch.optim.lr_scheduler.LambdaLR | None] = (None, None)"}, {"name": "optimizer_cls_and_kwargs", "val": ": tuple[type[torch.optim.optimizer.Optimizer], dict[str, typing.Any]] | None = None"}, {"name": "tokenizer", "val": ": transformers.tokenization_utils_base.PreTrainedTokenizerBase | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_callback</name><anchor>optimum.neuron.NeuronTrainer.add_callback</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L367</source><parameters>[{"name": "callback", "val": ": typing.Union[typing.Type[transformers.trainer_callback.TrainerCallback], transformers.trainer_callback.TrainerCallback]"}]</parameters><paramsdesc>- **callback** (`Type[TrainerCallback] | TrainerCallback`) --
  A `TrainerCallback` class or an instance of a `TrainerCallback`. In the
  first case, will instantiate a member of that class.</paramsdesc><paramgroups>0</paramgroups></docstring>

Add a callback to the current list of `TrainerCallback`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>autocast_smart_context_manager</name><anchor>optimum.neuron.NeuronTrainer.autocast_smart_context_manager</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L727</source><parameters>[{"name": "cache_enabled", "val": ": bool | None = True"}]</parameters></docstring>

A helper wrapper that creates an appropriate context manager for `autocast` while feeding it the desired
arguments, depending on the situation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_accelerator_and_postprocess</name><anchor>optimum.neuron.NeuronTrainer.create_accelerator_and_postprocess</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L283</source><parameters>[]</parameters></docstring>
Creates a NeuronAccelerator instance and prepares the model for distributed training.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_optimizer</name><anchor>optimum.neuron.NeuronTrainer.create_optimizer</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L561</source><parameters>[]</parameters></docstring>

Setup the optimizer.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
NeuronTrainer's init through `optimizers`, or subclass and override this method in a subclass.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_optimizer_and_scheduler</name><anchor>optimum.neuron.NeuronTrainer.create_optimizer_and_scheduler</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L539</source><parameters>[{"name": "num_training_steps", "val": ": int"}]</parameters></docstring>

Setup the optimizer and the learning rate scheduler.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
NeuronTrainer's init through `optimizers`, or subclass and override this method (or `create_optimizer` and/or
`create_scheduler`) in a subclass.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_scheduler</name><anchor>optimum.neuron.NeuronTrainer.create_scheduler</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L675</source><parameters>[{"name": "num_training_steps", "val": ": int"}, {"name": "optimizer", "val": ": torch.optim.optimizer.Optimizer | None = None"}]</parameters><paramsdesc>- **num_training_steps** (int) -- The number of training steps to do.</paramsdesc><paramgroups>0</paramgroups></docstring>

Setup the scheduler. The optimizer of the trainer must have been set up either before this method is called or
passed as an argument.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_decay_parameter_names</name><anchor>optimum.neuron.NeuronTrainer.get_decay_parameter_names</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L550</source><parameters>[{"name": "model", "val": ""}]</parameters></docstring>

Get all parameter names that weight decay will be applied to.

This function filters out parameters in two ways:
1. By layer type (instances of layers specified in ALL_LAYERNORM_LAYERS)
2. By parameter name patterns (containing 'bias', 'layernorm', or 'rmsnorm')


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_learning_rates</name><anchor>optimum.neuron.NeuronTrainer.get_learning_rates</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L618</source><parameters>[]</parameters></docstring>

Returns the learning rate of each parameter from self.optimizer.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_num_trainable_parameters</name><anchor>optimum.neuron.NeuronTrainer.get_num_trainable_parameters</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L612</source><parameters>[]</parameters></docstring>

Get the number of trainable parameters.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_optimizer_cls_and_kwargs</name><anchor>optimum.neuron.NeuronTrainer.get_optimizer_cls_and_kwargs</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L642</source><parameters>[{"name": "args", "val": ": TrainingArguments"}, {"name": "model", "val": ": transformers.modeling_utils.PreTrainedModel | None = None"}]</parameters><paramsdesc>- **args** (`transformers.training_args.TrainingArguments`) --
  The training arguments for the training session.</paramsdesc><paramgroups>0</paramgroups></docstring>

Returns the optimizer class and optimizer parameters based on the training arguments.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_optimizer_group</name><anchor>optimum.neuron.NeuronTrainer.get_optimizer_group</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L626</source><parameters>[{"name": "param", "val": ": str | torch.nn.parameter.Parameter | None = None"}]</parameters><paramsdesc>- **param** (`str | torch.nn.parameter.Parameter | None`, defaults to `None`) --
  The parameter for which optimizer group needs to be returned.</paramsdesc><paramgroups>0</paramgroups></docstring>

Returns optimizer group for a parameter if given, else returns all optimizer groups for params.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_train_dataloader</name><anchor>optimum.neuron.NeuronTrainer.get_train_dataloader</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L520</source><parameters>[]</parameters></docstring>
Returns the training DataLoader with appropriate sampler and batch size.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>is_local_process_zero</name><anchor>optimum.neuron.NeuronTrainer.is_local_process_zero</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L1166</source><parameters>[]</parameters></docstring>

Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several
machines) main process.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>is_world_process_zero</name><anchor>optimum.neuron.NeuronTrainer.is_world_process_zero</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L1173</source><parameters>[]</parameters></docstring>

Whether or not this process is the global main process (when training in a distributed fashion on several
machines, this is only going to be `True` for one process).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>log</name><anchor>optimum.neuron.NeuronTrainer.log</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L1233</source><parameters>[{"name": "logs", "val": ": dict[str, float]"}]</parameters></docstring>
Log training metrics to the state history and callbacks.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>maybe_log_train_step_metrics</name><anchor>optimum.neuron.NeuronTrainer.maybe_log_train_step_metrics</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L989</source><parameters>[]</parameters></docstring>
Log training step metrics if logging is due.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>maybe_save_checkpoint</name><anchor>optimum.neuron.NeuronTrainer.maybe_save_checkpoint</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L1025</source><parameters>[]</parameters></docstring>
Save checkpoint if saving is due.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>num_examples</name><anchor>optimum.neuron.NeuronTrainer.num_examples</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L695</source><parameters>[{"name": "dataloader", "val": ": DataLoader"}]</parameters></docstring>

Helper to get the number of samples in a `~torch.utils.data.DataLoader` by accessing its dataset. When
dataloader.dataset does not exist or has no length, estimates as best it can.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>num_tokens</name><anchor>optimum.neuron.NeuronTrainer.num_tokens</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L709</source><parameters>[{"name": "train_dl", "val": ": DataLoader"}, {"name": "max_steps", "val": ": int | None = None"}]</parameters></docstring>

Helper to get the number of tokens in a `~torch.utils.data.DataLoader` by enumerating the dataloader.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pop_callback</name><anchor>optimum.neuron.NeuronTrainer.pop_callback</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L378</source><parameters>[{"name": "callback", "val": ": typing.Union[typing.Type[transformers.trainer_callback.TrainerCallback], transformers.trainer_callback.TrainerCallback]"}]</parameters><paramsdesc>- **callback** (`Type[TrainerCallback] | TrainerCallback`) --
  A `TrainerCallback` class or an instance of a `TrainerCallback`. In the
  first case, will pop the first member of that class found in the list of callbacks.</paramsdesc><paramgroups>0</paramgroups><rettype>`TrainerCallback | None`</rettype><retdesc>The callback removed, if found.</retdesc></docstring>

Remove a callback from the current list of `TrainerCallback` and returns it.

If the callback is not found, returns `None` (and no error is raised).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>remove_callback</name><anchor>optimum.neuron.NeuronTrainer.remove_callback</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L394</source><parameters>[{"name": "callback", "val": ": typing.Union[typing.Type[transformers.trainer_callback.TrainerCallback], transformers.trainer_callback.TrainerCallback]"}]</parameters><paramsdesc>- **callback** (`Type[TrainerCallback] | TrainerCallback`) --
  A `TrainerCallback` class or an instance of a `TrainerCallback`. In the
  first case, will remove the first member of that class found in the list of callbacks.</paramsdesc><paramgroups>0</paramgroups></docstring>

Remove a callback from the current list of `TrainerCallback`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_initial_training_values</name><anchor>optimum.neuron.NeuronTrainer.set_initial_training_values</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L738</source><parameters>[{"name": "args", "val": ": NeuronTrainingArguments"}, {"name": "dataloader", "val": ": DataLoader"}, {"name": "total_train_batch_size", "val": ": int"}]</parameters></docstring>

Calculates and returns the following values:
- `num_train_epochs`
- `num_update_steps_per_epoch`
- `num_examples`
- `num_train_samples`
- `epoch_based`
- `len_dataloader`
- `max_steps`


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>setup_training</name><anchor>optimum.neuron.NeuronTrainer.setup_training</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/trainers/transformers.py#L802</source><parameters>[{"name": "train_dataloader", "val": ": DataLoader"}, {"name": "max_steps", "val": ": int"}, {"name": "num_train_epochs", "val": ": int"}, {"name": "num_examples", "val": ": int"}, {"name": "total_train_batch_size", "val": ": int"}]</parameters></docstring>

Setup everything to prepare for the training loop.
This method does not return anything but initializes many attributes of the class for training.


</div></div>
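
A minimal usage sketch, assuming `model` is a custom Neuron training model and `train_dataset` is already prepared; the parallelism values are illustrative:

```python
from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

training_args = NeuronTrainingArguments(
    output_dir="trainer-output",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    tensor_parallel_size=8,
    num_train_epochs=1,
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```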

### Setting up your development environment
https://huggingface.co/docs/optimum.neuron/v0.4.0/contribute/dev_environment.md

# Setting up your development environment

You have decided to contribute to `optimum-neuron`: at this stage you should have an up-to-date copy of the `optimum-neuron`
 repository installed locally (either a clone of the [original repository](https://github.com/huggingface/optimum-neuron)
if you have write access or a clone from your own fork if you are an external contributor).

Before contributing and submitting your first pull request, you need to prepare your development environment by installing a few development tools.

## Prepare a python virtual environment

> **_NOTE:_** ❗If you are using the Hugging Face Deep Learning AMI, you can reuse the virtual environment that is automatically activated when logging to the machine and skip this step.

```shell
$ python3 -m venv .venv
$ source .venv/bin/activate
```

Note: `optimum-neuron` requires at least Python 3.10.

## Install development tools

First, you need to install the tools that are used to check that your contribution complies
with the few `optimum-neuron` styling and coding rules.

```shell
$ pip install .[quality]
$ pre-commit install
```

Then, and only if you plan to modify the `optimum-neuron` code itself, you need to
install the test environment.

```shell
$ pip install .[tests]
```

## Creating a development branch

You cannot contribute your changes directly to the `optimum-neuron` `main` branch, so you first need
to create a development branch containing your changes:

```shell
$ git checkout main
$ git pull
$ git checkout -b <my-awesome-contribution>
```

## Committing your changes

All contributions are reviewed by the `optimum-neuron` maintainers: in order to speed up the review,
you are **strongly** encouraged to submit your work as small,
[atomic](https://dev.to/samuelfaure/how-atomic-git-commits-dramatically-increased-my-productivity-and-will-increase-yours-too-4a84) changes.

`optimum-neuron` has a few styling and coding policies that are enforced by the quality tools you previously installed.

You can, however, apply the styling tools manually before committing, using explicit commands:

For any contribution:

```shell
$ pre-commit run end-of-file-fixer
$ pre-commit run trailing-whitespace
```

For python code:

```shell
$ pre-commit run ruff-check
$ pre-commit run ruff-format
```

## Submitting your pull-request

We have prepared a pull-request template with a few instructions that you should read carefully before submitting.

Thank you for contributing to `optimum-neuron`!

### Contributing Custom Models for Training
https://huggingface.co/docs/optimum.neuron/v0.4.0/contribute/contribute_for_training.md

# Contributing Custom Models for Training

This guide explains how to add custom model implementations to the `optimum/neuron/models/training/` directory. Custom models are needed to support distributed training features like tensor parallelism, pipeline parallelism, and sequence parallelism on AWS Trainium devices.

## Architecture Components

### 1. NeuronModelMixin

The `NeuronModelMixin` class provides core functionality:
- `from_pretrained()`: Loads regular Transformers weights into custom implementations
- `save_pretrained()`: Saves sharded checkpoints with consolidation metadata
- Pipeline parallelism support through `PIPELINE_*` attributes

### 2. Weight Transformation Specs

Transformation specs handle converting weights between:
- Original Transformers format → Custom parallel format (during loading)
- Custom parallel format → Original Transformers format (during checkpoint consolidation)

Key transformation spec types:
- `FusedLinearsSpec`: Handles fused linear layers (e.g., `gate_up_proj`)
- `GQAQKVColumnParallelLinearSpec`: Handles grouped query attention projections when the tensor parallel size is greater than the number of key-value heads

For complete API documentation of all transformation specs and utility functions, see the [Model Weight Transformation Specs API Reference](../training_api/transformations).

### 3. Parallel Layers

Use these parallel layers from `neuronx_distributed`:
- `ColumnParallelLinear`: Splits weight matrix along output dimension
- `RowParallelLinear`: Splits weight matrix along input dimension
- `ParallelEmbedding`: Splits embedding table across ranks
- `GQAQKVColumnParallelLinear`: Specialized for grouped query attention projections when the tensor parallel size is greater than the number of key-value heads

## Implementation Steps

### Step 1: Create Model Structure

Create a new directory: `optimum/neuron/models/training/your_model/`

**`__init__.py`**
```python
from .modeling_your_model import YourModelForCausalLM, YourModel

__all__ = ["YourModelForCausalLM", "YourModel"]
```

### Step 2: Implement the Model Building Blocks

**`modeling_your_model.py`**

#### Imports and Dependencies

```python
import torch
from torch import nn
from neuronx_distributed.parallel_layers.layers import (
    ColumnParallelLinear,
    RowParallelLinear,
    ParallelEmbedding,
)
from neuronx_distributed.modules.qkv_linear import GQAQKVColumnParallelLinear
from transformers import PreTrainedModel
from transformers.models.your_model import YourModelConfig

from ..config import TrainingNeuronConfig
from ..modeling_utils import NeuronModelMixin
from ..transformations_utils import (
    CustomModule,
    FusedLinearsSpec,
    GQAQKVColumnParallelLinearSpec,
    ModelWeightTransformationSpecs,
)
```

#### Embedding Layer
```python
class YourModelEmbeddings(nn.Module):
    def __init__(self, config, trn_config):
        super().__init__()
        self.embed_tokens = ParallelEmbedding(
            config.vocab_size,
            config.hidden_size,
            dtype=config.torch_dtype,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
        )
```


#### MLP Layer with Fused Linears

**Important**: Any module that has transformation specs must inherit from `CustomModule` to ensure proper handling of weight transformations, and the transformation specs must be defined in the `self.specs` attribute.

```python
class YourModelMLP(nn.Module, CustomModule):
    def __init__(self, config, trn_config):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.intermediate_size = config.intermediate_size
        
        # Fused gate and up projections
        self.gate_up_proj = ColumnParallelLinear(
            self.hidden_size,
            2 * self.intermediate_size,
            stride=2,  # Important for proper sharding
            bias=False,
            gather_output=False,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
            dtype=config.torch_dtype,
        )
        
        self.down_proj = RowParallelLinear(
            self.intermediate_size,
            self.hidden_size,
            bias=False,
            input_is_parallel=True,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
            dtype=config.torch_dtype,
        )
        
        # Define transformation specs
        self.specs = ModelWeightTransformationSpecs()
        self.specs.add_spec(
            FusedLinearsSpec(
                fused_linear_name="gate_up_proj",
                linear_names=["gate_proj", "up_proj"],
                bias=False,
                fuse_axis="column",  # Fuse along output dimension
                original_dims=[self.intermediate_size, self.intermediate_size],
            )
        )
```

#### Attention Layer

The attention layer implementation depends on the model's architecture and tensor parallel configuration. There are three main variants:

**1. Separate Q, K, V Projections (Default)**
```python
class YourModelAttention(nn.Module, CustomModule):
    def __init__(self, config, trn_config, layer_idx):
        super().__init__()
        self.config = config
        self.num_heads = config.num_attention_heads
        self.num_key_value_heads = config.num_key_value_heads
        self.head_dim = config.hidden_size // self.num_heads
        
        # Separate projections for Q, K, V
        self.q_proj = ColumnParallelLinear(
            config.hidden_size,
            self.num_heads * self.head_dim,
            bias=False,
            gather_output=False,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
            dtype=config.torch_dtype,
        )
        self.k_proj = ColumnParallelLinear(
            config.hidden_size,
            self.num_key_value_heads * self.head_dim,
            bias=False,
            gather_output=False,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
            dtype=config.torch_dtype,
        )
        self.v_proj = ColumnParallelLinear(
            config.hidden_size,
            self.num_key_value_heads * self.head_dim,
            bias=False,
            gather_output=False,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
            dtype=config.torch_dtype,
        )
        
        self.o_proj = RowParallelLinear(
            self.num_heads * self.head_dim,
            config.hidden_size,
            bias=False,
            input_is_parallel=True,
            sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
            dtype=config.torch_dtype,
        )
        
        # No transformation specs needed - regular parallel layers
        self.specs = ModelWeightTransformationSpecs()
```

**2. Fused QKV Projection (Multi-Head Attention)**
```python
class YourModelAttention(nn.Module, CustomModule):
    def __init__(self, config, trn_config, layer_idx):
        super().__init__()
        # ... (same setup as above)
        
        tp_size = get_tensor_model_parallel_size()
        
        # Only use fused QKV when num_heads == num_key_value_heads (no GQA)
        if trn_config.fuse_qkv and self.num_heads == self.num_key_value_heads:
            self.qkv_proj = ColumnParallelLinear(
                config.hidden_size,
                3 * self.num_heads * self.head_dim,  # Q + K + V
                stride=3,  # Important for proper sharding
                bias=False,
                gather_output=False,
                sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
                dtype=config.torch_dtype,
            )
            
            # Define transformation specs for fused QKV
            self.specs = ModelWeightTransformationSpecs()
            self.specs.add_spec(
                FusedLinearsSpec(
                    fused_linear_name="qkv_proj",
                    linear_names=["q_proj", "k_proj", "v_proj"],
                    bias=False,
                    fuse_axis="column",
                    original_dims=[self.num_heads * self.head_dim] * 3,
                )
            )
            self.split_size = self.num_heads * self.head_dim // tp_size
```

**3. GQA QKV Projection (Required for Challenging TP Configurations)**
```python
class YourModelAttention(nn.Module, CustomModule):
    def __init__(self, config, trn_config, layer_idx):
        super().__init__()
        # ... (same setup as above)
        
        tp_size = get_tensor_model_parallel_size()
        
        # Use GQA QKV when KV heads can't be evenly distributed across TP ranks
        # This happens when: num_key_value_heads < tp_size or num_key_value_heads % tp_size != 0
        self.qkv_linear = (self.num_key_value_heads < tp_size) or (self.num_key_value_heads % tp_size != 0)
        
        if self.qkv_linear:
            # Calculate KV size multiplier to ensure even distribution
            if trn_config.kv_size_multiplier is None:
                self.kv_size_multiplier = trn_config.auto_kv_size_multiplier(self.num_key_value_heads)
            else:
                self.kv_size_multiplier = trn_config.kv_size_multiplier
                
            self.qkv_proj = GQAQKVColumnParallelLinear(
                config.hidden_size,
                [self.num_heads * self.head_dim, self.num_key_value_heads * self.head_dim],
                bias=False,
                gather_output=False,
                sequence_parallel_enabled=trn_config.sequence_parallel_enabled,
                kv_size_multiplier=self.kv_size_multiplier,
                fuse_qkv=trn_config.fuse_qkv,
                dtype=config.torch_dtype,
            )
            
            # Define transformation specs for GQA QKV
            self.specs = ModelWeightTransformationSpecs()
            self.specs.add_spec(
                GQAQKVColumnParallelLinearSpec(
                    gqa_qkv_projection_name="qkv_proj",
                    query_projection_name="q_proj",
                    key_projection_name="k_proj", 
                    value_projection_name="v_proj",
                    output_projection_name="o_proj",
                    num_attention_heads=self.num_heads,
                    num_key_value_heads=self.num_key_value_heads,
                    kv_size_multiplier=self.kv_size_multiplier,
                    q_output_size_per_partition=self.qkv_proj.q_output_size_per_partition,
                    kv_output_size_per_partition=self.qkv_proj.kv_output_size_per_partition,
                    fuse_qkv=trn_config.fuse_qkv,
                )
            )
```
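As an illustration of the KV size multiplier, consider 4 KV heads with a tensor parallel size of 32. The sketch below assumes that `auto_kv_size_multiplier` picks the smallest factor that lets the replicated KV heads be distributed evenly across all ranks:

```python
num_key_value_heads = 4
tp_size = 32

# Replicate each KV head enough times that the replicated heads divide evenly across ranks.
kv_size_multiplier = tp_size // num_key_value_heads  # 8
assert (num_key_value_heads * kv_size_multiplier) % tp_size == 0
```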

**When to Use Each Variant:**

- **Separate Q, K, V**: Default approach, works for all configurations but may be less efficient
- **Fused QKV**: Use when `num_heads == num_key_value_heads` (no grouped query attention) and `fuse_qkv=True`
- **GQA QKV**: Required when using grouped query attention with challenging tensor parallel configurations where KV heads cannot be evenly distributed across TP ranks

The choice is typically determined by:
```python
tp_size = get_tensor_model_parallel_size()
use_gqa_qkv = (num_key_value_heads < tp_size) or (num_key_value_heads % tp_size != 0)
use_fused_qkv = trn_config.fuse_qkv and (num_heads == num_key_value_heads) and not use_gqa_qkv
```

### Step 3: Implement Main Model Classes

#### Base Model
```python
class YourPreTrainedModel(PreTrainedModel):
    config_class = YourModelConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["YourModelDecoderLayer"]
    _skip_keys_device_placement = "past_key_values"
    _supports_flash_attn_2 = True
    _supports_cache_class = True
    _supports_quantized_cache = True
    _supports_static_cache = True


class YourModel(NeuronModelMixin, YourPreTrainedModel):
    def __init__(self, config: YourModelConfig, trn_config: TrainingNeuronConfig):
        YourPreTrainedModel.__init__(self, config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.trn_config = trn_config
        
        self.embed_tokens = ParallelEmbedding(...)
        self.layers = nn.ModuleList([
            YourModelDecoderLayer(config, trn_config, layer_idx)
            for layer_idx in range(config.num_hidden_layers)
        ])
        self.norm = YourModelRMSNorm(...)
        
        self.post_init()
```

#### CausalLM Model
```python
class YourModelForCausalLM(NeuronModelMixin, YourPreTrainedModel):
    _tied_weights_keys = ["lm_head.weight"]
    
    # Pipeline parallelism support
    SUPPORTS_PIPELINE_PARALLELISM = True
    PIPELINE_TRANSFORMER_LAYER_CLS = YourModelDecoderLayer
    PIPELINE_INPUT_NAMES = ["input_ids", "attention_mask"]
    
    def __init__(self, config, trn_config):
        super().__init__(config)
        self.trn_config = trn_config
        self.model = YourModel(config, trn_config)
        self.vocab_size = config.vocab_size
        
        self.lm_head = ColumnParallelLinear(
            config.hidden_size,
            config.vocab_size,
            bias=False,
            gather_output=False,
            dtype=config.torch_dtype,
        )
        
        self.post_init()
```

### Step 4: Register Model

Update `optimum/neuron/models/training/__init__.py`:
```python
from .your_model import YourModelForCausalLM, YourModel

__all__ = [..., "YourModelForCausalLM", "YourModel"]
```

Update `optimum/neuron/models/training/auto_models.py`:
```python
from .your_model.modeling_your_model import YourModelForCausalLM, YourModel

# Register the base model (without head)
register_neuron_model_for_training("your_model", "model")(YourModel)

# Register the CausalLM model
register_neuron_model_for_training("your_model", "text-generation")(YourModelForCausalLM)
```

Here `"your_model"` corresponds to the `model_type` attribute of your model's configuration class.
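If you are unsure which value to use, you can read it directly from the model's configuration (the checkpoint name below is a placeholder):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("your-org/your-model-name")
print(config.model_type)  # must match the first argument of register_neuron_model_for_training
```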

## Best Practices

### 1. Parallel Layer Configuration
- Use `gather_output=False` for intermediate layers
- Set `input_is_parallel=True` for layers that receive parallel input
- Configure `sequence_parallel_enabled` consistently across layers
- Use appropriate `stride` values for proper weight sharding

### 2. Weight Transformation Specs
- Always define specs for modules that use fused or parallel layers
- Use `CustomModule` mixin for any module with transformation specs
- Ensure spec parameter names match the actual module structure
- Test both regular and LoRA weight transformations

### 3. Pipeline Parallelism
- Set `SUPPORTS_PIPELINE_PARALLELISM = True` for supported models
- Define `PIPELINE_TRANSFORMER_LAYER_CLS` as your decoder layer class
- List all input names in `PIPELINE_INPUT_NAMES`

### 4. Flash Attention Support
- Set `_supports_flash_attn_2 = True` if your model supports it
- Implement both eager and flash attention paths
- Use appropriate attention function dispatching

## Testing Your Implementation

The training tests in `tests/training/` provide a comprehensive testing framework that validates numerical correctness, distributed training scenarios, and checkpoint compatibility.
Most of the tests are not designed to be run on every custom modeling implementation, but rather to validate the core functionality of the Optimum Neuron training infrastructure.
With that in mind, here's what you need to implement for your custom modeling:

### 1. Custom Modeling Validation

The `test_custom_modeling.py` file validates that your custom implementation produces identical outputs to the original Transformers model:

Update `tests/training/test_custom_modeling.py`:
```python
CUSTOM_MODELINGS_TO_TEST = [
    # ... existing models ...
    ("YourModelForCausalLM", "your-org/your-model-name"),
]
```

**Important**: For custom modeling validation tests, use small/tiny models to ensure CI efficiency. The test models should have:
- Small vocabulary size (e.g., 1000-8000 tokens)
- Few layers (e.g., 2-4 layers)
- Small hidden dimensions (e.g., 128-512)
- Minimal attention heads (e.g., 4-8 heads)

Examples of good test models for custom modeling validation:
- `"michaelbenayoun/llama-2-tiny-4kv-heads-4layers-random"` - 4 layers, 4 KV heads
- `"michaelbenayoun/granite-tiny-4kv-heads-4layers-random"` - Tiny Granite model
- `"michaelbenayoun/qwen3-tiny-4kv-heads-4layers-random"` - Tiny Qwen3 model
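If no suitable tiny checkpoint exists for your architecture, one option is to create one from a scratch configuration and push it to the Hub. This is only a sketch: the attribute names follow Llama-style configurations and the repository names are placeholders.

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Start from the architecture's config and shrink it to test-friendly sizes.
config = AutoConfig.from_pretrained(
    "your-org/your-model-name",
    vocab_size=4096,
    num_hidden_layers=4,
    hidden_size=256,
    num_attention_heads=8,
    num_key_value_heads=4,
    intermediate_size=512,
)
tiny_model = AutoModelForCausalLM.from_config(config)
tiny_model.push_to_hub("your-org/your-model-tiny-random")
```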

The key test your model must pass:
```python
def test_custom_modeling_matches_original()  # Output matching
```

- **Numerical Correctness**: Ensures custom models match Transformers outputs exactly
- **Parallelization Support**: Tests various QKV implementations (regular, fused, GQA)
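To give a feel for what the output-matching check does, here is a heavily simplified sketch. The real test also exercises distributed configurations; `assert_outputs_match`, the random input shape, and the tolerance are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM

def assert_outputs_match(custom_model, model_name_or_path, atol=1e-5):
    # Compare logits of the custom Neuron implementation against the original
    # Transformers implementation on the same random input.
    reference = AutoModelForCausalLM.from_pretrained(model_name_or_path)
    input_ids = torch.randint(0, reference.config.vocab_size, (1, 16))
    with torch.no_grad():
        reference_logits = reference(input_ids).logits
        custom_logits = custom_model(input_ids).logits
    torch.testing.assert_close(custom_logits, reference_logits, atol=atol, rtol=0)
```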

### 2. End-to-End Training Validation

The `test_overfit.py` file validates training convergence. To include your model in end-to-end training validation, you must add it to the parametrized test cases:

Update `tests/training/test_overfit.py`:
```python
@pytest.mark.parametrize(
    "model_class_name,model_name_or_path,learning_rate,warmup_ratio,training_kwargs,use_flash_attention_2,max_expected_loss,max_length,num_steps",
    [
        # ... existing models ...
        [
            "YourModelForCausalLM",
            "your-org/your-model-name",
            1e-4,
            0.03,
            {},
            True,
            0.5,
            2048,
            50,
        ],
    ],
    ids=[
        # ... existing model IDs ...
        "your-org/your-model-name",
    ],
)
```

This validates:
- **Convergence Validation**: Ensures models can overfit simple datasets

Your model will be tested in:
```python
def test_overfit_custom_modeling_causal_lm()       # Basic training (your model included)
```

### 3. Auto Model Loading

The `test_modeling_auto.py` file validates that your model can be loaded using the `NeuronModel` and `NeuronModelForCausalLM` auto classes. To include your model in these tests, you must add it to the test cases:

Update `tests/training/test_modeling_auto.py`:
```python
@pytest.mark.parametrize("from_pretrained", [False, True], ids=["from_config", "from_pretrained"])
@distributed_test(world_size=1)
@is_trainium_test
def test_auto_model_with_supported_architecture(from_pretrained):
    trn_config = TrainingNeuronConfig()
    kwargs = {"torch_dtype": torch.bfloat16}
    for model_name_or_path in [
        "michaelbenayoun/llama-2-tiny-4kv-heads-4layers-random",
        "michaelbenayoun/granite-tiny-4kv-heads-4layers-random", 
        "michaelbenayoun/qwen3-tiny-4kv-heads-4layers-random",
        "your-org/your-model-name",  # Add your model here
    ]:
        # ... rest of test logic

@pytest.mark.parametrize("from_pretrained", [False, True], ids=["from_config", "from_pretrained"])
@distributed_test(world_size=1)
@is_trainium_test
def test_auto_model_for_causal_lm_with_supported_architecture(from_pretrained):
    trn_config = TrainingNeuronConfig()
    kwargs = {"torch_dtype": torch.bfloat16}
    for model_name_or_path in [
        "michaelbenayoun/llama-2-tiny-4kv-heads-4layers-random",
        "michaelbenayoun/granite-tiny-4kv-heads-4layers-random",
        "michaelbenayoun/qwen3-tiny-4kv-heads-4layers-random", 
        "your-org/your-model-name",  # Add your model here
    ]:
        # ... rest of test logic
```

This validates:
- **Auto Model Loading**: Tests that `NeuronModel.from_pretrained()` and `NeuronModel.from_config()` work correctly
- **Auto CausalLM Loading**: Tests that `NeuronModelForCausalLM.from_pretrained()` and `NeuronModelForCausalLM.from_config()` work correctly

### 4. Running Tests

Tests require AWS Trainium instances. Run specific test categories:

```bash
# Run all custom modeling tests
pytest tests/training/test_custom_modeling.py -v

# Run specific model tests
pytest tests/training/test_custom_modeling.py -v -k "your_model"

# Run end-to-end training validation
pytest tests/training/test_overfit.py -v
```

### 5. Test Requirements

Your implementation must:

1. **Pass numerical correctness tests** against original Transformers implementation
2. **Support parallelization strategies** (at minimum DP and TP; PP support recommended)
3. **Handle various QKV implementations** (regular, fused, GQA)
4. **Support checkpoint consolidation** for distributed training
5. **Support LoRA training** if applicable
6. **Demonstrate convergence** through overfitting tests

The testing framework ensures your custom model maintains compatibility with the existing Optimum Neuron training infrastructure while delivering expected performance and correctness guarantees.

## Common Issues

- **Weight Shape Mismatches**: Ensure transformation specs handle tensor shapes correctly
- **Pipeline Parallelism Errors**: Check that all required attributes are set
- **Memory Issues**: Consider gradient checkpointing and activation recomputation
- **Attention Compatibility**: Verify attention implementations work with your model architecture

## Additional Resources

This guide provides the foundation for implementing custom models. For complete examples and advanced patterns, reference these existing implementations:

- **LLaMA**: `optimum/neuron/models/training/llama/modeling_llama.py` - Complete implementation with regular, fused, and GQA attention, plus a fused MLP
- **Qwen3**: `optimum/neuron/models/training/qwen3/modeling_qwen3.py` - Demonstrates how to adapt the Llama implementation for Qwen3 with `q_norm` and `k_norm` layers

Key files to study:
- `optimum/neuron/models/training/modeling_utils.py` - Base `NeuronModelMixin` class
- `optimum/neuron/models/training/transformations_utils.py` - Weight transformation specifications
- `optimum/neuron/models/training/config.py` - `TrainingNeuronConfig` for parallelism settings

### Adding support for new architectures
https://huggingface.co/docs/optimum.neuron/v0.4.0/contribute/contribute_for_inference.md

# Adding support for new architectures



> **_NOTE:_** ❗This section does not apply to decoder models run with autoregressive sampling through `transformers-neuronx`. If you want to add support for these models, please open an issue on the Optimum Neuron GitHub repo, and ping maintainers for help.

You want to export and run a new model on AWS Inferentia or Trainium? Check the guideline, and submit a pull request to [🤗 Optimum Neuron's GitHub repo](https://github.com/huggingface/optimum-neuron/)!

To support a new model architecture in the Optimum Neuron library, here are the steps to follow:

1. Implement a custom Neuron configuration.
2. Export and validate the model.
3. Contribute to the GitHub repo.

## Implement a custom Neuron configuration

To support the export of a new model to a Neuron compatible format, the first thing to do is to define a Neuron configuration, describing how to export the PyTorch model by specifying:

1. The input names.
2. The output names.
3. The dummy inputs used to trace the model: the Neuron Compiler records the computational graph via tracing and works on the resulting `TorchScript` module.
4. The compilation arguments used to control the trade-off between hardware efficiency (latency, throughput) and accuracy.

Depending on the choice of model and task, we represent the data above with configuration classes. Each configuration class is associated with
a specific model architecture, and follows the naming convention `ArchitectureNameNeuronConfig`. For instance, the configuration that specifies the Neuron
export of BERT models is `BertNeuronConfig`.

Since many architectures share similar properties for their Neuron configuration, 🤗 Optimum adopts a 3-level class hierarchy:

1. Abstract and generic base classes. These handle all the fundamental features, while being agnostic to the modality (text, image, audio, etc).
2. Middle-end classes. These are aware of the modality. Multiple config classes could exist for the same modality, depending on the inputs they support. They specify which input generators should be used for generating the dummy inputs, but remain model-agnostic.
3. Model-specific classes like the `BertNeuronConfig` mentioned above. These are the ones actually used to export models.

### Example: Adding support for ESM models

Here we take the support of [ESM models](https://huggingface.co/docs/transformers/model_doc/esm#esm) as an example. Let's create an `EsmNeuronConfig` class in the `optimum/exporters/neuron/model_configs.py`.

When an ESM model is used as a text encoder, we can inherit from the middle-end class [`TextEncoderNeuronConfig`](https://github.com/huggingface/optimum-neuron/blob/v0.0.18/optimum/exporters/neuron/config.py#L36).
Since the modeling and configuration of ESM are almost the same as BERT when it is used as an encoder, we can use the `NormalizedConfigManager` with `model_type=bert` to normalize the configuration and generate dummy inputs for tracing the model.

As a final step, since `optimum-neuron` is an extension of `optimum`, we need to register the Neuron config we created to the [TasksManager](https://huggingface.co/docs/optimum/main/en/exporters/task_manager#optimum.exporters.TasksManager) with the `register_in_tasks_manager` decorator, specifying the model type and supported tasks.

```python
@register_in_tasks_manager("esm", *["feature-extraction", "fill-mask", "text-classification", "token-classification"])
class EsmNeuronConfig(TextEncoderNeuronConfig):
    NORMALIZED_CONFIG_CLASS = NormalizedConfigManager.get_normalized_config_class("bert")
    ATOL_FOR_VALIDATION = 1e-3  # absolute tolerance used when comparing against the PyTorch model on CPU

    @property
    def inputs(self) -> List[str]:
        return ["input_ids", "attention_mask"]
```

## Export and validate the model

With the Neuron configuration class implemented, run a quick test to check that it works as expected:

* Export

```bash
optimum-cli export neuron --model facebook/esm2_t33_650M_UR50D --task text-classification --batch_size 1 --sequence_length 16 esm_neuron/
```

During the export, [`validate_model_outputs`](https://github.com/huggingface/optimum-neuron/blob/7b18de9ddfa5c664c94051304c651eaf855c3e0b/optimum/exporters/neuron/convert.py#L136) is called to validate the outputs of your exported Neuron model by comparing them to the results of PyTorch on the CPU. You can also validate the model manually with:

```python
from optimum.exporters.neuron import validate_model_outputs

validate_model_outputs(
    neuron_config, base_model, neuron_model_path, neuron_named_outputs, neuron_config.ATOL_FOR_VALIDATION
)
```

* Inference (optional)

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

model = NeuronModelForSequenceClassification.from_pretrained("esm_neuron/")
tokenizer = AutoTokenizer.from_pretrained("esm_neuron/")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
logits = model(**inputs).logits
```

## Contribute to the GitHub repo

We are almost all set. Now submit a pull request to make your work accessible to all community members!

* Open an issue in the [Optimum Neuron GitHub repo](https://github.com/huggingface/optimum-neuron/issues) to describe the new feature and make it visible to Optimum Neuron's maintainers.
* Add the model to the exporter test in [`optimum-neuron/tests/exporters/exporters_utils.py`](https://github.com/huggingface/optimum-neuron/blob/v0.0.18/tests/exporters/exporters_utils.py) and the inference test in [`optimum-neuron/tests/inference/inference_utils.py`](https://github.com/huggingface/optimum-neuron/blob/v0.0.18/tests/inference/inference_utils.py).
* Open a pull request! (Don't forget to link it to the issue you opened, so that the maintainers can better track it and provide help when needed.)


<Tip>

We usually test smaller checkpoints to accelerate the CIs, you could find tiny models for testing under the [`Hugging Face Internal Testing Organization`](https://huggingface.co/hf-internal-testing).

</Tip>

You have made a new model accessible on Neuron for the community! Thanks for joining us in the endeavor of democratizing good machine learning 🤗.

### Export a model to Neuron
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/export_model.md

# Export a model to Neuron

## Summary

Exporting a PyTorch model to a Neuron model is as simple as

```bash
optimum-cli export neuron \
  --model bert-base-uncased \
  --sequence_length 128 \
  --batch_size 1 \
  bert_neuron/
```

Check out the help for more options:

```bash
optimum-cli export neuron --help
```

## Why compile to Neuron model?

AWS provides two generations of the Inferentia accelerator built for machine learning inference, offering higher throughput and lower latency at a lower cost: [inf2 (NeuronCore-v2)](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/inf2-arch.html) and [inf1 (NeuronCore-v1)](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/inf1-arch.html#aws-inf1-arch).

In production environments, to deploy 🤗 [Transformers](https://huggingface.co/docs/transformers/index) models on Neuron devices, you need to compile your models and export them to a serialized format before inference. Through Ahead-Of-Time (AOT) compilation with the Neuron Compiler ([neuronx-cc](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/compiler/neuronx-cc/index.html) or [neuron-cc](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/compiler/neuron-cc/neuron-cc.html)), your models will be converted to serialized and optimized [TorchScript modules](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html).

Although pre-compilation avoids overhead during the inference, a compiled Neuron model has some limitations:
* The input shapes and data types used during the compilation cannot be changed.
* Neuron models are specialized for each hardware and SDK version, which means:
  * Models compiled with Neuron can no longer be executed in non-Neuron environment.
  * Models compiled for inf1 (NeuronCore-v1) are not compatible with inf2 (NeuronCore-v2), and vice versa.
  * Models compiled for an SDK version are (generally) not compatible with another SDK version.

In this guide, we'll show you how to export your models to serialized models optimized for Neuron devices.

<Tip>

🤗 Optimum provides support for the Neuron export by leveraging configuration objects.
These configuration objects come ready made for a number of model architectures, and are designed to be easily extendable to other architectures.

**To check the supported architectures, go to the [configuration reference page](../package_reference/configuration).**

</Tip>

## Exporting a model to Neuron using the CLI

To export a 🤗 Transformers model to Neuron, you'll first need to install some extra dependencies:

**For Inf2**

```bash
pip install optimum-neuron[neuronx]
```

**For Inf1**

```bash
pip install optimum-neuron[neuron]
```

The Optimum Neuron export can be used through Optimum command-line:

```bash
optimum-cli export neuron --help
```

### Exporting standard (non-LLM) models

Most models present on the Hugging Face hub can be straightforwardly exported using torch trace, then converted to serialized and optimized TorchScript modules.

<Tip>

<img title="Compilation flow" alt="Compilation flow" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/optimum/neuron/inf_compile_flow.png">

**NEFF**: Neuron Executable File Format which is a binary executable on Neuron devices.
</Tip>

When exporting a model, two sets of export arguments must be passed:

- `compiler_args` are optional arguments for the compiler; they usually control how the compiler trades off inference performance (latency and throughput) against accuracy.
- `input_shapes` are mandatory static shape information that you need to send to the Neuron compiler.

Please type the following command to see all export parameters:

```bash
optimum-cli export neuron -h
```

Exporting a standard NLP model can be done as follows:

```bash
optimum-cli export neuron --model distilbert-base-uncased-distilled-squad \
                          --batch_size 1 --sequence_length 16 \
                          --auto_cast matmul --auto_cast_type fp16 \
                          distilbert_base_uncased_squad_neuron/
```

Here the model was exported with a static input shape of `(1, 16)`, and with compiler arguments specifying
that matmul operations must be performed in `float16` precision for faster inference.

<Tip>

You can also compile the model on a CPU-only instance. In this case, specify the target instance type by passing `--instance_type` from `{inf2, trn1, trn1n, trn2}`. 

If you are using a `NeuronModelForXXX` class to export the model on a CPU-only instance, you must define an environment variable `NEURON_PLATFORM_TARGET_OVERRIDE` before importing anything from the `neuronx_distributed` library, and specify the target instance type. For example:

```python
import os
os.environ["NEURON_PLATFORM_TARGET_OVERRIDE"] = "inf2"
```

</Tip>

After export, you should see the following logs which validate the model on Neuron devices by comparing with PyTorch model on CPU:

```bash
Validating Neuron model...
        -[✓] Neuron model output names match reference model (last_hidden_state)
        - Validating Neuron Model output "last_hidden_state":
                -[✓] (1, 16, 32) matches (1, 16, 32)
                -[✓] all values close (atol: 0.0001)
The Neuronx export succeeded and the exported model was saved at: distilbert_base_uncased_squad_neuron/
```

This exports a neuron-compiled TorchScript module of the checkpoint defined by the `--model` argument.

As you can see, the task was automatically detected. This was possible because the model was on the Hub. For local models, you need to provide the `--task` argument, otherwise the export defaults to the model architecture without any task-specific head:

```bash
optimum-cli export neuron --model local_path --task question-answering --batch_size 1 --sequence_length 16 --dynamic-batch-size distilbert_base_uncased_squad_neuron/
```

Note that providing the `--task` argument for a model on the Hub will disable the automatic task detection. The resulting `model.neuron` file can then be loaded and run on Neuron devices.

For each model architecture, you can find the list of supported tasks via the `~exporters.tasks.TasksManager`. For example, for DistilBERT with the Neuron export, we have:

```python
>>> from optimum.exporters.tasks import TasksManager
>>> from optimum.exporters.neuron.model_configs import *  # Register neuron specific configs to the TasksManager

>>> distilbert_tasks = list(TasksManager.get_supported_tasks_for_model_type("distilbert", "neuron").keys())
>>> print(distilbert_tasks)
['feature-extraction', 'fill-mask', 'multiple-choice', 'question-answering', 'text-classification', 'token-classification']
```

You can then pass one of these tasks to the `--task` argument in the `optimum-cli export neuron` command, as mentioned above.

Once exported, the neuron model can be used for inference directly with the `NeuronModelForXXX` class:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english_neuron/")
>>> model = NeuronModelForSequenceClassification.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english_neuron/")

>>> inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")
>>> logits = model(**inputs).logits
>>> print(model.config.id2label[logits.argmax().item()])
'POSITIVE'
```

As you see, there is no need to pass the neuron arguments used during the export as they are
saved in a `config.json` file, and will be restored automatically by `NeuronModelForXXX` class.

<Tip>
Be careful, inputs are always padded to the shapes used for the compilation, and the padding brings computation overhead.
Adjust the static shapes to be higher than the shape of the inputs that you will feed into the model during the inference, but not much more.
</Tip>

### Exporting Stable Diffusion to Neuron

With the Optimum CLI, you can compile components in the Stable Diffusion pipeline to gain acceleration on Neuron devices during inference.

So far, we support the export of the following components in the pipeline:

* CLIP text encoder
* U-Net
* VAE encoder
* VAE decoder

<Tip>

"These blocks are chosen because they represent the bulk of the compute in the pipeline, and performance benchmarking has shown that running them on Neuron yields significant performance benefit."

Besides, don't hesitate to tweak the compilation configuration to find the best tradeoff between performance vs. accuracy for your use case. By default, we suggest casting FP32 matrix multiplication operations to BF16, which offers good performance with a moderate sacrifice in accuracy. Check out the guide from the [AWS Neuron documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html#neuronx-cc-training-mixed-precision) to better understand the options for your compilation.

</Tip>

Exporting a stable diffusion checkpoint can be done using the CLI:

```bash
optimum-cli export neuron --model stabilityai/stable-diffusion-2-1-base \
  --task stable-diffusion \
  --batch_size 1 \
  --height 512 `# height in pixels of generated image, eg. 512, 768` \
  --width 512 `# width in pixels of generated image, eg. 512, 768` \
  --num_images_per_prompt 4 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron/
```

### Exporting Stable Diffusion XL to Neuron

Similar to Stable Diffusion, you can use the Optimum CLI to compile components in the SDXL pipeline for inference on Neuron devices.

We support the export of the following components in the pipeline to boost the speed:

* Text encoder
* Second text encoder
* U-Net (a three times larger UNet than the one in Stable Diffusion pipeline)
* VAE encoder
* VAE decoder

<Tip>

"Stable Diffusion XL works especially well with images between 768 and 1024."

</Tip>

Exporting a SDXL checkpoint can be done using the CLI:

```bash
optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \
  --task stable-diffusion-xl \
  --batch_size 1 \
  --height 1024 `# height in pixels of generated image, eg. 768, 1024` \
  --width 1024 `# width in pixels of generated image, eg. 768, 1024` \
  --num_images_per_prompt 4 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron/
```

### Exporting LLMs to Neuron

Just like standard NLP models, you need to specify static parameters when exporting an LLM:

- `batch_size` is the number of input sequences that the model will accept. Defaults to 1.
- `sequence_length` is the maximum number of tokens in an input sequence. Defaults to `max_position_embeddings` (`n_positions` for older models).
- `auto_cast_type` specifies the format to encode the weights. It can be one of `fp32` (`float32`), `fp16` (`float16`) or `bf16` (`bfloat16`). Defaults to `fp32`.
- `tensor_parallel_size` is the number of Neuron cores used when instantiating the model. Each Neuron core has 16 GB of memory, which means that
bigger models need to be split across multiple cores. Defaults to 1.

```bash
optimum-cli export neuron --model meta-llama/Llama-3.2-1B \
  --batch_size 1 \
  --sequence_length 4096 \
  --auto_cast_type bf16 \
  --tensor_parallel_size 2 \
  llama3_neuron/
```

<Tip>
The export of LLM models can take much longer than standard models (sometimes more than one hour).
</Tip>

As explained before, the neuron model parameters are static.
This means in particular that during inference:

- the `batch_size` of the inputs cannot exceed the `batch_size` used during export,
- the `length` of the input sequences should be lower than the `sequence_length` used during export,
- the maximum number of tokens (input + generated) cannot exceed the `sequence_length` used during export.

Once exported, neuron llm models can simply be reloaded using the `NeuronModelForCausalLM` class.
As with the original transformers models, use `generate()` instead of `forward()` to generate text sequences.

```diff
import torch
from transformers import AutoTokenizer
-from transformers import AutoModelForCausalLM
+from optimum.neuron import NeuronModelForCausalLM

# Instantiate and convert to Neuron a PyTorch checkpoint
-model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
+model = NeuronModelForCausalLM.from_pretrained("./llama3-neuron")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer.pad_token_id = tokenizer.eos_token_id

tokens = tokenizer("I really wish ", return_tensors="pt")
with torch.inference_mode():
    sample_output = model.generate(
        **tokens,
        do_sample=True,
        max_new_tokens=256,
        temperature=0.7,
    )
    outputs = [tokenizer.decode(tok) for tok in sample_output]
    print(outputs)
```

The generation is highly configurable. Please refer to https://huggingface.co/docs/transformers/generation_strategies for details.

Please be aware that:

- for each model architecture, default values are provided for all parameters, but values passed to the `generate` method will take precedence,
- the generation parameters can be stored in a `generation_config.json` file. When such a file is present in the model directory,
it is parsed to set the default parameters (values passed to the `generate` method still take precedence), as shown in the example below.
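For example, you can store default generation parameters next to the exported model with `transformers.GenerationConfig`; the values and the output directory below are only illustrative:

```python
from transformers import GenerationConfig

generation_config = GenerationConfig(do_sample=True, temperature=0.7, max_new_tokens=256)
generation_config.save_pretrained("./llama3-neuron")  # writes generation_config.json
```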

## Exporting neuron models using NeuronX TGI

The NeuronX TGI image includes not only NeuronX runtime, but also all packages and tools required to export Neuron models.

Use the following command to export a model to Neuron using a TGI image:

```
docker run --entrypoint optimum-cli \
       -v $(pwd)/data:/data \
       --privileged \
       ghcr.io/huggingface/neuronx-tgi:latest \
       export neuron \
       --model <organization>/<model> \
       --batch_size 1 \
       --sequence_length 4096 \
       --auto_cast_type fp16 \
       --tensor_parallel_size 2 \
       /data/<neuron_model_path>
```

The exported model will be saved under `./data/<neuron_model_path>`.

## Exporting options and Docker / SageMaker environment variables

You must make sure that the options used for compilation match the options used for deployment.

You can see examples of these parameters in the .env and docker-compose.yaml files in the [TGI Neuron backend documentation](https://github.com/huggingface/text-generation-inference/blob/main/docs/source/backends/neuron.md).

For Docker and SageMaker, you can see these reflected in the following options and their optimum-cli equivalents:

```
MODEL_ID = model
HF_AUTO_CAST_TYPE = auto_cast_type
MAX_BATCH_SIZE = batch_size
MAX_TOTAL_TOKENS = sequence_length
HF_NUM_CORES = num_cores
```

### Introduction
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/benchmark.md

## Introduction

In today's world, nearly every AI engineer is familiar with running inference by simply making an API call, but how is that request served optimally by the backend? How does the model provider or service you are using ensure latency and throughput requirements are met?

In this blog I will cover how to serve a model using Optimum Neuron on AWS Inferentia2 with the Hugging Face TGI container. I'll also delve into how to optimize for latency and throughput, and which decisions influence those priorities.

## Understanding the Tools

* Inferentia2 chips: Inferentia2 is the second generation AWS purpose-built Machine Learning inference accelerator.
* Optimum Neuron: The interface between the 🤗 Transformers library and AWS Accelerators including AWS Trainium and AWS Inferentia.
* Text Generation Inference (TGI) container: Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs).
* GuideLLM: A tool for evaluating and optimizing the deployment of large language models (LLMs).

The instance I am using for this experiment is an `inf2.48xlarge`. I can check the instance type and see each device by running `neuron-ls`, which gives the following output:

```
instance-type: inf2.48xlarge
+--------+--------+--------+-----------+---------+
| NEURON | NEURON | NEURON | CONNECTED |   PCI   |
| DEVICE | CORES  | MEMORY |  DEVICES  |   BDF   |
+--------+--------+--------+-----------+---------+
| 0      | 2      | 32 GB  | 11, 1     | 80:1e.0 |
| 1      | 2      | 32 GB  | 0, 2      | 90:1e.0 |
| 2      | 2      | 32 GB  | 1, 3      | 80:1d.0 |
| 3      | 2      | 32 GB  | 2, 4      | 90:1f.0 |
| 4      | 2      | 32 GB  | 3, 5      | 80:1f.0 |
| 5      | 2      | 32 GB  | 4, 6      | 90:1d.0 |
| 6      | 2      | 32 GB  | 5, 7      | 20:1e.0 |
| 7      | 2      | 32 GB  | 6, 8      | 20:1f.0 |
| 8      | 2      | 32 GB  | 7, 9      | 10:1e.0 |
| 9      | 2      | 32 GB  | 8, 10     | 10:1f.0 |
| 10     | 2      | 32 GB  | 9, 11     | 10:1d.0 |
| 11     | 2      | 32 GB  | 10, 0     | 20:1d.0 |
+--------+--------+--------+-----------+---------+
```

## Setup and Installation

First, I ran the following commands to install the necessary dependencies, and to pull the container used both to compile the model and to serve it for benchmarking.

`!pip install hf_transfer guidellm==0.1.0`
`!git clone https://github.com/huggingface/optimum-neuron.git`
`!docker pull ghcr.io/huggingface/text-generation-inference:latest-neuron`

Depending on the model, optionally configure your HF_TOKEN like so:

`!export HF_TOKEN=YOUR_HF_TOKEN`

## Model Compilation and Deployment

For my use case, I needed to compile the model with specific parameters. It is important to mention that compilation is not always needed: if an already cached configuration fits your needs, Optimum will use it by default.

From the docs: "The Neuron Model Cache is a remote cache for compiled Neuron models in the `neff` format. It is integrated into the `NeuronTrainer` and `NeuronModelForCausalLM` classes to enable loading pretrained models from the cache instead of compiling them locally."

Now I compile the model I have selected, `meta-llama-3.1-8b-instruct`, with the following command:

```bash
!docker run -p 8080:80 -e HF_TOKEN=YOUR_TOKEN \
-v $(pwd):/data \
--device=/dev/neuron0 \
--device=/dev/neuron1 \
--device=/dev/neuron2 \
--device=/dev/neuron3 \
--device=/dev/neuron4 \
--device=/dev/neuron5 \
--device=/dev/neuron6 \
--device=/dev/neuron7 \
--device=/dev/neuron8 \
--device=/dev/neuron9 \
--device=/dev/neuron10 \
--device=/dev/neuron11 \
-ti \
--entrypoint "optimum-cli" ghcr.io/huggingface/text-generation-inference:latest-neuron \
export neuron --model "meta-llama/Meta-Llama-3.1-8B-Instruct" \
--sequence_length 16512 \
--batch_size 8 \
--num_cores 8 \
/data/exportedmodel/
```

Take note that for my use case, I have decided to use a batch size of 8 with a tensor parallel degree of 8. Since an inf2.48xlarge has 24 cores, I can use a data parallel degree of 3, which means I will have 3 copies of my model across the instance.
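As a quick sanity check on that layout (a back-of-the-envelope sketch using the figures from this setup):

```python
cores_per_instance = 24   # inf2.48xlarge: 12 Neuron devices x 2 cores each
cores_per_replica = 8     # --num_cores used at compile time (tensor parallel degree)
replicas = cores_per_instance // cores_per_replica
print(replicas)  # 3 model copies, one per TGI container behind the load balancer
```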

## Optimizing Batch Size for Maximum Throughput

When optimizing hardware utilization for cost-efficiency, particularly for the inf2.48xlarge instance at $12.98 per hour on-demand, the roofline model is a valuable framework.

The roofline model defines theoretical performance bounds. On one extreme, memory-bound workloads are limited by memory capacity, necessitating frequent read/write operations. On the other, compute-bound workloads fully utilize the accelerator's compute capabilities, maximizing on-device data processing.
Batch size is a key lever for controlling this balance. Larger batch sizes tend to shift workloads towards being compute-bound, while smaller batch sizes may result in more memory-bound operations.
That said, maximizing batch size is not always viable: you need to keep the batch size within your latency budget (the time you are willing to take to return a response). For more information on this topic, check out this resource:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-features/neuroncore-batching.html

## Creating Files for Serving

Several files are needed to ensure our configuration is set up properly, and that the model I compiled is used rather than the cached configuration.

First, I'll create my `.env` file, which specifies my batch size, precision, etc. Note that since I compiled my own model, I changed the `MODEL_ID` from the usual Hugging Face repo name to the container volume location I specified in the compilation command.

```
MODEL_ID='/data/exportedmodel'
HF_AUTO_CAST_TYPE='bf16'
MAX_BATCH_SIZE=8
MAX_INPUT_TOKENS=16000
MAX_TOTAL_TOKENS=16512
```

Next, I create the benchmark.sh script with my desired settings:

```bash
#!/bin/bash

model=${1:-meta-llama/Meta-Llama-3.1-8B-Instruct}

date_str=$(date '+%Y-%m-%d-%H-%M-%S')
output_path="${model//\//_}#${date_str}_guidellm_report.json"

export HF_TOKEN=YOUR_TOKEN

export GUIDELLM__NUM_SWEEP_PROFILES=1
export GUIDELLM__MAX_CONCURRENCY=128
export GUIDELLM__REQUEST_TIMEOUT=60

guidellm \
 --target "http://localhost:8080/v1" \
 --model ${model} \
 --data-type emulated \
 --data "prompt_tokens=15900,prompt_tokens_variance=100,generated_tokens=450,generated_tokens_variance=50" \
 --output-path ${output_path}
```

Take note of the parameters passed via the `--data` flag. As my use case involves long prompts and long generations, I have set `prompt_tokens` and `generated_tokens` accordingly. Remember to set these according to your use case and the input/output token load you expect.
Based on these numbers, GuideLLM will generate prompts of random sizes in a normal distribution around 15900 tokens, and request a random number of generated tokens in a normal distribution around 450 tokens.

The Docker Compose file defines the data parallel layout by specifying the number of devices allocated to each container. This is also where I specify the load balancer.

```
version: '3.7'

services:
  tgi-1:
    image: ghcr.io/huggingface/text-generation-inference:latest-neuron
    ports:
      - "8081:8081"
    volumes:
      - $PWD:/data
    environment:
      - PORT=8081
      - MODEL_ID=${MODEL_ID}
      - HF_AUTO_CAST_TYPE=${HF_AUTO_CAST_TYPE}
      - HF_NUM_CORES=8
      - MAX_BATCH_SIZE=${MAX_BATCH_SIZE}
      - HF_TOKEN=YOUR_TOKEN
      - MAX_INPUT_TOKENS=${MAX_INPUT_TOKENS}
      - MAX_TOTAL_TOKENS=${MAX_TOTAL_TOKENS}
      - MAX_CONCURRENT_REQUESTS=512
    devices:
      - "/dev/neuron0"
      - "/dev/neuron1"
      - "/dev/neuron2"
      - "/dev/neuron3"

  tgi-2:
    image: ghcr.io/huggingface/text-generation-inference:latest-neuron
    ports:
      - "8082:8082"
    volumes:
      - $PWD:/data
    environment:
      - PORT=8082
      - MODEL_ID=${MODEL_ID}
      - HF_AUTO_CAST_TYPE=${HF_AUTO_CAST_TYPE}
      - HF_NUM_CORES=8
      - MAX_BATCH_SIZE=${MAX_BATCH_SIZE}
      - HF_TOKEN=YOUR_TOKEN
      - MAX_INPUT_TOKENS=${MAX_INPUT_TOKENS}
      - MAX_TOTAL_TOKENS=${MAX_TOTAL_TOKENS}
      - MAX_CONCURRENT_REQUESTS=512
    devices:
      - "/dev/neuron4"
      - "/dev/neuron5"
      - "/dev/neuron6"
      - "/dev/neuron7"

  tgi-3:
    image: ghcr.io/huggingface/text-generation-inference:latest-neuron
    ports:
      - "8083:8083"
    volumes:
      - $PWD:/data
    environment:
      - PORT=8083
      - MODEL_ID=${MODEL_ID}
      - HF_AUTO_CAST_TYPE=${HF_AUTO_CAST_TYPE}
      - HF_NUM_CORES=8
      - MAX_BATCH_SIZE=${MAX_BATCH_SIZE}
      - HF_TOKEN=YOUR_TOKEN
      - MAX_INPUT_TOKENS=${MAX_INPUT_TOKENS}
      - MAX_TOTAL_TOKENS=${MAX_TOTAL_TOKENS}
      - MAX_CONCURRENT_REQUESTS=512
    devices:
      - "/dev/neuron8"
      - "/dev/neuron9"
      - "/dev/neuron10"
      - "/dev/neuron11"

  loadbalancer:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - tgi-1
      - tgi-2
      - tgi-3
    deploy:
      placement:
        constraints: [node.role == manager]
```

Lastly, I define the nginx.conf for the load balancer:

```
### Nginx TGI Load Balancer
events {}
http {
  upstream tgicluster {
    server tgi-1:8081;
    server tgi-2:8082;
    server tgi-3:8083;
  }
  server {
    listen 80;
    location / {
      proxy_pass http://tgicluster;
    }
  }
}
```

## Benchmarking with GuideLLM

Now that I have defined the necessary files, I start serving my optimum-neuron model with the TGI backend.

`!docker compose -f docker-compose.yaml --env-file .env up`

As a sanity check, I can watch the output of the above command to ensure that each container, as well as the load balancer, starts properly.
Once the containers are up, I can begin benchmarking using the previously defined benchmarking script.

`!bash benchmark.sh "meta-llama/Meta-Llama-3.1-8B-Instruct"`

A colorful stdout will begin to populate the terminal as guidellm begins to test your model serving setup.

## Performance Analysis

After approximately 15-20 minutes, the benchmark completes and displays the following detailed breakdown in the terminal:

```
╭─ Benchmarks ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ [15:02:17] 100% synchronous (0.10 req/sec avg)│
│ [15:04:17] 100% throughput (0.85 req/sec avg)│
│ [15:05:25] 100% constant@0.85 req/s (0.77 req/sec avg) │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
 Generating report... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ (3/3) [ 0:05:04 < 0:00:00 ]
╭─ GuideLLM Benchmarks Report (meta-llama_Meta-Llama-3.1-8B-Instruct#2025-05-27-15-02-11_guidellm_report.json) ──────────────────────────────────────────────────────────────────────────────────╮
│ ╭─ Benchmark Report 1 ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │
│ │ Backend(type=openai_server, target=http://localhost:8080/v1, model=meta-llama/Meta-Llama-3.1-8B-Instruct) │ │
│ │ Data(type=emulated, source=prompt_tokens=15900,prompt_tokens_variance=100,generated_tokens=450,generated_tokens_variance=50, tokenizer=meta-llama/Meta-Llama-3.1-8B-Instruct) │ │
│ │ Rate(type=sweep, rate=None) │ │
│ │ Limits(max_number=None requests, max_duration=120 sec) │ │
│ │ │ │
│ │ │ │
│ │ Requests Data by Benchmark │ │
│ │ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━┓ │ │
│ │ ┃ Benchmark ┃ Requests Completed ┃ Request Failed ┃ Duration ┃ Start Time ┃ End Time ┃ │ │
│ │ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━┩ │ │
│ │ │ synchronous │ 11/11 │ 0/11 │ 113.56 sec │ 15:02:17 │ 15:04:11 │ │ │
│ │ │ asynchronous@0.85 req/sec │ 88/88 │ 0/88 │ 114.59 sec │ 15:05:25 │ 15:07:19 │ │ │
│ │ │ throughput │ 55/55 │ 0/55 │ 64.83 sec │ 15:04:17 │ 15:05:22 │ │ │
│ │ └───────────────────────────┴────────────────────┴────────────────┴────────────┴────────────┴──────────┘ │ │
│ │ │ │
│ │ Tokens Data by Benchmark │ │
│ │ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │ │
│ │ ┃ Benchmark ┃ Prompt ┃ Prompt (1%, 5%, 50%, 95%, 99%) ┃ Output ┃ Output (1%, 5%, 50%, 95%, 99%) ┃ │ │
│ │ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ │
│ │ │ synchronous │ 15902.82 │ 15896.0, 15896.0, 15902.0, 15913.0, 15914.6 │ 293.09 │ 70.3, 119.5, 315.0, 423.5, 443.1 │ │ │
│ │ │ asynchronous@0.85 req/sec │ 15899.06 │ 15877.4, 15879.4, 15898.5, 15918.0, 15919.8 │ 288.75 │ 24.6, 74.1, 298.5, 452.6, 459.1 │ │ │
│ │ │ throughput │ 15899.22 │ 15879.5, 15883.7, 15898.0, 15914.6, 15920.5 │ 294.24 │ 59.1, 114.9, 285.0, 452.9, 456.4 │ │ │
│ │ └───────────────────────────┴──────────┴─────────────────────────────────────────────┴────────┴──────────────────────────────────┘ │ │
│ │ │ │
│ │ Performance Stats by Benchmark │ │
│ │ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │ │
│ │ ┃ ┃ Request Latency [1%, 5%, 10%, 50%, 90%, 95%, 99%] ┃ Time to First Token [1%, 5%, 10%, 50%, 90%, 95%, ┃ Inter Token Latency [1%, 5%, 10%, 50%, 90% 95%, ┃ │ │
│ │ ┃ Benchmark ┃ (sec) ┃ 99%] (ms) ┃ 99%] (ms) ┃ │ │
│ │ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ │
│ │ │ synchronous │ 3.68, 5.13, 6.94, 10.91, 13.51, 14.26, 14.87 │ 1563.3, 1569.2, 1576.5, 1589.4, 1594.0, 1595.3, │ 23.2, 28.2, 29.4, 29.8, 30.3, 31.7, 36.5 │ │ │
│ │ │ │ │ 1596.4 │ │ │ │
│ │ │ asynchronous@0.85 req/sec │ 2.62, 6.55, 9.40, 20.66, 30.60, 32.78, 35.07 │ 1594.1, 1602.5, 1605.7, 1629.7, 4650.1, 4924.1, │ 0.2, 0.2, 0.2, 34.3, 44.9, 54.5, 1613.9 │ │ │
│ │ │ │ │ 5345.6 │ │ │ │
│ │ │ throughput │ 18.29, 21.24, 23.81, 44.60, 61.50, 62.80, 63.72 │ 2157.6, 9185.1, 12220.5, 23333.5, 44214.1, │ 28.2, 31.5, 33.1, 39.1, 59.0, 65.2, 1604.6 │ │ │
│ │ │ │ │ 45329.8, 51276.9 │ │ │ │
│ │ └───────────────────────────┴───────────────────────────────────────────────────┴───────────────────────────────────────────────────┴────────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Performance Summary by Benchmark │ │
│ │ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┓ │ │
│ │ ┃ Benchmark ┃ Requests per Second ┃ Request Latency ┃ Time to First Token ┃ Inter Token Latency ┃ Output Token Throughput ┃ │ │
│ │ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ │
│ │ │ synchronous │ 0.10 req/sec │ 10.32 sec │ 1585.08 ms │ 29.81 ms │ 28.39 tokens/sec │ │ │
│ │ │ asynchronous@0.85 req/sec │ 0.77 req/sec │ 20.77 sec │ 2401.32 ms │ 63.69 ms │ 221.75 tokens/sec │ │ │
│ │ │ throughput │ 0.85 req/sec │ 43.78 sec │ 24624.46 ms │ 65.18 ms │ 249.64 tokens/sec │ │ │
│ │ └───────────────────────────┴─────────────────────┴─────────────────┴─────────────────────┴─────────────────────┴─────────────────────────┘ │ │
│ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

Unpacking the results, we get quite a few useful data points. Under the hood, GuideLLM runs three separate "loads" against which to benchmark the system:

1. Synchronous - Serving one request at a time
2. Asynchronous - Serving multiple requests at once at a locked-in req/sec (0.85 in this case)
3. Throughput - Serving the maximum number of requests that the system can sustain

Each of these tests reports several metrics, such as how many requests succeeded versus failed, the time to first token, prompt and output sizes, and more.
For my experiment, I can see that under maximum load, I can serve up to 0.85 requests per second at a maximum latency of just under 44 seconds per request. Depending on my latency budget, the next step would be to increase my batch size if I can tolerate longer response times and want more throughput, or to lower my batch size to decrease latency at the cost of potentially reducing throughput.
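One way to turn these numbers into a cost figure is a back-of-the-envelope calculation using the on-demand price quoted earlier and the measured output token throughput (the resulting figure is illustrative, not a measured cost):

```python
price_per_hour = 12.98             # inf2.48xlarge on-demand, USD
output_tokens_per_second = 249.64  # throughput benchmark above
tokens_per_hour = output_tokens_per_second * 3600
cost_per_million_output_tokens = price_per_hour / tokens_per_hour * 1_000_000
print(round(cost_per_million_output_tokens, 2))  # ~14.44 USD per million output tokens
```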

Lastly, the large input and output token counts required by my workload directly affect the benchmark results; in particular, the time needed to encode the input context accounts for most of the benchmark time.

## Conclusion

In this blog post, I took you through how to compile and load an Optimum Neuron model, how to serve it with the HuggingFace Text Generation Inference container, and how to benchmark your settings to optimize for your workload.

## References

https://huggingface.co/docs/optimum-neuron/en/guides/cache_system 
https://github.com/huggingface/optimum-neuron/tree/main/benchmark/text-generation-inference/performance 
https://github.com/vllm-project/guidellm

### Neuron Model Cache
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/cache_system.md

# Neuron Model Cache

## Why Use the Cache?

- **Problem**: Neuron compilation takes 30-60 minutes for large models
- **Solution**: Download pre-compiled models in seconds

The cache system stores compiled Neuron models on HuggingFace Hub, eliminating recompilation time for your team. When you train or load a model, the system automatically checks for cached versions before starting the expensive compilation process.

**Key Benefits:**
- **Time savings**: download compiled models in seconds vs. hours of compilation
- **Team collaboration**: share compiled models across team members and instances
- **Cost reduction**: avoid repeated compilation costs on cloud instances
- **Automatic operation**: works transparently with existing code

## Quick Start

### Training
```python
from optimum.neuron import NeuronTrainer

# Cache works automatically - no configuration needed
trainer = NeuronTrainer(model=model, args=training_args)
trainer.train()  # Downloads cached models if available
```

### Inference
```python
from optimum.neuron import NeuronModelForCausalLM

# Cache works automatically
model = NeuronModelForCausalLM.from_pretrained("model_id")
```

That's it! The cache works automatically for supported model classes.

## Supported Models

| Model Class | Cache Support | Use Case | Notes |
|-------------|---------------|----------|-------|
| `NeuronTrainer` | ✅ Full | Training | Auto download + upload during training |
| `NeuronModelForCausalLM` | ✅ Full | Inference | Auto download for inference |
| Other `NeuronModelForXXX` | ❌ None | Inference | Use different export mechanism, no cache integration |

<Tip warning={true}>

**Important Limitation**: Models like `NeuronModelForSequenceClassification`, `NeuronModelForQuestionAnswering`, etc. use a different compilation path that doesn't integrate with the cache system. Only `NeuronModelForCausalLM` and training workflows support caching.

</Tip>

## How It Works

The cache system operates on two levels to minimize compilation time:

**Cache Priority** (fastest to slowest):
1. **Local cache** → instant access from `/var/tmp/neuron-compile-cache`
2. **Hub cache** → download in seconds from HuggingFace Hub
3. **Compile from scratch** → 30-60 minutes for large models

**What Gets Cached**: the system caches **NEFF files** (Neuron Executable File Format) - the compiled binary artifacts that run on Neuron cores, not the original model files.

**Cache Identification**: each cached compilation gets a unique hash based on:
- **Model factors**: architecture, precision (fp16/bf16), input shapes, task type
- **Compilation factors**: NeuronX compiler version, number of cores, optimization flags
- **Environment factors**: model checkpoint revision, Optimum Neuron version

This means even small changes to your setup may require recompilation, but identical configurations will always hit the cache.
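
For example, when exporting a model for inference you can pin these factors explicitly; reusing the exact same values on another instance should then hit the cache instead of recompiling. This is a minimal sketch: the model id and parameter values are illustrative, and the exact export keyword arguments can vary between `optimum-neuron` versions.

```python
from optimum.neuron import NeuronModelForCausalLM

# Illustrative export call: the checkpoint (and its revision), input shapes and
# compiler options below all feed into the cache hash, so reusing exactly the
# same values on another instance lets it download the cached NEFF files
# instead of recompiling.
model = NeuronModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",     # example checkpoint
    export=True,
    batch_size=1,            # input shapes are part of the hash
    sequence_length=2048,
    num_cores=2,             # compilation settings are part of the hash
    auto_cast_type="bf16",
)
```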


## Private Cache Setup

The default public cache (`aws-neuron/optimum-neuron-cache`) is **read-only** for users - you can download cached models but cannot upload your own compilations. This public cache only contains models compiled by the Optimum team for common configurations.

For most use cases, you'll want to create a **private cache repository** where you can store your own compiled models.

**Why private cache?**
- **Upload your compilations**: store models you compile for team reuse
- **Private models**: keep proprietary model compilations secure
- **Team collaboration**: share compiled artifacts across team members and CI/CD
- **Custom configurations**: cache models with your specific batch sizes, sequence lengths, etc.

### Method 1: CLI Setup (Recommended)

```bash
# Create private cache repository
optimum-cli neuron cache create

# Set as default cache
optimum-cli neuron cache set your-org/your-cache-name
```

### Method 2: Environment Variable

```bash
# Use for single training run
CUSTOM_CACHE_REPO="your-org/your-cache" python train.py

# Or export for session
export CUSTOM_CACHE_REPO="your-org/your-cache"
```

**Prerequisites:**
- Login: `huggingface-cli login`
- write access to cache repository

## CLI Commands

```bash
# Create new cache repository
optimum-cli neuron cache create [-n NAME] [--public]

# Set default cache repository
optimum-cli neuron cache set REPO_NAME

# Search for cached models
optimum-cli neuron cache lookup MODEL_ID

# Sync local cache with Hub
optimum-cli neuron cache synchronize
```

## Advanced Usage

### Use the Cache in Training Loops

If you do not use the `NeuronTrainer` class, you can still leverage the cache system in your custom training loops. This is useful when you need more control over the training process or when integrating with custom training frameworks while still benefiting from cached compilations.

**When to use this approach:**
- custom training loops that don't fit the `NeuronTrainer` pattern
- advanced optimization scenarios requiring fine-grained control

**Note**: For most use cases, `NeuronTrainer` handles caching automatically and is the recommended approach.

```python
from optimum.neuron.cache import hub_neuronx_cache, synchronize_hub_cache
from optimum.neuron.cache.entries import SingleModelCacheEntry
from optimum.neuron.cache.training import patch_neuron_cc_wrapper

# Create cache entry
cache_entry = SingleModelCacheEntry(model_id, task, config, neuron_config)

# The NeuronX compiler will use the Hugging Face Hub cache system
with patch_neuron_cc_wrapper():
    # The compiler will check the specified remote cache for pre-compiled NEFF files
    with hub_neuronx_cache(entry=cache_entry, cache_repo_id="my-org/cache"):
        model = training_loop()  # Will use specified cache

# Synchronize local cache with Hub
synchronize_hub_cache(cache_repo_id="my-org/cache")
```

### Cache Lookup

The inference cache includes a **registry** that lets you search for compatible pre-compiled models before attempting compilation. This is especially useful for inference where you want to avoid compilation altogether.

```bash
optimum-cli neuron cache lookup meta-llama/Llama-2-7b-chat-hf
```

**Important**: Finding entries doesn't guarantee cache hits. Your exact configuration must match the cached parameters, including compiler version and model revision.

## CI/CD Integration

The cache system works seamlessly in automated environments:

**Environment Variables**: use `CUSTOM_CACHE_REPO` to specify cache repository in CI workflows
```bash
# In your CI configuration
CUSTOM_CACHE_REPO="your-org/your-cache" python train.py
```

**Authentication**: ensure your CI environment has access to your private cache repository:
- Set `HF_TOKEN` environment variable with appropriate read/write permissions
- For GitHub Actions, store as a repository secret

**Best Practices**:
- use separate cache repositories for different environments (dev/staging/prod)
- consider cache repository permissions when setting up automated workflows
- monitor cache repository size in long-running CI workflows

## Troubleshooting

### "Cache repository does not exist"
```txt
Fix: Check repository name and login status
→ huggingface-cli login
→ Verify repo format: org/repo-name
```

### "Graph will be recompiled"
```txt
Cause: No cached model matches your exact configuration
Fix: Use lookup to find compatible configurations
→ optimum-cli neuron cache lookup MODEL_ID
```

### Cache not uploading during training
```txt
Cause: No write permissions to cache repository
Fix: Verify access and authentication
→ huggingface-cli whoami
→ Check cache repo permissions
```

### Slow downloads

```txt
Cause: Large compiled models (GBs) downloading
Fix: Ensure good internet connection
→ Monitor logs for download progress
```

### Clear corrupted local cache
```bash
rm -rf /var/tmp/neuron-compile-cache/*
```

### optimum-neuron plugin for vLLM
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/vllm_plugin.md

# optimum-neuron plugin for vLLM

The `optimum-neuron` package includes a [vLLM](https://docs.vllm.ai/en/latest/) plugin
that registers an 'optimum-neuron' vLLM platform specifically designed to ease the deployment
 of models hosted on the Hugging Face hub to AWS Trainium and Inferentia.

This platform supports two modes of operation:
- it can be used for inference with pre-exported Neuron models directly from the hub,
- it also allows simplified deployment of vanilla models, without recompilation, using [cached artifacts](#hugging-face-neuron-cache).

Notes:
- only a relevant subset of all possible configurations for a given model is cached,
- you can use the `optimum-cli` to get all [cached configurations](https://huggingface.co/docs/optimum-neuron/guides/cache_system#neuron-model-cache-lookup-inferentia-only) for each model,
- to deploy models that are not cached on the Hugging Face hub, you need to [export](https://huggingface.co/docs/optimum-neuron/main/en/guides/export_model) them beforehand.

## Setup

The easiest way to use the `optimum-neuron` vLLM platform is to launch an Amazon EC2 instance using the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2). If you are not using this AMI, you can install the required functionality into your existing Neuron environment with `pip install optimum-neuron[neuronx,vllm]`.

Note: Trn2 instances are not supported by the `optimum-neuron` platform yet.

- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html) to connect to the instance
- Once inside your instance, activate the pre-installed `optimum-neuron` virtual environment by running

```console
source /opt/aws_neuronx_venv_pytorch_2_7/bin/activate
```

## Generating content programmatically

The easiest way to test a model is to use the python API:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="unsloth/Llama-3.2-1B-Instruct",
          max_num_seqs=4,
          max_model_len=4096,
          tensor_parallel_size=2)

outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

## Serving a model

The easiest way to serve a model is to use the `optimum-cli`:

```console
optimum-cli neuron serve --model=<model_name_or_path>
```

The model can be a pre-exported neuron model or a standard hub model.

When deploying a standard hub model, you can customize the way it will be exported:

```console
optimum-cli neuron serve \
    --model="unsloth/Llama-3.1-1B-Intruct" \
    --batch_size=4 \
    --sequence_length=4096 \
    --tensor_parallel_size=2 \
    --dtype="bfloat16"
```

Note: by default `optimum-cli` will only `serve` standard models for which a cached configuration exists.
This behaviour can be overridden using the `--allow_non_cached_model` argument.

If you omit a parameter, `optimum-neuron` will select a default value for you based
on the deployment target, prioritizing cached configurations.

Use the following command to test the model:

```console
curl 127.0.0.1:8080/v1/completions \
    -H 'Content-Type: application/json' \
    -X POST \
    -d '{"prompt":"One of my fondest memory is", "temperature": 0.8, "max_tokens":128}'
```
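
Because the server exposes an OpenAI-compatible API, you can also query it from Python with the `openai` client. This is a small sketch assuming the `serve` command above is listening on port 8080 and the model was deployed under its hub id:

```python
from openai import OpenAI

# Local server started by `optimum-cli neuron serve`; no real API key is needed.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

completion = client.completions.create(
    model="unsloth/Llama-3.2-1B-Instruct",  # assumed to match the deployed model
    prompt="One of my fondest memories is",
    temperature=0.8,
    max_tokens=128,
)
print(completion.choices[0].text)
```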

## Custom deployment for advanced users

You can also launch an OpenAI-compatible inference server directly using the vLLM entry points:

```console
python -m vllm.entrypoints.openai.api_server \
    --model="unsloth/Llama-3.2-1B-Instruct" \
    --max-num-seqs=4 \
    --max-model-len=4096 \
    --tensor-parallel-size=2 \
    --port=8080
```

### Inference pipelines with AWS Neuron (Inf2/Trn1)
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/pipelines.md

# Inference pipelines with AWS Neuron (Inf2/Trn1)

The `pipeline()` function makes it simple to use models from the [Model Hub](https://huggingface.co/models)
for accelerated inference on a variety of tasks such as text classification, question answering and image classification.

<Tip>

You can also use the
[pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines) function from
Transformers and provide your NeuronModel model class.

</Tip>

Currently the supported tasks are:

* `feature-extraction`
* `fill-mask`
* `text-classification`
* `token-classification`
* `question-answering`
* `text-generation`
* `image-classification`
* `image-segmentation`
* `object-detection`
* `automatic-speech-recognition`
* `audio-classification`

## Optimum pipeline usage

While each task has an associated pipeline class, it is simpler to use the general `pipeline()` function which wraps all the task-specific pipelines in one object.
The `pipeline()` function automatically loads a default model and tokenizer/feature-extractor capable of performing inference for your task.

1. Start by creating a pipeline by specifying an inference task:

```python
>>> from optimum.neuron.pipelines import pipeline

>>> classifier = pipeline(task="text-classification")
```

2. Pass your input text/image to the `pipeline()` function:

```python
>>> classifier("I like you. I love you.")
[{'label': 'POSITIVE', 'score': 0.9998838901519775}]
```

_Note: The default models used in the `pipeline()` function are not optimized for inference or quantized, so there won't be a performance improvement compared to their PyTorch counterparts._

### Using vanilla Transformers model and converting to AWS Neuron

The `pipeline()` function accepts any supported model from the [Hugging Face Hub](https://huggingface.co/models).
There are tags on the Model Hub that allow you to filter for a model you'd like to use for your task.

<Tip>

To be able to load the model with the Neuron Runtime, the export to neuron needs
to be supported for the considered architecture.

You can check the list of supported architectures
[here](../package_reference/configuration#supported-architectures).

</Tip>

Once you have picked an appropriate model, you can create the `pipeline()` by specifying the model repo:

```python
>>> from optimum.neuron.pipelines import pipeline

# The model will be loaded into a NeuronModelForQuestionAnswering.
>>> neuron_qa = pipeline("question-answering", model="deepset/roberta-base-squad2", export=True)
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = neuron_qa(question=question, context=context)
```

It is also possible to load it with the `from_pretrained(model_name_or_path, export=True)`
method associated with the `NeuronModelForXXX` class.

For example, here is how you can load the `NeuronModelForQuestionAnswering` class for question answering:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForQuestionAnswering, pipeline

>>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

>>> # Loading the PyTorch checkpoint and converting to the neuron format by providing export=True
>>> model = NeuronModelForQuestionAnswering.from_pretrained(
...     "deepset/roberta-base-squad2",
...     export=True
... )

>>> neuron_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = neuron_qa(question=question, context=context)
```
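
To avoid re-exporting the model on every run, you can save the compiled artifacts and reload them later. A small sketch, where the local directory name is arbitrary:

```python
>>> # Save the compiled Neuron model and tokenizer to a local directory for reuse
>>> model.save_pretrained("roberta_base_qa_neuron/")
>>> tokenizer.save_pretrained("roberta_base_qa_neuron/")

>>> # Later, reload the already-compiled model without re-exporting
>>> model = NeuronModelForQuestionAnswering.from_pretrained("roberta_base_qa_neuron/")
>>> neuron_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
```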

### Defining Input Shapes

NeuronModels currently require static `input_shapes` to run inference. If you do not provide input shapes when passing `export=True`, default input shapes will be used.
Below is an example of how to specify the input shapes for the sequence length and batch size.

```python
>>> from optimum.neuron.pipelines import pipeline

>>> input_shapes = {"batch_size": 1, "sequence_length": 64}
>>> clt = pipeline("token-classification", model="dslim/bert-base-NER", export=True, input_shapes=input_shapes)
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = clt(context)
```

### Distributed Training with `optimum-neuron`
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/distributed_training.md

# Distributed Training with `optimum-neuron`

AWS Trainium instances provide powerful infrastructure for training large language models at scale. A `trn1.32xlarge` instance contains 16 Neuron devices with 32 cores total, offering 512GB of memory (16GB per core).

However, training large models presents a fundamental challenge: by default, each Neuron core operates as an independent data-parallel worker, requiring the entire model, gradients, and optimizer state (approximately 4× the model size) to fit within a single core's 16GB memory limit, with additional space needed for activations.
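
To make this concrete, here is a rough back-of-the-envelope estimate using the ~4× rule of thumb above; this is a simplified sketch that assumes bf16 weights and ignores activation memory:

```python
# Rough per-core memory estimate for plain data-parallel training, using the
# ~4x rule of thumb above (model + gradients + optimizer state) with bf16
# weights (2 bytes per parameter); activation memory is ignored.
NEURON_CORE_MEMORY_GB = 16

def training_footprint_gb(num_params: float, bytes_per_param: int = 2) -> float:
    model_gb = num_params * bytes_per_param / 1e9
    return 4 * model_gb

for billions in (1, 3, 8):
    footprint = training_footprint_gb(billions * 1e9)
    verdict = "fits" if footprint <= NEURON_CORE_MEMORY_GB else "does NOT fit"
    print(f"{billions}B params -> ~{footprint:.0f} GB, {verdict} in a single 16 GB core")
```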

For models that exceed these memory constraints, `optimum-neuron` provides sophisticated parallelism strategies that distribute computation and memory across multiple devices, enabling you to train models that would be impossible to fit on individual cores:

## Parallelism Strategies Overview

### 1. ZeRO-1 (Optimizer State Sharding)
[ZeRO-1](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html) is an optimizer-level optimization that reduces memory usage without changing your model architecture.

**How it works**: Shards the optimizer state (gradients, momentum, variance) across data-parallel ranks instead of replicating it on each device.

**Memory savings**: Reduces optimizer memory usage by `1/data_parallel_size`.

**When to use**: Always beneficial when training with multiple devices, regardless of model size.

### 2. Tensor Parallelism (Intra-layer Model Parallelism)
[Tensor Parallelism](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/neuronx-distributed/tensor_parallelism_overview.html) splits individual model layers across multiple devices.

**How it works**: Shards matrix multiplications (linear layers, attention) along rows or columns across devices. Each device computes part of each layer, requiring communication between devices for each forward/backward pass.

**Memory savings**: Reduces model parameter memory by `1/tensor_parallel_size`.

**When to use**: When your model is too large to fit on a single device, even after applying ZeRO-1.

**Typical deployment**: Usually applied within a single node (intra-node) due to high communication requirements.

**Trade-offs**: Increases communication overhead between devices, which can slow down training if overused.

### 3. Sequence Parallelism (Activation Sharding)
[Sequence parallelism](https://arxiv.org/pdf/2205.05198.pdf) is an optimization that works alongside Tensor Parallelism to further reduce memory usage.

**How it works**: Shards activations along the sequence dimension in regions where tensors are not already sharded by tensor parallelism.

**Memory savings**: Reduces activation memory proportional to sequence length, especially beneficial for long sequences.

**When to use**: Always enable when using tensor parallelism - it provides additional memory savings with minimal overhead.

**Requirement**: Only works in combination with tensor parallelism.

### 4. Pipeline Parallelism (Inter-layer Model Parallelism)
[Pipeline Parallelism](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/neuronx-distributed/pipeline_parallelism_overview.html) splits model layers across different devices.

**How it works**: Divides your model into stages, with each stage containing consecutive layers running on different devices. Uses microbatching to keep all devices busy.

**Memory savings**: Reduces model parameter memory by `1/pipeline_parallel_size`.

**When to use**: For very large models that don't fit even with tensor parallelism, or when you want to scale across many devices with less communication overhead than tensor parallelism.

**Typical deployment**: Usually applied across multiple nodes (inter-node) to scale to larger numbers of devices while minimizing high-bandwidth communication requirements.

**Trade-offs**: Introduces pipeline bubbles (idle time) and requires careful tuning of microbatch sizes.

The good news is that it is possible to combine those techniques, and `optimum-neuron` makes it very easy!

<Tip>

All the training examples in the optimum-neuron repo use these parallelism features via the `NeuronTrainer`.

</Tip>

## How to enable ZeRO-1?

ZeRO-1 can be enabled either through the `NeuronTrainer` or directly with the `NeuronAccelerator`.

### Via the `NeuronTrainer`

```python
from optimum.neuron import NeuronTrainingArguments, NeuronTrainer

# Enable ZeRO-1 in the training arguments
training_args = NeuronTrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=1,
    zero_1=True,  # Enable ZeRO-1
    bf16=True,
    # ... other training arguments
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()
```

<Tip>

Since the example scripts use the `NeuronTrainer`, you can enable ZeRO-1 when using them by adding the `--zero_1` flag to your command line.

For example:

```bash
torchrun --nproc_per_node=2 examples/training/qwen3/finetune_qwen3.py \
    --model_name_or_path Qwen/Qwen2.5-0.5B \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --per_device_train_batch_size 1 \
    --block_size 1024 \
    --bf16 \
    --zero_1 \
    --tensor_parallel_size 2 \
    --output_dir my_training/
```

</Tip>

### Via the `NeuronAccelerator`

When using the `NeuronAccelerator` directly, you need to create a `TrainingNeuronConfig` and enable ZeRO-1 separately:

```python
from torch.optim import AdamW
from optimum.neuron import NeuronAccelerator
from optimum.neuron.models.training.config import TrainingNeuronConfig

# Create the training configuration
trn_config = TrainingNeuronConfig()

# Create accelerator with ZeRO-1 enabled
accelerator = NeuronAccelerator(
    trn_config=trn_config,
    zero_1=True,  # Enable ZeRO-1
    mixed_precision="bf16",
)

model = ...  # Your model instance
optimizer = AdamW(model.parameters(), lr=5e-5)

# Prepare model and optimizer
model, optimizer = accelerator.prepare(model, optimizer)
```

## How to enable Tensor Parallelism?

Tensor Parallelism can be used with either the `NeuronTrainer` or `NeuronAccelerator`.

**Important**: Tensor parallelism requires models that have a custom modeling implementation in `optimum.neuron.models.training`.

When doing Tensor Parallelism, you have several important settings:
  1. The `tensor_parallel_size`: Ideally it should be the smallest value for which the model fits in memory.
  2. Whether sequence parallelism should be enabled: [Sequence parallelism](https://arxiv.org/pdf/2205.05198.pdf) shards the activations on the sequence axis outside of the tensor parallel regions, saving memory by sharding the activations.

When using distributed training, the training script is called by `torchrun`, which will dispatch it to workers, one worker per core. Each worker will load the sharded model and dispatch the parameters automatically across the cores. The `tensor_parallel_size` is the number of workers to shard the model parameters on.

### Via the `NeuronTrainer`

```python
from optimum.neuron import NeuronTrainingArguments, NeuronTrainer

# Configure tensor parallelism in training arguments
training_args = NeuronTrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=1,
    bf16=True,
    tensor_parallel_size=8,
    # ... other training arguments
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()
```

<Tip>

Since the example scripts use the `NeuronTrainer`, you can enable Tensor Parallelism when using them by specifying the `--tensor_parallel_size` argument.

For example:

```bash
torchrun --nproc_per_node=8 examples/training/qwen3/finetune_qwen3.py \
    --model_name_or_path Qwen/Qwen2.5-0.5B \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --per_device_train_batch_size 1 \
    --block_size 1024 \
    --bf16 \
    --tensor_parallel_size 8 \
    --output_dir my_training/
```

</Tip>

### Via the `NeuronAccelerator`

When using the `NeuronAccelerator` directly, you configure tensor parallelism through the `TrainingNeuronConfig`:

```python
from torch.optim import AdamW
from optimum.neuron import NeuronAccelerator
from optimum.neuron.models.training.config import TrainingNeuronConfig

# Configure tensor parallelism
trn_config = TrainingNeuronConfig(
    tensor_parallel_size=8,
    sequence_parallel_enabled=True,
    checkpoint_dir=None,  # Can be specified when resuming from checkpoint
)

accelerator = NeuronAccelerator(
    trn_config=trn_config,
    mixed_precision="bf16",
)

model = ...  # Your model instance
optimizer = AdamW(model.parameters(), lr=5e-5)

model, optimizer = accelerator.prepare(model, optimizer)
```

## How to enable Pipeline Parallelism?

Pipeline Parallelism allows you to split your model layers across multiple devices, enabling training of very large models that wouldn't fit on a single device, or even a single node.

**Important**: Pipeline parallelism requires models that have a custom modeling implementation in `optimum.neuron.models.training` and declare `SUPPORTS_PIPELINE_PARALLELISM = True`.

### Configuration Options

Pipeline parallelism has several configuration parameters:

- `pipeline_parallel_size`: Number of pipeline stages (devices to split layers across)
- `pipeline_parallel_num_microbatches`: Number of microbatches for pipeline scheduling
- When pipeline parallelism is enabled, ZeRO-1 can be automatically applied to the pipeline parallel optimizer

### Via the `NeuronTrainer`

```python
from optimum.neuron import NeuronTrainingArguments, NeuronTrainer
from optimum.neuron.models.training import LlamaForCausalLM  # Custom model implementation

# Configure pipeline parallelism in training arguments
training_args = NeuronTrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=4,  # Will be split into microbatches
    bf16=True,
    tensor_parallel_size=2,
    pipeline_parallel_size=4,                    # Split model across 4 pipeline stages
    pipeline_parallel_num_microbatches=4,        # Number of microbatches
    zero_1=True,                                 # Enable ZeRO-1 with pipeline parallelism
    # ... other training arguments
)

# Load model using custom implementation - must be done with the model class directly
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    trn_config=training_args.trn_config  # Pass the auto-generated trn_config
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()
```

### Via the `NeuronAccelerator`

```python
from optimum.neuron import NeuronAccelerator
from optimum.neuron.models.training.config import TrainingNeuronConfig
from optimum.neuron.models.training import LlamaForCausalLM
from torch.optim import AdamW

# Configure combined parallelism strategies
trn_config = TrainingNeuronConfig(
    tensor_parallel_size=2,
    pipeline_parallel_size=4,
    pipeline_parallel_num_microbatches=4,
    sequence_parallel_enabled=True,
)

accelerator = NeuronAccelerator(
    trn_config=trn_config,
    zero_1=True,  # Can combine with ZeRO-1
    mixed_precision="bf16",
)

# Load model with custom implementation
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    trn_config=trn_config
)

optimizer = AdamW(model.parameters(), lr=5e-5)
model, optimizer = accelerator.prepare(model, optimizer)
```

<Tip>

When using pipeline parallelism, the total number of processes should be at least `tensor_parallel_size * pipeline_parallel_size`. For example, with `tensor_parallel_size=2` and `pipeline_parallel_size=4`, you need 8 processes total.

</Tip>

## Combining Parallelism Strategies

You can combine multiple parallelism strategies for maximum memory efficiency and performance. Here's an example with all strategies combined:

### Via the `NeuronTrainer`

```python
from optimum.neuron import NeuronTrainingArguments, NeuronTrainer
from optimum.neuron.models.training import LlamaForCausalLM

# Example: Combine all parallelism strategies
training_args = NeuronTrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=32,
    bf16=True,
    gradient_checkpointing=True,

    # ZeRO-1
    zero_1=True,

    # Tensor parallelism
    tensor_parallel_size=4,
    disable_sequence_parallel=False,     # Enable sequence parallelism

    # Pipeline parallelism
    pipeline_parallel_size=2,
    pipeline_parallel_num_microbatches=8,

    # Additional optimizations
    fuse_qkv=True,                      # Fuse QKV projections for efficiency
    kv_size_multiplier=None,            # Auto-calculate optimal KV multiplier
)

# Load model using custom implementation
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    trn_config=training_args.trn_config
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()
```

This configuration uses 4 * 2 = 8 processes per model replica:
- Each tensor parallel group has 4 processes
- Each pipeline stage runs on one tensor parallel group

We can then run the training script on the `trn1.32xlarge` instance with 32 Neuron cores, resulting in the following configuration: `dp=4, tp=4, pp=2`, which means 4 data-parallel groups, each with 4 tensor-parallel devices, and 2 pipeline stages.
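
As a quick sanity check of the process layout (simple arithmetic, assuming one worker process per Neuron core):

```python
# Derive the data-parallel degree from the number of Neuron cores and the
# model-parallel settings used in the example above (tp=4, pp=2 on trn1.32xlarge).
world_size = 32                      # Neuron cores on a trn1.32xlarge
tensor_parallel_size = 4
pipeline_parallel_size = 2

assert world_size % (tensor_parallel_size * pipeline_parallel_size) == 0
data_parallel_size = world_size // (tensor_parallel_size * pipeline_parallel_size)
print(data_parallel_size)  # 4 -> dp=4, tp=4, pp=2
```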

## Checkpoint consolidation

Since distributed training uses sharded checkpoints across different workers, you need to consolidate them to create a standard model checkpoint that can be shared and used outside of the specific training configuration.

The Optimum CLI provides a way of doing that very easily via the `optimum neuron consolidate` command:

```bash
optimum-cli neuron consolidate --help

usage: optimum-cli neuron consolidate [-h] [-f {pytorch,safetensors}] checkpoint_dir output_dir

positional arguments:
  checkpoint_dir        The path to the directory containing the checkpoints.
  output_dir            The path to the output directory containing the consolidated checkpoint.

optional arguments:
  -h, --help            show this help message and exit
  -f {pytorch,safetensors}, --format {pytorch,safetensors}
                        The format used to save the consolidated checkpoint.
```

All you need to do is specify the sharded checkpoints directory and the output directory that will contain the consolidated checkpoints, and the command takes care of the rest.
It is also possible to specify the output format of the consolidated checkpoints. By default it will export them to the `safetensors` format, which is the recommended format to use.

Example:

Training with distributed parallelism just completed and the output dir is called `my_training`. The directory looks like the following:

```bash
my_training/
├── README.md
├── all_results.json
├── checkpoint-10
│   ├── config.json
│   ├── scheduler.pt
│   ├── special_tokens_map.json
│   ├── shards/
│   ├── tokenizer.json
│   ├── tokenizer.model
│   ├── tokenizer_config.json
│   ├── trainer_state.json
│   └── training_args.bin
├── config.json
├── special_tokens_map.json
├── shards/
│   ├── tp_rank_00_pp_rank_00
│   ├── tp_rank_01_pp_rank_00
│   ├── tp_rank_02_pp_rank_00
│   ├── tp_rank_03_pp_rank_00
│   ├── tp_rank_00_pp_rank_01
│   ├── tp_rank_01_pp_rank_01
│   ├── tp_rank_02_pp_rank_01
│   └── tp_rank_03_pp_rank_01
├── tokenizer.json
├── tokenizer.model
├── tokenizer_config.json
├── train_results.json
├── trainer_state.json
├── training_args.bin
└── trn_config.json
```

You can consolidate the sharded checkpoints in `my_training/shards`, which correspond to the sharded checkpoints saved at the end of training, by running the following command:

```bash
optimum-cli neuron consolidate my_training my_training_consolidated_checkpoint
```

<Tip>

The sharded checkpoints are saved under a directory called `shards`. The `optimum-cli neuron consolidate` command accepts as input both a directory that contains a `shards` directory, or the `shards` directory itself.

</Tip>

## Best Practices

### Choosing Parallelism Strategy

1. **Start with Tensor Parallelism**: Use the smallest `tensor_parallel_size` that fits your model in memory
2. **Add Pipeline Parallelism**: For very large models, combine with pipeline parallelism
3. **Enable Sequence Parallelism**: Always enable when using tensor parallelism for memory savings (set `disable_sequence_parallel=False`)
4. **Use ZeRO-1**: Combine with any parallelism strategy for optimizer memory savings

### Memory Optimization

- Enable `gradient_checkpointing` for large models
- Set appropriate `pipeline_parallel_num_microbatches` for pipeline parallelism

## Troubleshooting

### Common Issues

1. **Out of Memory**: Reduce batch size, increase parallelism, or enable gradient checkpointing
2. **Model Not Supported**: Ensure you're using a model from `optimum.neuron.models.training`
3. **Pipeline Parallelism Fails**: Check that the model supports pipeline parallelism
4. **Incorrect Process Count**: Ensure `nproc_per_node` matches your parallelism configuration

### Debugging Tips

- Start with smaller models and parallelism sizes
- Check that all processes can communicate properly
- Verify checkpoint directories and permissions
- Monitor Neuron device utilization

### NeuronX Text-generation-inference for AWS inferentia2
https://huggingface.co/docs/optimum.neuron/v0.4.0/guides/neuronx_tgi.md

# NeuronX Text-generation-inference for AWS inferentia2

Text Generation Inference ([TGI](https://huggingface.co/docs/text-generation-inference/)) is a toolkit for deploying and serving Large Language Models (LLMs).

A [neuron backend](https://huggingface.co/docs/text-generation-inference/en/backends/neuron) makes it possible to deploy TGI on Trainium and Inferentia chips.

### 🚀  Tutorials: How To Fine-tune & Run LLMs
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/finetune_llms_overview.md

# 🚀  Tutorials: How To Fine-tune & Run LLMs

Learn how to run and fine-tune models for optimal performance with AWS Trainium.
<div style="display: grid !important; grid-template-columns: repeat(auto-fit, minmax(400px, 1fr)) !important; gap: 24px !important; margin: 32px -32px !important; padding: 0 32px !important; max-width: none !important; width: calc(100% + 64px) !important;">

<a href="./finetune_llama" style="text-decoration: none !important; color: inherit !important; display: block !important;">
<div style="border-radius: 8px !important; background-color: white !important; box-shadow: 0 2px 8px rgba(0,0,0,0.1) !important; transition: all 0.3s ease-in-out !important; cursor: pointer !important; display: flex !important; flex-direction: column !important; height: 280px !important; width: 100% !important; border: none !important;"
onmouseover="this.style.transform='translateY(-4px)'; this.style.boxShadow='0 8px 24px rgba(0,0,0,0.15)'; this.querySelector('.card-content').style.backgroundPosition='0% 0'" onmouseout="this.style.transform='translateY(0)'; this.style.boxShadow='0 2px 8px rgba(0,0,0,0.1)'; this.querySelector('.card-content').style.backgroundPosition='100% 0'">
  <div style="height: 180px !important; width: 100% !important; background: linear-gradient(135deg, #FF6B35 0%, #F7931E 100%) !important; display: flex !important; align-items: center !important; justify-content: center !important; position: relative !important; border: none !important; margin: 0 !important; padding: 0 !important; border-radius: 8px 8px 0 0 !important; overflow: hidden !important;">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/optimum/neuron/training_tutorials/llama-logo.png" alt="Llama 3" style="width: 100% !important; height: 100% !important; object-fit: cover !important; border: none !important; border-radius: 8px 8px 0 0 !important;" onerror="this.outerHTML='<div style=\'color: white; font-size: 48px; font-weight: bold; z-index: 10;
position: relative;\'>🦙</div>'"/>
  </div>
  <div class="card-content" style="padding: 20px !important; background: white !important;
background-image: linear-gradient(90deg, transparent 0%, #f8f9fa 50%, #e9ecef 100%) !important; background-size: 200% 100% !important; background-position: 100% 0 !important;
transition: background-position 0.4s ease-out !important; width: 100% !important; box-sizing: border-box !important; flex: 1 !important; border: none !important; margin: 0 !important;
border-radius: 0 0 8px 8px !important;">
    <h3 style="margin: 0 0 8px 0 !important; font-size: 18px !important;
font-weight: 600 !important; color: #24292e !important;">
      Llama 3.1
    </h3>
    <p style="margin: 0 !important;
font-size: 14px !important; color: #586069 !important; line-height: 1.4 !important;">
      Instruction Fine-tuning of Llama 3.1 8B with LoRA on the Dolly dataset
    </p>
  </div>
</div>
</a>

<a href="./finetune_qwen3" style="text-decoration: none !important;
color: inherit !important; display: block !important;">
<div style="border-radius: 8px !important; background-color: white !important; box-shadow: 0 2px 8px rgba(0,0,0,0.1) !important;
transition: all 0.3s ease-in-out !important; cursor: pointer !important; display: flex !important; flex-direction: column !important; height: 280px !important; width: 100% !important;
border: none !important;" onmouseover="this.style.transform='translateY(-4px)'; this.style.boxShadow='0 8px 24px rgba(0,0,0,0.15)'; this.querySelector('.card-content').style.backgroundPosition='0% 0'" onmouseout="this.style.transform='translateY(0)'; this.style.boxShadow='0 2px 8px rgba(0,0,0,0.1)';
this.querySelector('.card-content').style.backgroundPosition='100% 0'">
  <div style="height: 180px !important; width: 100% !important; background: linear-gradient(135deg, #faf5ff 0%, #f3e8ff 100%) !important; display: flex !important;
align-items: center !important; justify-content: center !important; position: relative !important; border: none !important; margin: 0 !important; padding: 0 !important;
border-radius: 8px 8px 0 0 !important; overflow: hidden !important;">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/optimum/neuron/training_tutorials/qwen3-logo.png" alt="Qwen3" style="width: 100% !important;
height: 100% !important; object-fit: cover !important; border: none !important; border-radius: 8px 8px 0 0 !important;" onerror="this.outerHTML='<div style=\'color: white; font-size: 48px;
font-weight: bold; z-index: 10; position: relative;\'>🔷</div>'"/>
  </div>
  <div class="card-content" style="padding: 20px !important; background: white !important;
background-image: linear-gradient(90deg, transparent 0%, #f8f9fa 50%, #e9ecef 100%) !important; background-size: 200% 100% !important; background-position: 100% 0 !important;
transition: background-position 0.4s ease-out !important; width: 100% !important; box-sizing: border-box !important; flex: 1 !important; border: none !important; margin: 0 !important;
border-radius: 0 0 8px 8px !important;">
    <h3 style="margin: 0 0 8px 0 !important; font-size: 18px !important;
font-weight: 600 !important; color: #24292e !important;">
      Qwen3
    </h3>
    <p style="margin: 0 !important;
font-size: 14px !important; color: #586069 !important; line-height: 1.4 !important;">
      Fine-tune Qwen3 8B with LoRA on the Simple Recipes dataset
    </p>
  </div>
</div>
</a>

<a href="./pretraining_hyperpod_llm" style="text-decoration: none !important;
color: inherit !important; display: block !important;">
<div style="border-radius: 8px !important; background-color: white !important; box-shadow: 0 2px 8px rgba(0,0,0,0.1) !important;
transition: all 0.3s ease-in-out !important; cursor: pointer !important; display: flex !important; flex-direction: column !important; height: 280px !important; width: 100% !important;
border: none !important;" onmouseover="this.style.transform='translateY(-4px)'; this.style.boxShadow='0 8px 24px rgba(0,0,0,0.15)'; this.querySelector('.card-content').style.backgroundPosition='0% 0'" onmouseout="this.style.transform='translateY(0)'; this.style.boxShadow='0 2px 8px rgba(0,0,0,0.1)';
this.querySelector('.card-content').style.backgroundPosition='100% 0'">
  <div style="height: 180px !important; width: 100% !important; background: linear-gradient(135deg, #059669 0%, #0891b2 100%) !important; display: flex !important;
align-items: center !important; justify-content: center !important; position: relative !important; border: none !important; margin: 0 !important; padding: 0 !important;
border-radius: 8px 8px 0 0 !important; overflow: hidden !important;">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/optimum/neuron/training_tutorials/sagemaker-logo.png" alt="SageMaker Hyperpod" style="width: 100% !important;
height: 100% !important; object-fit: cover !important; border: none !important; border-radius: 8px 8px 0 0 !important;" onerror="this.outerHTML='<div style=\'color: white; font-size: 28px;
font-weight: bold; text-align: center; z-index: 10; position: relative;\'>☁️<br/>SageMaker</div>'"/>
  </div>
  <div class="card-content" style="padding: 20px !important; background: white !important;
background-image: linear-gradient(90deg, transparent 0%, #f8f9fa 50%, #e9ecef 100%) !important; background-size: 200% 100% !important; background-position: 100% 0 !important;
transition: background-position 0.4s ease-out !important; width: 100% !important; box-sizing: border-box !important; flex: 1 !important; border: none !important; margin: 0 !important;
border-radius: 0 0 8px 8px !important;">
    <h3 style="margin: 0 0 8px 0 !important; font-size: 18px !important;
font-weight: 600 !important; color: #24292e !important;">
      Llama 3.2 on SageMaker
    </h3>
    <p style="margin: 0 !important;
font-size: 14px !important; color: #586069 !important; line-height: 1.4 !important;">
      Continuous Pretraining of Llama 3.2 1B on SageMaker Hyperpod
    </p>
  </div>
</div>
</a>

</div>

## What you'll learn

These tutorials will guide you through the complete process of fine-tuning large language models on AWS Trainium:

- **📊 Data Preparation**: Load and preprocess datasets for supervised fine-tuning
- **🔧 Model Configuration**: Set up LoRA adapters and distributed training parameters
- **⚡ Training Optimization**: Leverage tensor parallelism, gradient checkpointing, and mixed precision
- **💾 Checkpoint Management**: Consolidate and merge model checkpoints for deployment
- **🚀 Model Deployment**: Export and test your fine-tuned models for inference

Choose the tutorial that best fits your use case and start fine-tuning your LLMs on AWS Trainium today!

### Getting started with AWS Trainium and Hugging Face Transformers
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/fine_tune_bert.md

# Getting started with AWS Trainium and Hugging Face Transformers

*This tutorial is available in two formats: as a [web page](https://huggingface.co/docs/optimum-neuron/training_tutorials/fine_tune_bert) and as a [notebook](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/text-classification/fine_tune_bert.ipynb).*

This guide will help you get started with [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/?nc1=h_ls) and Hugging Face Transformers. It covers how to set up a Trainium instance on AWS and how to load and fine-tune a Transformers model for text classification.

You will learn how to:

1. Setup AWS environment
2. Load and process the dataset
3. Fine-tune BERT using Hugging Face Transformers and Optimum Neuron

Before we can start, make sure you have a [Hugging Face Account](https://huggingface.co/join) to save artifacts and experiments.

## Quick intro: AWS Trainium

[AWS Trainium (Trn1)](https://aws.amazon.com/de/ec2/instance-types/trn1/) is a purpose-built EC2 instance family for deep learning (DL) training workloads. Trainium is the successor of [AWS Inferentia](https://aws.amazon.com/ec2/instance-types/inf1/?nc1=h_ls), focused on high-performance training workloads and claiming up to 50% cost-to-train savings over comparable GPU-based instances.

Trainium has been optimized for training natural language processing, computer vision, and recommender models. The accelerator supports a wide range of data types, including FP32, TF32, BF16, FP16, UINT8, and configurable FP8.

The biggest Trainium instance, the `trn1.32xlarge`, comes with over 500GB of accelerator memory, making it easy to fine-tune ~10B-parameter models on a single instance. Below you will find an overview of the available instance types. More details [here](https://aws.amazon.com/en/ec2/instance-types/trn1/#Product_details):

| instance size | accelerators | accelerator memory (GB) | vCPUs | CPU memory (GiB) | price per hour |
| --- | --- | --- | --- | --- | --- |
| trn1.2xlarge | 1 | 32 | 8 | 32 | $1.34 |
| trn1.32xlarge | 16 | 512 | 128 | 512 | $21.50 |
| trn1n.32xlarge (2x bandwidth) | 16 | 512 | 128 | 512 | $24.78 |

---

Now that we know what Trainium offers, let's get started. 🚀

*Note: This tutorial was created on a trn1.2xlarge AWS EC2 Instance.* 

## 1. Setup AWS environment

In this tutorial, we will use the `trn1.2xlarge` instance on AWS with 1 Accelerator, including two Neuron Cores and the [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2).

Once the instance is up and running, we can ssh into it. But instead of working inside a terminal, we want to use a `Jupyter` environment for preparing our dataset and launching the training. For this, we need to add port forwarding to the `ssh` command, which will tunnel our localhost traffic to the Trainium instance.

```bash
PUBLIC_DNS="" # IP address, e.g. ec2-3-80-....
KEY_PATH="" # local path to key, e.g. ssh/trn.pem

ssh -L 8080:localhost:8080 -i ${KEY_PATH} ubuntu@${PUBLIC_DNS}
```

We need to make sure we have the `training` extra installed to get all the necessary dependencies:

```bash
python -m pip install .[training]
```

We can now start our **`jupyter`** server.

```bash
python -m notebook --allow-root --port=8080
```

You should see a familiar **`jupyter`** output with a URL to the notebook.

**`http://localhost:8080/?token=8c1739aff1755bd7958c4cfccc8d08cb5da5234f61f129a9`**

We can click on it, and a **`jupyter`** environment opens in our local browser.

![jupyter.webp](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/optimum/neuron/tutorial-fine-tune-bert-jupyter.png)

We are going to use the Jupyter environment only for preparing the dataset, and then `torchrun` for launching our training script on both Neuron cores for distributed training. Let's create a new notebook and get started.

## 2. Load and process the dataset

We are training a text classification model on the [emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset to keep the example straightforward. The `emotion` dataset consists of English Twitter messages labeled with six basic emotions: anger, fear, joy, love, sadness, and surprise.

We will use the `load_dataset()` method from the [🤗 Datasets](https://huggingface.co/docs/datasets/index) library to load the `emotion` dataset.

```python
from datasets import load_dataset


# Dataset id from huggingface.co/dataset
dataset_id = "dair-ai/emotion"

# Load raw dataset
raw_dataset = load_dataset(dataset_id)

print(f"Train dataset size: {len(raw_dataset['train'])}")
print(f"Test dataset size: {len(raw_dataset['test'])}")

# Train dataset size: 16000
# Test dataset size: 2000
```

Let’s check out an example of the dataset.

```python
from random import randrange


random_id = randrange(len(raw_dataset["train"]))
raw_dataset["train"][random_id]
# {'text': 'i also like to listen to jazz whilst painting it makes me feel more artistic and ambitious actually look to the rainbow', 'label': 1}
```

We must convert our "natural language" text to token IDs to train our model. This is done by a tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary). If you want to learn more about this, check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=pt) of the [Hugging Face Course](https://huggingface.co/course/chapter1/1).

In order to avoid graph recompilation, inputs should have a fixed shape. We need to truncate or pad all samples to the same length.

```python
import os

from transformers import AutoTokenizer


# Model id to load the tokenizer
model_id = "bert-base-uncased"

# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)


# Tokenize helper function
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, return_tensors="pt")



# Tokenize dataset
tokenized_emotions = raw_dataset.map(tokenize, batched=True, remove_columns=["text"])
```

## 3. Fine-tune BERT using Hugging Face Transformers

We can use the **[Trainer](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.Trainer)** and **[TrainingArguments](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.TrainingArguments)** to fine-tune PyTorch-based transformer models.

We prepared a simple [train.py](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/text-classification/scripts/train.py) training script to perform training and evaluation on the dataset. Below is an excerpt:


```python
from transformers import Trainer, TrainingArguments

def parse_args():
	...

def training_function(args):

    ...

    # Download the model from huggingface.co/models
    model = AutoModelForSequenceClassification.from_pretrained(
        args.model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
    )

    training_args = TrainingArguments(
			...
    )

    # Create Trainer instance
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_emotions["train"],
        eval_dataset=tokenized_emotions["validation"],
        processing_class=tokenizer,
    )


    # Start training
    trainer.train()
```

We can load the training script into our environment using the `wget` command or manually copy it into the notebook from [here](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/text-classification/scripts/train.py).

```python
!wget https://raw.githubusercontent.com/huggingface/optimum-neuron/main/notebooks/text-classification/scripts/train.py
```

We will use `torchrun` to launch our training script on both Neuron cores for distributed training, thus allowing data parallelism. `torchrun` is a tool that automatically launches and coordinates a PyTorch script across multiple accelerators. We can pass the number of accelerators as the `nproc_per_node` argument alongside our hyperparameters.

We'll use the following command to launch training:

```python
!torchrun --nproc_per_node=2 train.py \
 --model_id bert-base-uncased \
 --lr 5e-5 \
 --per_device_train_batch_size 8 \
 --bf16 True \
 --epochs 3
```

After compilation, it will only take a few minutes to complete the training.

```txt
***** train metrics *****
  epoch                    =        3.0
  eval_loss                =     0.1761
  eval_runtime             = 0:00:03.73
  eval_samples_per_second  =    267.956
  eval_steps_per_second    =     16.881
  total_flos               =  1470300GF
  train_loss               =     0.2024
  train_runtime            = 0:07:27.14
  train_samples_per_second =     53.674
  train_steps_per_second   =      6.709

```



Last but not least, terminate the EC2 instance to avoid unnecessary charges. Looking at the price-performance, our training run only costs about **`$0.17`** (**`$1.34/h * 0.13h ≈ $0.17`**).

### 🚀 Continuous Pretraining of Llama 3.2 1B on SageMaker Hyperpod with Pre-built Containers
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/pretraining_hyperpod_llm.md

# 🚀 Continuous Pretraining of Llama 3.2 1B on SageMaker Hyperpod with Pre-built Containers

This tutorial demonstrates how to continuously pre-train the [Llama 3.2 1B](https://huggingface.co/meta-llama/Llama-3.2-1B) model using the Hugging Face [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index) library on [Amazon SageMaker Hyperpod](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod.html). We leverage several performance optimizations such as tensor parallelism, sequence parallelism, and ZeRO-1 to efficiently train large language models on Trainium-powered instances.

One of the key benefits of using SageMaker Hyperpod is the ability to leverage the pre-built Optimum Neuron containers provided by Hugging Face. These containers come with all the necessary libraries and dependencies pre-installed, making it easy to get started with training on AWS Trainium instances.

By using the SageMaker pre-built containers, you can avoid the hassle of manually setting up the environment and focus on the core training and fine-tuning tasks. The containers are optimized for performance and include various optimization techniques, such as tensor parallelism and selective checkpointing, to efficiently train large language models like Llama 3.2 1B.

You will learn how to:

- [Continuous Pretraining of Llama 3.2 1B on SageMaker Hyperpod with Pre-built Containers](#continuous-pretraining-of-llama-32-1b-on-sagemaker-hyperpod-with-pre-built-containers)
  - [1. Setup AWS Environment](#1-setup-aws-environment)
  - [2. Prepare the Training Environment](#2-prepare-the-training-environment)
  - [3. Configure the Training Job](#3-configure-the-training-job)
  - [4. Launch Training on SageMaker Hyperpod](#4-launch-training-on-sagemaker-hyperpod)
  - [5. Monitor and Validate Training](#5-monitor-and-validate-training)

## 1. Setup AWS Environment

Before starting this tutorial, you need to set up your AWS environment:

1. Create an AWS SageMaker Hyperpod cluster with at least one `trn1.32xlarge` instance. You can follow the [Hyperpod EKS workshop](https://catalog.workshops.aws/sagemaker-hyperpod-eks/en-US/00-setup/own-account) to set up the cluster.
2. Since Llama 3.2 is a gated model, users have to register on Hugging Face and obtain an [access token](https://huggingface.co/docs/hub/en/security-tokens) before running this example. You will also need to review and accept the license agreement on the [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) model page.
3. Configure your AWS credentials. If you haven't already set up your AWS credentials, you can do this by installing the AWS CLI and running `aws configure`. You'll need to enter your AWS Access Key ID, Secret Access Key, default region, and output format.
   ```bash
   aws configure
   AWS Access Key ID [None]: YOUR_ACCESS_KEY
   AWS Secret Access Key [None]: YOUR_SECRET_KEY
   Default region name [None]: YOUR_REGION
   Default output format [None]: json
   ```

## 2. Prepare the Training Environment

Set up your training environment with the necessary dependencies:

```bash
git clone https://github.com/huggingface/optimum-neuron.git
mkdir ~/pre-training
cd ~/pre-training

cp -r ../optimum-neuron/docs/source/training_tutorials/amazon_eks .
cd amazon_eks
```

Login to ECR and pull the `huggingface-pytorch-training-neuronx` image:

```bash
region=us-east-1
dlc_account_id=************
aws ecr get-login-password --region $region | docker login --username AWS --password-stdin $dlc_account_id.dkr.ecr.$region.amazonaws.com

docker pull ${dlc_account_id}.dkr.ecr.${region}.amazonaws.com/huggingface-pytorch-training-neuronx:2.1.2-transformers4.43.2-neuronx-py310-sdk2.20.0-ubuntu20.04-v1.0
```

Build and push the Docker image to your ECR registry:

```bash
export AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
export ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
export REGISTRY=${ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/
export IMAGE=optimum-neuron-llama-pretraining
export TAG=:latest

docker build -t ${REGISTRY}${IMAGE}${TAG} .
```

Push the image to your private registry:

```bash
# Create registry if needed
export REGISTRY_COUNT=$(aws ecr describe-repositories | grep \"${IMAGE}\" | wc -l)
if [ "${REGISTRY_COUNT//[!0-9]/}" == "0" ]; then
   echo "Creating repository ${REGISTRY}${IMAGE} ..."
   aws ecr create-repository --repository-name ${IMAGE}
else
   echo "Repository ${REGISTRY}${IMAGE} already exists"
fi

# Login to registry
echo "Logging in to $REGISTRY ..."
aws ecr get-login-password | docker login --username AWS --password-stdin $REGISTRY

# Push image to registry
docker image push ${REGISTRY}${IMAGE}${TAG}
```

## 3. Configure the Training Job

Next, you will generate the job specification used by the pre-training job. The job authenticates with Hugging Face using the access token mentioned in the prerequisite steps, so modify the `generate-jobspec.sh` script to include it before running:

```bash
export HF_ACCESS_TOKEN="<your_HF_token_here>"
```

Generate the Kubernetes job specification by executing `generate-jobspec.sh`. This will create a deployment manifest called `llama_train.yaml` for the Amazon SageMaker Hyperpod EKS cluster.

```bash
./generate-jobspec.sh
```

## 4. Launch Training on SageMaker Hyperpod

Deploy the training job to your Kubernetes cluster:

```bash
kubectl apply -f llama_train.yaml
```

The manifest runs the training script on the cluster using torchrun for distributed training. You can explore the complete training script at [run_clm.py](https://github.com/huggingface/optimum-neuron/blob/main/examples/language-modeling/run_clm.py).

You will use the following distributed training techniques in this script:
- Distributed Training: Uses torchrun with 8 processes per node for efficient multi-device training
- Model Parallelism: Implements both tensor parallelism (TP=8) and pipeline parallelism (PP=1)
- Mixed Precision: Utilizes BFloat16 for improved training efficiency
- Gradient Checkpointing: Enables memory-efficient training

The manifest runs the following command on the cluster. The environment variables are set when creating the manifest in `generate-jobspec.sh`.

```bash
torchrun --nproc_per_node=8 --nnodes=${NUM_NODES} run_clm.py \
    --model_name_or_path=${HF_MODEL_NAME} \
    --token=${HF_ACCESS_TOKEN} \
    --dataset_name=${DATASET_NAME} \
    --dataset_config_name=${DATASET_CONFIG_NAME} \
    --streaming=True \
    --cache_dir=${TOKENIZED_DATA_PATH} \
    --num_train_epochs=1 \
    --do_train \
    --learning_rate=1e-4 \
    --max_steps=${MAX_STEPS} \
    --per_device_train_batch_size=${BATCH_SIZE} \
    --per_device_eval_batch_size=4 \
    --gradient_accumulation_steps=1 \
    --gradient_checkpointing \
    --block_size=4096 \
    --bf16 \
    --max_grad_norm=1.0 \
    --lr_scheduler_type=linear \
    --tensor_parallel_size=8 \
    --pipeline_parallel_size=1 \
    --logging_steps=1 \
    --save_total_limit=1 \
    --output_dir=${CHECKPOINT_DIR} \
    --overwrite_output_dir
```

The training job will now start running on the SageMaker Hyperpod cluster.

This job uses a pre-built script from `optimum-neuron`. It relies on the `NeuronTrainer` class from the Optimum Neuron library, a specialized version of the Hugging Face `Trainer` optimized for training on AWS Trainium instances.

Here's an overview of the main components in the script (a minimal sketch follows the list):

   - Model Loading: The model is loaded using `AutoModelForCausalLM.from_pretrained()` with lazy loading for parallelism.

   - Data Processing: The dataset is tokenized and processed into chunks suitable for language modeling.

   - Training Arguments: The script uses `NeuronTrainingArguments` to configure training hyperparameters, including options for tensor parallelism and pipeline parallelism.

   - Trainer Setup: A `NeuronTrainer` (`optimum.neuron.NeuronTrainer`) instance is created with the model, training arguments, datasets, and other necessary components.

   - Training Loop: The `trainer.train()` method is called to start the continuous pretraining process.
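
Putting these components together, the following is a minimal sketch of the loop described above. It is not the actual `run_clm.py`: the dataset, data collator, and argument values are illustrative stand-ins for the values passed on the command line in the manifest.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

model_name = "meta-llama/Llama-3.2-1B"
block_size = 4096

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default

# Causal LM to pre-train; run_clm.py loads it lazily so it can be sharded across cores.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# "wikitext" is a stand-in for ${DATASET_NAME}; run_clm.py streams and chunks the real dataset.
raw_dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(examples):
    return tokenizer(examples["text"], truncation=True, max_length=block_size)

train_dataset = raw_dataset.map(tokenize, batched=True, remove_columns=raw_dataset.column_names)

training_args = NeuronTrainingArguments(
    output_dir="/fsx/output",
    do_train=True,
    bf16=True,                     # mixed precision
    gradient_checkpointing=True,   # memory-efficient training
    tensor_parallel_size=8,        # TP=8
    pipeline_parallel_size=1,      # PP=1
    per_device_train_batch_size=1,
    learning_rate=1e-4,
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```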


## 5. Monitor and Validate Training

You can monitor the progress through Kubernetes logs:

```bash
# Monitor training logs
kubectl logs -f -n kubeflow llama-training-eks-worker-0

# Validate saved checkpoints
kubectl exec -it -n kubeflow llama-training-eks-worker-0 -- ls -l /fsx/output
```

Once the pretraining is complete, you can fine-tune the model for specific tasks using the techniques covered in the previous tutorials. Congrats on pre-training Llama on AWS Trainium!

### 🚀 Instruction Fine-Tuning of Llama 3.1 8B with LoRA
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/finetune_llama.md

# 🚀 Instruction Fine-Tuning of Llama 3.1 8B with LoRA

This tutorial shows how to fine-tune the Llama 3.1 model on AWS Trainium accelerators using optimum-neuron.

**This is based on the [Llama 3.1 fine-tuning example script](https://github.com/huggingface/optimum-neuron/tree/main/examples/training/llama).**

## 1. 🛠️ Setup AWS Environment

We'll use a `trn1.32xlarge` instance with 16 Trainium Accelerators (32 Neuron Cores) and the Hugging Face Neuron Deep Learning AMI.

The Hugging Face AMI includes all required libraries pre-installed:
- `datasets`, `transformers`, `optimum-neuron`
- Neuron SDK packages
- No additional environment setup needed

To create your instance, follow the guide [here](https://huggingface.co/docs/optimum-neuron/ec2-setup).

**Model Access:** The Llama 3.1 model is gated and requires access approval. You can request access at [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). Once approved, make sure to authenticate with the Hugging Face Hub:

```bash
huggingface-cli login
```

## 2. 📊 Load and Prepare the Dataset

We'll use the [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, an open source dataset of instruction-following records on categories outlined in the [InstructGPT paper](https://arxiv.org/abs/2203.02155), including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

```
{
  "instruction": "What is world of warcraft",
  "context": "",
  "response": (
        "World of warcraft is a massive online multi player role playing game. "
        "It was released in 2004 by bizarre entertainment"
    )
}
```

To load the dataset we use the `load_dataset()` method from the `datasets` library.

```python
from random import randrange

from datasets import load_dataset


# Load dataset from the hub
dataset_id = "databricks/databricks-dolly-15k"
dataset = load_dataset(dataset_id, split="train")

dataset_size = len(dataset)
print(f"dataset size: {dataset_size}")
# dataset size: 15011
```

To instruction fine-tune our model, we need to convert our structured examples into a collection of tasks described via instructions. We define a formatting function to preprocess the dataset.

The dataset should be structured with input-output pairs, where each input is a prompt and the output is the expected response from the model.

```python
def format_dolly(example, tokenizer):
    """Format Dolly dataset examples using the tokenizer's chat template."""
    user_content = example["instruction"]
    if len(example["context"]) > 0:
        user_content += f"\n\nContext: {example['context']}"

    messages = [
        {
            "role": "system",
            "content": "Cutting Knowledge Date: December 2023\nToday Date: 29 Jul 2025\n\nYou are a helpful assistant",
        },
        {"role": "user", "content": user_content},
        {"role": "assistant", "content": example["response"]},
    ]

    return tokenizer.apply_chat_template(messages, tokenize=False)
```

Note: this function is defined in the [Python script](https://github.com/huggingface/optimum-neuron/blob/main/examples/training/llama/finetune_llama.py) used to run this tutorial.


## 3. 🎯 Fine-tune Llama 3.1 with NeuronSFTTrainer and PEFT

For standard PyTorch fine-tuning, you'd typically use [PEFT](https://github.com/huggingface/peft) with LoRA adapters and the [`SFTTrainer`](https://huggingface.co/docs/trl/en/sft_trainer).

On AWS Trainium, `optimum-neuron` provides `NeuronSFTTrainer` as a drop-in replacement.

**Distributed Training on Trainium:**
Since Llama 3.1 8B doesn't fit on a single accelerator, we use distributed training techniques:
- Data Parallel (DDP)
- Tensor Parallelism  

Model loading and LoRA configuration work similarly to other accelerators.

Combining all the pieces together, and assuming the dataset has already been loaded, we can write the following code to fine-tune Llama 3.1 on AWS Trainium:

```python
model_id = "meta-llama/Llama-3.1-8B"

# Define the training arguments
output_dir = "Llama-3.1-8B-finetuned"
training_args = NeuronTrainingArguments(
    output_dir=output_dir,
    num_train_epochs=3,
    do_train=True,
    max_steps=-1,  # -1 means train until the end of the dataset
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-4,
    bf16=True,  
    tensor_parallel_size=8,
    logging_steps=1,
    warmup_steps=5,
    async_save=True,
    overwrite_output_dir=True,
)

# Load the model with the NeuronModelForCausalLM class.
# It will load the model with a custom modeling specifically designed for AWS Trainium.
trn_config = training_args.trn_config
dtype = torch.bfloat16 if training_args.bf16 else torch.float32
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    trn_config,
    torch_dtype=dtype,
    # Use FlashAttention2 for better performance and to be able to use larger sequence lengths.
    attn_implementation="flash_attention_2",
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["embed_tokens", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# Converting the NeuronTrainingArguments to a dictionary to feed them to the NeuronSFTConfig.
args = training_args.to_dict()

sft_config = NeuronSFTConfig(
    max_seq_length=2048,
    packing=True,
    **args,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = "<|finetune_right_pad_id|>"

# Set chat template for Llama 3.1 format
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'system' %}"
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{{ message['content'] }}<|eot_id|>"
    "{% elif message['role'] == 'user' %}"
    "<|start_header_id|>user<|end_header_id|>\n\n{{ message['content'] }}<|eot_id|>"
    "{% elif message['role'] == 'assistant' %}"
    "<|start_header_id|>assistant<|end_header_id|>\n\n{{ message['content'] }}<|eot_id|>"
    "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
    "{% endif %}"
)

# The NeuronSFTTrainer will use `format_dolly` to format the dataset and `lora_config` to apply LoRA on the
# model.
trainer = NeuronSFTTrainer(
    args=sft_config,
    model=model,
    peft_config=lora_config,
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=lambda example: format_dolly(example, tokenizer),
)
trainer.train()
```

📝 **Complete script available:** All steps above are combined in a ready-to-use script [finetune_llama.py](https://github.com/huggingface/optimum-neuron/blob/main/examples/training/llama/finetune_llama.py).


To launch training, just run the following command in your AWS Trainium instance:

```bash
# Flags for Neuron compilation
export NEURON_CC_FLAGS="--model-type transformer --retry_failed_compilation"
export NEURON_FUSE_SOFTMAX=1
export NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS=3 # Async Runtime
export MALLOC_ARENA_MAX=64 # Host OOM mitigation

# Variables for training
PROCESSES_PER_NODE=32
NUM_EPOCHS=3
TP_DEGREE=8
BS=1
GRADIENT_ACCUMULATION_STEPS=16
LOGGING_STEPS=1
MODEL_NAME="meta-llama/Llama-3.1-8B" # Change this to the desired model name
OUTPUT_DIR="$(echo $MODEL_NAME | cut -d'/' -f2)-finetuned"
DISTRIBUTED_ARGS="--nproc_per_node $PROCESSES_PER_NODE"

if [ "$NEURON_EXTRACT_GRAPHS_ONLY" = "1" ]; then
    MAX_STEPS=5
else
    MAX_STEPS=-1
fi

torchrun --nproc_per_node $PROCESSES_PER_NODE finetune_llama.py \
  --model_id $MODEL_NAME \
  --num_train_epochs $NUM_EPOCHS \
  --do_train \
  --max_steps $MAX_STEPS \
  --per_device_train_batch_size $BS \
  --gradient_accumulation_steps $GRADIENT_ACCUMULATION_STEPS \
  --learning_rate 1e-4 \
  --bf16 \
  --tensor_parallel_size $TP_DEGREE \
  --async_save \
  --warmup_steps 5 \
  --logging_steps $LOGGING_STEPS \
  --output_dir $OUTPUT_DIR \
  --overwrite_output_dir
```

🔧 **Single command execution:** The complete bash training script [finetune_llama.sh](https://github.com/huggingface/optimum-neuron/blob/main/examples/training/llama/finetune_llama.sh) is available:

```bash
./finetune_llama.sh
```

## 4. 🔄 Consolidate and Test the Fine-Tuned Model

Optimum Neuron saves model shards separately during distributed training. These need to be consolidated before use.

Use the Optimum CLI to consolidate:

```bash
optimum-cli neuron consolidate Llama-3.1-8B-finetuned Llama-3.1-8B-finetuned/adapter_default
```

This will create an `adapter_model.safetensors` file, the LoRA adapter weights that we trained in the previous step. We can now reload the model and merge it, so it can be loaded for evaluation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig


MODEL_NAME = "meta-llama/Llama-3.1-8B"
ADAPTER_PATH = "Llama-3.1-8B-finetuned/adapter_default"
MERGED_MODEL_PATH = "Llama-3.1-8B-dolly"

# Load base model
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Load adapter configuration and model
adapter_config = PeftConfig.from_pretrained(ADAPTER_PATH)
finetuned_model = PeftModel.from_pretrained(model, ADAPTER_PATH, config=adapter_config)

print("Saving tokenizer")
tokenizer.save_pretrained(MERGED_MODEL_PATH)
print("Saving model")
finetuned_model = finetuned_model.merge_and_unload()
finetuned_model.save_pretrained(MERGED_MODEL_PATH)
```

Once this step is done, it is possible to test the model with a new prompt.
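
For example, a quick smoke test (a minimal sketch; the prompt mirrors the chat format used during fine-tuning and is otherwise arbitrary) can load the merged model with plain `transformers` and generate a completion:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MERGED_MODEL_PATH = "Llama-3.1-8B-dolly"

tokenizer = AutoTokenizer.from_pretrained(MERGED_MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MERGED_MODEL_PATH)

# Build the prompt with the same Llama 3.1 header format used in training
# (the tokenizer adds the <|begin_of_text|> token automatically).
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "What is AWS Trainium?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```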

You have successfully created a fine-tuned model from Llama 3.1!

## 5. 🤗 Push to Hugging Face Hub

Share your fine-tuned model with the community by uploading it to the Hugging Face Hub.

**Step 1: Authentication**
```bash
huggingface-cli login
```

**Step 2: Upload your model**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MERGED_MODEL_PATH = "Llama-3.1-8B-dolly"
HUB_MODEL_NAME = "your-username/llama3.1-8b-dolly"

# Load and push tokenizer
tokenizer = AutoTokenizer.from_pretrained(MERGED_MODEL_PATH)
tokenizer.push_to_hub(HUB_MODEL_NAME)

# Load and push model
model = AutoModelForCausalLM.from_pretrained(MERGED_MODEL_PATH)
model.push_to_hub(HUB_MODEL_NAME)
```

🎉 **Your fine-tuned Llama 3.1 model is now available on the Hub for others to use!**

### 🚀 Fine-Tune Qwen3 8B with LoRA
https://huggingface.co/docs/optimum.neuron/v0.4.0/training_tutorials/finetune_qwen3.md

# 🚀 Fine-Tune Qwen3 8B with LoRA

This tutorial shows how to fine-tune the Qwen3 model on AWS Trainium accelerators using optimum-neuron.

**This is based on the [Qwen3 fine-tuning example script](https://github.com/huggingface/optimum-neuron/tree/main/examples/training/qwen3).**

## 1. 🛠️ Setup AWS Environment

We'll use a `trn1.32xlarge` instance with 16 Trainium Accelerators (32 Neuron Cores) and the Hugging Face Neuron Deep Learning AMI.

The Hugging Face AMI includes all required libraries pre-installed:
- `datasets`, `transformers`, `optimum-neuron`
- Neuron SDK packages
- No additional environment setup needed

To create your instance, follow the guide [here](https://huggingface.co/docs/optimum-neuron/ec2-setup).

## 2. 📊 Load and Prepare the Dataset

We'll use the [simple recipes dataset](https://huggingface.co/datasets/tengomucho/simple_recipes) to fine-tune our model for recipe generation.

```
{
    'recipes': "- Preheat oven to 350 degrees\n- Butter two 9x5' loaf pans\n- Cream the sugar and the butter until light and whipped\n- Add the bananas, eggs, lemon juice, orange rind\n- Beat until blended uniformly\n- Be patient, and beat until the banana lumps are gone\n- Sift the dry ingredients together\n- Fold lightly and thoroughly into the banana mixture\n- Pour the batter into prepared loaf pans\n- Bake for 45 to 55 minutes, until the loaves are firm in the middle and the edges begin to pull away from the pans\n- Cool the loaves on racks for 30 minutes before removing from the pans\n- Freezes well",
    'names': 'Beat this banana bread'
}
```

To load the dataset we use the `load_dataset()` method from the `datasets` library.

```python
from random import randrange

from datasets import load_dataset


# Load dataset from the hub
dataset_id = "tengomucho/simple_recipes"
recipes = load_dataset(dataset_id, split="train")

dataset_size = len(recipes)
print(f"dataset size: {dataset_size}")
print(recipes[randrange(dataset_size)])
# dataset size: 20000
```

To tune our model we need to convert our structured examples into chat-style prompt/response pairs, so we define a preprocessing function that we can map over the dataset.

The dataset should be structured with input-output pairs, where each input is a prompt and the output is the expected response from the model.
We will make use of the model’s tokenizer chat template and preprocess the dataset to be fed to the trainer.

```python
# Preprocesses the dataset
def preprocess_dataset_with_eos(eos_token):
    def preprocess_function(examples):
        recipes = examples["recipes"]
        names = examples["names"]

        chats = []
        for recipe, name in zip(recipes, names):
            # Append the EOS token to the response
            recipe += eos_token

            chat = [
                {"role": "user", "content": f"How can I make {name}?"},
                {"role": "assistant", "content": recipe},
            ]

            chats.append(chat)
        return {"messages": chats}

    dataset = recipes.map(preprocess_function, batched=True, remove_columns=recipes.column_names)
    return dataset

# Structures the dataset into prompt-expected output pairs.
def formatting_function(examples):
    return tokenizer.apply_chat_template(examples["messages"], tokenize=False, add_generation_prompt=False)
```

Note: these functions reference `eos_token` and `tokenizer`, which are defined in the [Python script](https://github.com/huggingface/optimum-neuron/blob/main/examples/training/qwen3/finetune_qwen3.py) used to run this tutorial.


## 3. 🎯 Fine-tune Qwen3 with NeuronSFTTrainer and PEFT

For standard PyTorch fine-tuning, you'd typically use [PEFT](https://github.com/huggingface/peft) with LoRA adapters and the [`SFTTrainer`](https://huggingface.co/docs/trl/en/sft_trainer).

On AWS Trainium, `optimum-neuron` provides `NeuronSFTTrainer` as a drop-in replacement.

**Distributed Training on Trainium:**
Since Qwen3 doesn't fit on a single accelerator, we use distributed training techniques:
- Data Parallel (DDP)
- Tensor Parallelism  

Model loading and LoRA configuration work similarly to other accelerators.

Combining all the pieces together, and assuming the dataset has already been loaded, we can write the following code to fine-tune Qwen3 on AWS Trainium:

```python
model_id = "Qwen/Qwen3-8B"

# Define the training arguments
output_dir = "qwen3-finetuned-recipes"
training_args = NeuronTrainingArguments(
    output_dir=output_dir,
    num_train_epochs=3,
    do_train=True,
    max_steps=-1,  # -1 means train until the end of the dataset
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-4,
    bf16=True,
    tensor_parallel_size=8,
    logging_steps=2,
    lr_scheduler_type="cosine",
    overwrite_output_dir=True,
)

# Load the model with the NeuronModelForCausalLM class.
# It will load the model with a custom modeling specifically designed for AWS Trainium.
trn_config = training_args.trn_config
dtype = torch.bfloat16 if training_args.bf16 else torch.float32
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    trn_config,
    torch_dtype=dtype,
    # Use FlashAttention2 for better performance and to be able to use larger sequence lengths.
    attn_implementation="flash_attention_2",
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=[
        "embed_tokens",
        "q_proj",
        "v_proj",
        "o_proj",
        "k_proj",
        "up_proj",
        "down_proj",
        "gate_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)

# Converting the NeuronTrainingArguments to a dictionary to feed them to the NeuronSFTConfig.
args = training_args.to_dict()

sft_config = NeuronSFTConfig(
    max_seq_length=4096,
    packing=True,
    **args,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = preprocess_dataset_with_eos(tokenizer.eos_token)

def formatting_function(examples):
    return tokenizer.apply_chat_template(examples["messages"], tokenize=False, add_generation_prompt=False)

# The NeuronSFTTrainer will use `formatting_function` to format the dataset and `lora_config` to apply LoRA on the
# model.
trainer = NeuronSFTTrainer(
    args=sft_config,
    model=model,
    peft_config=lora_config,
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=formatting_function,
)
trainer.train()
```

📝 **Complete script available:** All steps above are combined in a ready-to-use script [finetune_qwen3.py](https://github.com/huggingface/optimum-neuron/blob/main/examples/training/qwen3/finetune_qwen3.py).


To launch training, just run the following command in your AWS Trainium instance:

```bash
# Flags for Neuron compilation
export NEURON_CC_FLAGS="--model-type transformer --retry_failed_compilation"
export NEURON_FUSE_SOFTMAX=1
export NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS=3 # Async Runtime
export MALLOC_ARENA_MAX=64 # Host OOM mitigation

# Variables for training
PROCESSES_PER_NODE=32
NUM_EPOCHS=3
TP_DEGREE=8
BS=1
GRADIENT_ACCUMULATION_STEPS=8
LOGGING_STEPS=2
MODEL_NAME="Qwen/Qwen3-8B" # Change this to the desired model name
OUTPUT_DIR="$(echo $MODEL_NAME | cut -d'/' -f2)-finetuned"
DISTRIBUTED_ARGS="--nproc_per_node $PROCESSES_PER_NODE"
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

if [ "$NEURON_EXTRACT_GRAPHS_ONLY" = "1" ]; then
    MAX_STEPS=5
else
    MAX_STEPS=-1
fi

torchrun --nproc_per_node $PROCESSES_PER_NODE finetune_qwen3.py \
  --model_id $MODEL_NAME \
  --num_train_epochs $NUM_EPOCHS \
  --do_train \
  --max_steps $MAX_STEPS \
  --per_device_train_batch_size $BS \
  --gradient_accumulation_steps $GRADIENT_ACCUMULATION_STEPS \
  --learning_rate 8e-4 \
  --bf16 \
  --tensor_parallel_size $TP_DEGREE \
  --zero_1 \
  --async_save \
  --logging_steps $LOGGING_STEPS \
  --output_dir $OUTPUT_DIR \
  --lr_scheduler_type "cosine" \
  --overwrite_output_dir
```

🔧 **Single command execution:** The complete bash training script [finetune_qwen3.sh](https://github.com/huggingface/optimum-neuron/blob/main/examples/training/qwen3/finetune_qwen3.sh) is available:

```bash
./finetune_qwen3.sh
```

## 4. 🔄 Consolidate and Test the Fine-Tuned Model

Optimum Neuron saves model shards separately during distributed training. These need to be consolidated before use.

Use the Optimum CLI to consolidate:

```bash
optimum-cli neuron consolidate Qwen3-8B-finetuned Qwen3-8B-finetuned/adapter_default
```

This will create an `adapter_model.safetensors` file, the LoRA adapter weights that we trained in the previous step. We can now reload the model and merge it, so it can be loaded for evaluation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig


MODEL_NAME = "Qwen/Qwen3-8B"
ADAPTER_PATH = "Qwen3-8B-finetuned/adapter_default"
MERGED_MODEL_PATH = "Qwen3-8B-recipes"

# Load base model
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Load adapter configuration and model
adapter_config = PeftConfig.from_pretrained(ADAPTER_PATH)
finetuned_model = PeftModel.from_pretrained(model, ADAPTER_PATH, config=adapter_config)

print("Saving tokenizer")
tokenizer.save_pretrained(MERGED_MODEL_PATH)
print("Saving model")
finetuned_model = finetuned_model.merge_and_unload()
finetuned_model.save_pretrained(MERGED_MODEL_PATH)
```

Once this step is done, it is possible to test the model with a new prompt.
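
For example, a quick smoke test (a minimal sketch; the recipe prompt mirrors the training data and is otherwise arbitrary) can load the merged model with plain `transformers` and generate a recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MERGED_MODEL_PATH = "Qwen3-8B-recipes"

tokenizer = AutoTokenizer.from_pretrained(MERGED_MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MERGED_MODEL_PATH)

# Same prompt shape as the training data: "How can I make <recipe name>?"
messages = [{"role": "user", "content": "How can I make banana bread?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```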

You have successfully created a fine-tuned model from Qwen3!

## 5. 🤗 Push to Hugging Face Hub

Share your fine-tuned model with the community by uploading it to the Hugging Face Hub.

**Step 1: Authentication**
```bash
huggingface-cli login
```

**Step 2: Upload your model**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MERGED_MODEL_PATH = "Qwen3-8B-recipes"
HUB_MODEL_NAME = "your-username/qwen3-8b-recipes"

# Load and push tokenizer
tokenizer = AutoTokenizer.from_pretrained(MERGED_MODEL_PATH)
tokenizer.push_to_hub(HUB_MODEL_NAME)

# Load and push model
model = AutoModelForCausalLM.from_pretrained(MERGED_MODEL_PATH)
model.push_to_hub(HUB_MODEL_NAME)
```

🎉 **Your fine-tuned Qwen3 model is now available on the Hub for others to use!**

### Models
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/modeling_auto.md

# Models

## Generic model classes

### NeuronTracedModel[[optimum.neuron.NeuronTracedModel]]

The `NeuronTracedModel` class is available for instantiating a base Neuron model without a specific head.
It is used as the base class for all tasks but text generation.
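
For illustration only (the checkpoint name and static shapes below are assumptions; the export options are covered in the export guide), a task-specific subclass can compile and load a vanilla Transformers checkpoint on the fly by passing `export=True` together with the static input shapes:

```python
from optimum.neuron import NeuronModelForSequenceClassification

# Compile a vanilla Transformers checkpoint into a Neuron TorchScript module and load it.
# batch_size and sequence_length are the static shapes fixed at compilation time.
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,
    batch_size=1,
    sequence_length=128,
)

# Save the compiled artifacts so they can be reloaded later without recompiling.
model.save_pretrained("distilbert_sst2_neuron/")
```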

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronTracedModel</name><anchor>optimum.neuron.NeuronTracedModel</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_traced.py#L72</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Base class running compiled and optimized models on Neuron devices.

It implements generic methods for interacting with the Hugging Face Hub, as well as for compiling vanilla
Transformers models to neuron-optimized TorchScript modules and exporting them using the `optimum.exporters.neuron` toolchain.

Class attributes:
- model_type (`str`, *optional*, defaults to `"neuron_model"`) -- The name of the model type to use when
registering the NeuronTracedModel classes.
- auto_model_class (`Type`, *optional*, defaults to `AutoModel`) -- The `AutoModel` class to be represented by the
current NeuronTracedModel class.

Common attributes:
- model (`torch.jit._script.ScriptModule`) -- The loaded `ScriptModule` compiled for neuron devices.
- config ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/configuration#transformers.PretrainedConfig)) -- The configuration of the model.
- model_save_dir (`Path`) -- The directory where a neuron compiled model is saved.
By default, if the loaded model is local, the directory of the original model is used; otherwise, the cache
directory is used.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>can_generate</name><anchor>optimum.neuron.NeuronTracedModel.can_generate</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_traced.py#L661</source><parameters>[]</parameters></docstring>

Returns whether this model can generate sequences with `.generate()`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_input_static_shapes</name><anchor>optimum.neuron.NeuronTracedModel.get_input_static_shapes</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_traced.py#L516</source><parameters>[{"name": "neuron_config", "val": ": NeuronDefaultConfig"}]</parameters></docstring>

Gets a dictionary of inputs with their valid static shapes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_model</name><anchor>optimum.neuron.NeuronTracedModel.load_model</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_traced.py#L120</source><parameters>[{"name": "path", "val": ": str | pathlib.Path"}, {"name": "to_neuron", "val": ": bool = False"}, {"name": "device_id", "val": ": int = 0"}]</parameters><paramsdesc>- **path** (`str | Path`) --
  Path of the compiled model.
- **to_neuron** (`bool`, defaults to `False`) --
  Whether to move manually the traced model to NeuronCore. It's only needed when `inline_weights_to_neff=False`, otherwise it is loaded automatically to a Neuron device.
- **device_id** (`int`, defaults to 0) --
  Index of NeuronCore to load the traced model to.</paramsdesc><paramgroups>0</paramgroups></docstring>

Loads a TorchScript module compiled by neuron(x)-cc compiler. It will be first loaded onto CPU and then moved to
one or multiple [NeuronCore](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/neuroncores-arch.html).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>remove_padding</name><anchor>optimum.neuron.NeuronTracedModel.remove_padding</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_traced.py#L613</source><parameters>[{"name": "outputs", "val": ": list[torch.Tensor]"}, {"name": "dims", "val": ": list[int]"}, {"name": "indices", "val": ": list[int]"}, {"name": "padding_side", "val": ": typing.Literal['right', 'left'] = 'right'"}]</parameters><paramsdesc>- **outputs** (`list[torch.Tensor]`) --
  List of torch tensors which are inference output.
- **dims** (`list[int]`) --
  List of dimensions in which we slice a tensor.
- **indices** (`list[int]`) --
  List of indices in which we slice a tensor along an axis.
- **padding_side** (`Literal["right", "left"]`, defaults to "right") --
  The side on which the padding has been applied.</paramsdesc><paramgroups>0</paramgroups></docstring>

Removes padding from output tensors.




</div></div>

## Natural Language Processing

The following Neuron model classes are available for natural language processing tasks.

### NeuronModelForFeatureExtraction[[optimum.neuron.NeuronModelForFeatureExtraction]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForFeatureExtraction</name><anchor>optimum.neuron.NeuronModelForFeatureExtraction</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L92</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a BaseModelOutput for feature-extraction tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Feature Extraction model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForFeatureExtraction.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L99</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForFeatureExtraction` forward method, overrides the `__call__` special method. Accepts only the inputs traced during the compilation step. Any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForFeatureExtraction.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForFeatureExtraction

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2-neuronx")
>>> model = NeuronModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2-neuronx")

>>> inputs = tokenizer("Dear Evan Hansen is the winner of six Tony Awards.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
[1, 13, 384]
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForSentenceTransformers[[optimum.neuron.NeuronModelForSentenceTransformers]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForSentenceTransformers</name><anchor>optimum.neuron.NeuronModelForSentenceTransformers</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L144</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model for Sentence Transformers.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Sentence Transformers model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForSentenceTransformers.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L152</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "pixel_values", "val": ": torch.Tensor | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForSentenceTransformers` forward method, overrides the `__call__` special method. Accepts only the inputs traced during the compilation step. Any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSentenceTransformers.forward.example">

Text Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSentenceTransformers

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bge-base-en-v1.5-neuronx")
>>> model = NeuronModelForSentenceTransformers.from_pretrained("optimum/bge-base-en-v1.5-neuronx")

>>> inputs = tokenizer("In the smouldering promise of the fall of Troy, a mythical world of gods and mortals rises from the ashes.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> token_embeddings = outputs.token_embeddings
>>> sentence_embedding = outputs.sentence_embedding
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSentenceTransformers.forward.example-2">

Image Example:

```python
>>> from PIL import Image
>>> from transformers import AutoProcessor
>>> from sentence_transformers import util
>>> from optimum.neuron import NeuronModelForSentenceTransformers

>>> processor = AutoProcessor.from_pretrained("optimum/clip_vit_emb_neuronx")
>>> model = NeuronModelForSentenceTransformers.from_pretrained("optimum/clip_vit_emb_neuronx")
>>> util.http_get("https://github.com/UKPLab/sentence-transformers/raw/master/examples/sentence_transformer/applications/image-search/two_dogs_in_snow.jpg", "two_dogs_in_snow.jpg")
>>> inputs = processor(
>>>     text=["Two dogs in the snow", 'A cat on a table', 'A picture of London at night'], images=Image.open("two_dogs_in_snow.jpg"), return_tensors="pt", padding=True
>>> )

>>> outputs = model(**inputs)
>>> cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds)  # Compute cosine similarities
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForMaskedLM[[optimum.neuron.NeuronModelForMaskedLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForMaskedLM</name><anchor>optimum.neuron.NeuronModelForMaskedLM</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L210</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a MaskedLMOutput for masked language modeling tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Masked language model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForMaskedLM.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L217</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForMaskedLM` forward method, overrides the `__call__` special method. Accepts only the inputs traced during the compilation step. Any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForMaskedLM.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/legal-bert-base-uncased-neuronx")
>>> model = NeuronModelForMaskedLM.from_pretrained("optimum/legal-bert-base-uncased-neuronx")

>>> inputs = tokenizer("This [MASK] Agreement is between General Motors and John Murray.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 13, 30522]
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForSequenceClassification[[optimum.neuron.NeuronModelForSequenceClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForSequenceClassification</name><anchor>optimum.neuron.NeuronModelForSequenceClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L304</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Sequence Classification model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForSequenceClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L311</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForSequenceClassification` forward method, overrides the `__call__` special method. Accepts only the inputs traced during the compilation step. Any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSequenceClassification.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english-neuronx")
>>> model = NeuronModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english-neuronx")

>>> inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 2]
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForQuestionAnswering[[optimum.neuron.NeuronModelForQuestionAnswering]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForQuestionAnswering</name><anchor>optimum.neuron.NeuronModelForQuestionAnswering</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L256</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a QuestionAnsweringModelOutput for extractive question-answering tasks like SQuAD.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Question Answering model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForQuestionAnswering.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L263</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForQuestionAnswering` forward method, overrides the `__call__` special method. Accepts only the inputs traced during the compilation step. Any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForQuestionAnswering.forward.example">

Example:

```python
>>> import torch
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2-neuronx")
>>> model = NeuronModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2-neuronx")

>>> question, text = "Are there wheelchair spaces in the theatres?", "Yes, we have reserved wheelchair spaces with a good view."
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([12])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForTokenClassification[[optimum.neuron.NeuronModelForTokenClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForTokenClassification</name><anchor>optimum.neuron.NeuronModelForTokenClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L351</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Token Classification model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForTokenClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L358</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 for tokens that are **sentence A**,
  - 1 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForTokenClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForTokenClassification.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER-neuronx")
>>> model = NeuronModelForTokenClassification.from_pretrained("optimum/bert-base-NER-neuronx")

>>> inputs = tokenizer("Lin-Manuel Miranda is an American songwriter, actor, singer, filmmaker, and playwright.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 20, 9]
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForMultipleChoice[[optimum.neuron.NeuronModelForMultipleChoice]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForMultipleChoice</name><anchor>optimum.neuron.NeuronModelForMultipleChoice</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L399</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Multiple choice model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForMultipleChoice.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L406</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, num_choices, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, num_choices, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, num_choices, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 for tokens that are **sentence A**,
  - 1 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForMultipleChoice` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForMultipleChoice.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased_SWAG-neuronx")
>>> model = NeuronModelForMultipleChoice.from_pretrained("optimum/bert-base-uncased_SWAG-neuronx")

>>> num_choices = 4
>>> first_sentence = ["Members of the procession walk down the street holding small horn brass instruments."] * num_choices
>>> second_sentence = [
...     "A drum line passes by walking down the street playing their instruments.",
...     "A drum line has heard approaching them.",
...     "A drum line arrives and they're outside dancing and asleep.",
...     "A drum line turns the lead singer watches the performance."
... ]
>>> inputs = tokenizer(first_sentence, second_sentence, truncation=True, padding=True)

# Unflatten the input values, expanding them to the shape [batch_size, num_choices, seq_length]
>>> for k, v in inputs.items():
...     inputs[k] = [v[i: i + num_choices] for i in range(0, len(v), num_choices)]
>>> inputs = dict(inputs.convert_to_tensors(tensor_type="pt"))
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 4]
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForCausalLM[[optimum.neuron.NeuronModelForCausalLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForCausalLM</name><anchor>optimum.neuron.NeuronModelForCausalLM</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_decoder.py#L106</source><parameters>[{"name": "device", "val": ": device = device(type='cpu')"}]</parameters></docstring>

Neuron model with a causal language modeling head for inference on Neuron devices.

This model inherits from `~neuron.NeuronModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForCausalLM.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_base.py#L42</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
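This class ships no example in this reference, so here is a minimal sketch of the usual export-then-generate flow. The checkpoint name (`Qwen/Qwen2.5-0.5B`) and the compilation arguments (`num_cores`, `auto_cast_type`, `batch_size`, `sequence_length`) are illustrative and may need to be adapted to your model and instance.

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

# Illustrative checkpoint and compilation arguments; adapt them to your model and instance.
compiler_args = {"num_cores": 2, "auto_cast_type": "bf16"}
input_shapes = {"batch_size": 1, "sequence_length": 2048}
model = NeuronModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B", export=True, **compiler_args, **input_shapes
)
model.save_pretrained("qwen_neuron/")

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
inputs = tokenizer("What is deep learning?", return_tensors="pt")

# Generation runs on the Neuron cores using the static shapes compiled above.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Once exported, the compiled model can be reloaded from the saved directory with `from_pretrained` without recompiling.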


</div></div>

### NeuronModelForSeq2SeqLM[[optimum.neuron.NeuronModelForSeq2SeqLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForSeq2SeqLM</name><anchor>optimum.neuron.NeuronModelForSeq2SeqLM</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_seq2seq.py#L446</source><parameters>[{"name": "encoder", "val": ": ScriptModule"}, {"name": "decoder", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "encoder_file_name", "val": ": str | None = 'model.neuron'"}, {"name": "decoder_file_name", "val": ": str | None = 'model.neuron'"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig'] | None = None"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig'] | None = None"}, {"name": "generation_config", "val": ": transformers.generation.configuration_utils.GenerationConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **encoder** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module of the encoder with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.
- **decoder** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module of the decoder with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.
- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Sequence-to-sequence model with a language modeling head for text2text-generation tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForSeq2SeqLM.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_seq2seq.py#L450</source><parameters>[{"name": "attention_mask", "val": ": torch.FloatTensor | None = None"}, {"name": "decoder_input_ids", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_attention_mask", "val": ": torch.BoolTensor | None = None"}, {"name": "encoder_outputs", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "beam_scores", "val": ": torch.FloatTensor | None = None"}, {"name": "return_dict", "val": ": bool = False"}, {"name": "output_attentions", "val": ": bool = False"}, {"name": "output_hidden_states", "val": ": bool = False"}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)</paramsdesc><paramgroups>0</paramgroups></docstring>
The [NeuronModelForSeq2SeqLM](/docs/optimum.neuron/v0.4.0/en/model_doc/modeling_auto#optimum.neuron.NeuronModelForSeq2SeqLM) forward method overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.

</Tip>



*(The following models are compiled with the neuronx compiler and can only be run on Inf2.)*
<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSeq2SeqLM.forward.example">

Example of text-to-text generation:

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSeq2SeqLM
# export
neuron_model = NeuronModelForSeq2SeqLM.from_pretrained(
    "google-t5/t5-small", export=True, dynamic_batch_size=False, batch_size=1, sequence_length=64, num_beams=4
)
neuron_model.save_pretrained("t5_small_neuronx")
del neuron_model

# inference
neuron_model = NeuronModelForSeq2SeqLM.from_pretrained("t5_small_neuronx")
tokenizer = AutoTokenizer.from_pretrained("t5_small_neuronx")
inputs = tokenizer("translate English to German: Lets eat good food.", return_tensors="pt")

output = neuron_model.generate(
    **inputs,
    num_return_sequences=1,
)
results = [tokenizer.decode(t, skip_special_tokens=True) for t in output]
```

</ExampleCodeBlock>

*(For large models, tensor parallelism is needed to fit the weights into the Neuron cores. Below is an example run on an `inf2.24xlarge` instance.)*
<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSeq2SeqLM.forward.example-2">

Example of text-to-text generation with tensor parallelism:
```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSeq2SeqLM
# export
if __name__ == "__main__":  # required for parallel tracing, since the API spawns multiple processes
    neuron_model = NeuronModelForSeq2SeqLM.from_pretrained(
        "google/flan-t5-xl", export=True, tensor_parallel_size=8, dynamic_batch_size=False, batch_size=1, sequence_length=128, num_beams=4,
    )
    neuron_model.save_pretrained("flan_t5_xl_neuronx_tp8")
    del neuron_model
# inference
neuron_model = NeuronModelForSeq2SeqLM.from_pretrained("flan_t5_xl_neuronx_tp8")
tokenizer = AutoTokenizer.from_pretrained("flan_t5_xl_neuronx_tp8")
inputs = tokenizer("translate English to German: Lets eat good food.", return_tensors="pt")

output = neuron_model.generate(
    **inputs,
    num_return_sequences=1,
)
results = [tokenizer.decode(t, skip_special_tokens=True) for t in output]
```

</ExampleCodeBlock>


</div></div>

## Computer Vision

The following Neuron model classes are available for computer vision tasks.

### NeuronModelForImageClassification[[optimum.neuron.NeuronModelForImageClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForImageClassification</name><anchor>optimum.neuron.NeuronModelForImageClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L446</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model for image-classification tasks. This class officially supports beit, convnext, convnextv2, deit, levit, mobilenet_v2, mobilevit, vit, etc.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForImageClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L460</source><parameters>[{"name": "pixel_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.Tensor | None` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoImageProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoImageProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForImageClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForImageClassification.forward.example">

Example:

```python
>>> import requests
>>> from PIL import Image
>>> from optimum.neuron import NeuronModelForImageClassification
>>> from transformers import AutoImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoImageProcessor.from_pretrained("optimum/vit-base-patch16-224-neuronx")
>>> model = NeuronModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224-neuronx")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_label = logits.argmax(-1).item()
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForSemanticSegmentation[[optimum.neuron.NeuronModelForSemanticSegmentation]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForSemanticSegmentation</name><anchor>optimum.neuron.NeuronModelForSemanticSegmentation</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L493</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a semantic segmentation head on top, e.g. for Pascal VOC.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model for semantic-segmentation, with an all-MLP decode head on top e.g. for ADE20k, CityScapes. This class officially supports mobilevit, mobilenet-v2, etc.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForSemanticSegmentation.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L507</source><parameters>[{"name": "pixel_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.Tensor | None` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoImageProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoImageProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForSemanticSegmentation` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSemanticSegmentation.forward.example">

Example:

```python
>>> import requests
>>> from PIL import Image
>>> from optimum.neuron import NeuronModelForSemanticSegmentation
>>> from transformers import AutoImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoImageProcessor.from_pretrained("optimum/deeplabv3-mobilevit-small-neuronx")
>>> model = NeuronModelForSemanticSegmentation.from_pretrained("optimum/deeplabv3-mobilevit-small-neuronx")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForObjectDetection[[optimum.neuron.NeuronModelForObjectDetection]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForObjectDetection</name><anchor>optimum.neuron.NeuronModelForObjectDetection</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L540</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with object detection heads on top, for tasks such as COCO detection.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model for object-detection, with object detection heads on top, for tasks such as COCO detection.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForObjectDetection.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L554</source><parameters>[{"name": "pixel_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.Tensor | None` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoImageProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoImageProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForObjectDetection` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForObjectDetection.forward.example">

Example:

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from optimum.neuron import NeuronModelForObjectDetection
>>> from transformers import AutoImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
>>> model = NeuronModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = preprocessor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
```

</ExampleCodeBlock>


</div></div>

## Audio

The following Neuron model classes are available for audio tasks.

### NeuronModelForAudioClassification[[optimum.neuron.NeuronModelForAudioClassification]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForAudioClassification</name><anchor>optimum.neuron.NeuronModelForAudioClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L589</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with an audio classification head.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model for audio-classification, with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForAudioClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L597</source><parameters>[{"name": "input_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForAudioClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForAudioClassification.forward.example">

Example:

```python
>>> from transformers import AutoProcessor
>>> from optimum.neuron import NeuronModelForAudioClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoProcessor.from_pretrained("Jingya/wav2vec2-large-960h-lv60-self-neuronx-audio-classification")
>>> model = NeuronModelForAudioClassification.from_pretrained("Jingya/wav2vec2-large-960h-lv60-self-neuronx-audio-classification")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> logits = model(**inputs).logits
>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForAudioFrameClassification[[optimum.neuron.NeuronModelForAudioFrameClassification]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForAudioFrameClassification</name><anchor>optimum.neuron.NeuronModelForAudioFrameClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L630</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with an audio frame classification head.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model with a frame classification head on top for tasks like Speaker Diarization.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForAudioFrameClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L637</source><parameters>[{"name": "input_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForAudioFrameClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForAudioFrameClassification.forward.example">

Example:

```python
>>> from transformers import AutoProcessor
>>> from optimum.neuron import NeuronModelForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoProcessor.from_pretrained("Jingya/wav2vec2-base-superb-sd-neuronx")
>>> model = NeuronModelForAudioFrameClassification.from_pretrained("Jingya/wav2vec2-base-superb-sd-neuronx")

>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> logits = model(**inputs).logits

>>> probabilities = torch.sigmoid(logits[0])
>>> labels = (probabilities > 0.5).long()
>>> labels[0].tolist()
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForCTC[[optimum.neuron.NeuronModelForCTC]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForCTC</name><anchor>optimum.neuron.NeuronModelForCTC</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L670</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a connectionist temporal classification head.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model with a language modeling head on top for Connectionist Temporal Classification (CTC).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForCTC.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L678</source><parameters>[{"name": "input_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForCTC` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForCTC.forward.example">

Example:

```python
>>> from transformers import AutoProcessor
>>> from optimum.neuron import NeuronModelForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("Jingya/wav2vec2-large-960h-lv60-self-neuronx-ctc")
>>> model = NeuronModelForCTC.from_pretrained("Jingya/wav2vec2-large-960h-lv60-self-neuronx-ctc")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> transcription = processor.batch_decode(predicted_ids)
```

</ExampleCodeBlock>
<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForCTC.forward.example-2">

Example using `optimum.neuron.pipeline`:

```python
>>> from transformers import AutoProcessor
>>> from datasets import load_dataset
>>> from optimum.neuron import NeuronModelForCTC, pipeline

>>> processor = AutoProcessor.from_pretrained("Jingya/wav2vec2-large-960h-lv60-self-neuronx-ctc")
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")

>>> model = NeuronModelForCTC.from_pretrained("Jingya/wav2vec2-large-960h-lv60-self-neuronx-ctc")
>>> asr = pipeline("automatic-speech-recognition", model=model, feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer)
```

</ExampleCodeBlock>


</div></div>

### NeuronModelForXVector[[optimum.neuron.NeuronModelForXVector]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForXVector</name><anchor>optimum.neuron.NeuronModelForXVector</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L711</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with an XVector feature extraction head on top for tasks like Speaker Verification.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Neuron Model with an XVector feature extraction head on top for tasks like Speaker Verification.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForXVector.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L718</source><parameters>[{"name": "input_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForXVector` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForXVector.forward.example">

Example:

```python
>>> from transformers import AutoProcessor
>>> from optimum.neuron import NeuronModelForXVector
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoProcessor.from_pretrained("Jingya/wav2vec2-base-superb-sv-neuronx")
>>> model = NeuronModelForXVector.from_pretrained("Jingya/wav2vec2-base-superb-sv-neuronx")

>>> inputs = feature_extractor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> embeddings = model(**inputs).embeddings

>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7
>>> if similarity < threshold:
...     print("Speakers are not the same!")
>>> round(similarity.item(), 2)
```

</ExampleCodeBlock>


</div></div>

## Stable Diffusion

The following Neuron model classes are available for stable diffusion tasks.

### NeuronStableDiffusionPipeline[[optimum.neuron.NeuronStableDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1536</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
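No example is included for this pipeline in this reference, so below is a minimal text-to-image sketch; the checkpoint (`stabilityai/stable-diffusion-2-1`) and the static input shapes (`batch_size`, `height`, `width`) are illustrative and should be adapted to your use case.

```python
from optimum.neuron import NeuronStableDiffusionPipeline

# Illustrative checkpoint and static shapes; the pipeline is compiled for these exact sizes.
model_id = "stabilityai/stable-diffusion-2-1"
input_shapes = {"batch_size": 1, "height": 512, "width": 512}

pipeline = NeuronStableDiffusionPipeline.from_pretrained(model_id, export=True, **input_shapes)
pipeline.save_pretrained("sd_neuron/")  # reuse the compiled artifacts later without re-exporting

prompt = "a photo of an astronaut riding a horse on mars"
image = pipeline(prompt).images[0]
image.save("astronaut.png")
```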


</div></div>

### NeuronStableDiffusionImg2ImgPipeline[[optimum.neuron.NeuronStableDiffusionImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionImg2ImgPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1549</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
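A minimal image-to-image sketch for this pipeline, assuming an illustrative checkpoint (`nitrosocke/Ghibli-Diffusion`), input image URL, and static shapes; adapt them to your own model and images.

```python
from diffusers.utils import load_image
from optimum.neuron import NeuronStableDiffusionImg2ImgPipeline

# Illustrative checkpoint, input image URL, and static shapes.
model_id = "nitrosocke/Ghibli-Diffusion"
input_shapes = {"batch_size": 1, "height": 512, "width": 512}
pipeline = NeuronStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export=True, **input_shapes)

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = load_image(url).convert("RGB")

prompt = "ghibli style, a fantasy landscape with snowcapped mountains, trees, lake with detailed reflection"
image = pipeline(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]
image.save("fantasy_landscape.png")
```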


</div></div>

### NeuronStableDiffusionInpaintPipeline[[optimum.neuron.NeuronStableDiffusionInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionInpaintPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1554</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
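A minimal inpainting sketch for this pipeline; the checkpoint, image/mask URLs, and static shapes below are illustrative placeholders, to be replaced with your own inpainting model and data.

```python
from diffusers.utils import load_image
from optimum.neuron import NeuronStableDiffusionInpaintPipeline

# Illustrative inpainting checkpoint, example images, and static shapes.
model_id = "runwayml/stable-diffusion-inpainting"
input_shapes = {"batch_size": 1, "height": 512, "width": 512}
pipeline = NeuronStableDiffusionInpaintPipeline.from_pretrained(model_id, export=True, **input_shapes)

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")  # white pixels mark the region to repaint

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("cat_on_bench.png")
```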


</div></div>

### NeuronLatentConsistencyModelPipeline[[optimum.neuron.NeuronLatentConsistencyModelPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronLatentConsistencyModelPipeline</name><anchor>optimum.neuron.NeuronLatentConsistencyModelPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1567</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronLatentConsistencyModelPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

### NeuronStableDiffusionControlNetPipeline[[optimum.neuron.NeuronStableDiffusionControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionControlNetPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionControlNetPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1572</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/pipelines/diffusers/pipeline_controlnet.py#L33</source><parameters>[{"name": "prompt", "val": ": str | list[str] | None = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": list[int] | None = None"}, {"name": "sigmas", "val": ": list[float] | None = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": str | list[str] | None = None"}, {"name": "num_images_per_prompt", "val": ": int | None = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": torch._C.Generator | list[torch._C.Generator] | None = None"}, {"name": "latents", "val": ": torch.Tensor | None = None"}, {"name": "prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "negative_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": list[torch.Tensor] | None = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "controlnet_conditioning_scale", "val": ": float | list[float] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": float | list[float] = 0.0"}, {"name": "control_guidance_end", "val": ": float | list[float] = 1.0"}, {"name": "clip_skip", "val": ": int | None = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": list[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`"PipelineImageInput" | None`, defaults to `None`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet. When `prompt` is a list, and if a list of images is passed for a single
  ControlNet, each will be paired with each prompt in the `prompt` list. This also applies to multiple
  ControlNets, where a list of image lists can be passed to batch for each prompt and each ControlNet.
- **num_inference_steps** (`int`, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`list[int] | None`, defaults to `None`) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`list[float] | None`, defaults to `None`) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, defaults to 1) --
  The number of images to generate per prompt. If it differs from the batch size used for compilation,
  it will be overridden by the static batch size of Neuron (except when dynamic batching is enabled).
- **eta** (`float`, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
  to the `diffusers.schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator | list[torch.Generator] | None`, defaults to `None`) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput | None`, defaults to `None`) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`list[torch.Tensor] | None`, defaults to `None`) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict[str, Any] | None`, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float | list[float]`, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float | list[float]`, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float | list[float]`, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int | None`, defaults to `None`) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable[[int, int, dict], None] | PipelineCallback | MultiPipelineCallbacks | None`, defaults to `None`) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`list[str]`, defaults to `["latents"]`) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.
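
Below is a minimal sketch of calling a pre-compiled ControlNet pipeline. It assumes the pipeline has already been compiled and saved locally; the `sd_controlnet_neuron/` directory and the conditioning image path are placeholders, and only parameters documented above are used.

```python
from diffusers.utils import load_image
from optimum.neuron import NeuronStableDiffusionControlNetPipeline

# Load a pipeline that was previously compiled and saved (placeholder path)
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained("sd_controlnet_neuron/")

# ControlNet conditioning image, e.g. a canny edge map (placeholder path or URL)
canny_image = load_image("canny_edges.png")

image = pipe(
    prompt="the mona lisa, futuristic cityscape in the background",
    image=canny_image,
    num_inference_steps=50,
    controlnet_conditioning_scale=0.8,
).images[0]
image.save("controlnet_result.png")
```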








</div></div>

### NeuronPixArtAlphaPipeline[[optimum.neuron.NeuronPixArtAlphaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronPixArtAlphaPipeline</name><anchor>optimum.neuron.NeuronPixArtAlphaPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1579</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronPixArtAlphaPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

### NeuronStableDiffusionXLPipeline[[optimum.neuron.NeuronStableDiffusionXLPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1597</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

### NeuronStableDiffusionXLImg2ImgPipeline[[optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1610</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

### NeuronStableDiffusionXLInpaintPipeline[[optimum.neuron.NeuronStableDiffusionXLInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLInpaintPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1617</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

### NeuronStableDiffusionXLControlNetPipeline[[optimum.neuron.NeuronStableDiffusionXLControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLControlNetPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLControlNetPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1624</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/pipelines/diffusers/pipeline_controlnet_sd_xl.py#L37</source><parameters>[{"name": "prompt", "val": ": str | list[str] | None = None"}, {"name": "prompt_2", "val": ": str | list[str] | None = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": list[int] | None = None"}, {"name": "sigmas", "val": ": list[float] | None = None"}, {"name": "denoising_end", "val": ": float | None = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": str | list[str] | None = None"}, {"name": "negative_prompt_2", "val": ": str | list[str] | None = None"}, {"name": "num_images_per_prompt", "val": ": int | None = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": torch._C.Generator | list[torch._C.Generator] | None = None"}, {"name": "latents", "val": ": torch.Tensor | None = None"}, {"name": "prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "negative_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "pooled_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": list[torch.Tensor] | None = None"}, {"name": "output_type", "val": ": str | None = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "controlnet_conditioning_scale", "val": ": float | list[float] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": float | list[float] = 0.0"}, {"name": "control_guidance_end", "val": ": float | list[float] = 1.0"}, {"name": "original_size", "val": ": tuple[int, int] | None = None"}, {"name": "crops_coords_top_left", "val": ": tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": tuple[int, int] | None = None"}, {"name": "negative_original_size", "val": ": tuple[int, int] | None = None"}, {"name": "negative_crops_coords_top_left", "val": ": tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": tuple[int, int] | None = None"}, {"name": "clip_skip", "val": ": int | None = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": list[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str | list[str]`, defaults to `None`) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str | list[str]`, defaults to `None`) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`PipelineImageInput | None`, defaults to `None`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **num_inference_steps** (`int`, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`list[int] | None`, defaults to `None`) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used. Must be in descending order.
- **sigmas** (`list[float] | None`, defaults to `None`) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float | None`, defaults to `None`) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
  and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
  to the `diffusers.schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator | list[torch.Generator] | None`, defaults to `None`) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, pooled text embeddings are generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
  weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput | None`, defaults to `None`) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`list[torch.Tensor] | None`, defaults to `None`) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str | None`, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict[str, Any] | None`, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float | list[float]`, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float | list[float]`, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float | list[float]`, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **original_size** (`tuple[int, int] | None`, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size`, the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`tuple[int, int]`, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`tuple[int, int] | None`, defaults to `None`) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified, it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`tuple[int, int] | None`, defaults to `None`) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`tuple[int, int]`, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`tuple[int, int] | None`, defaults to `None`) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int | None`, defaults to `None`) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable[[int, int, dict], None] | PipelineCallback | MultiPipelineCallbacks | None`, defaults to `None`) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during the inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`list[str]`, defaults to `["latents"]`) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned containing the output images.</retdesc></docstring>

The call function to the pipeline for generation.


Examples:
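
A minimal sketch, assuming a pipeline combining SDXL and a ControlNet has already been compiled and saved locally (the `sdxl_controlnet_neuron/` directory and the depth map path are placeholders); only parameters documented above are used:

```python
from diffusers.utils import load_image
from optimum.neuron import NeuronStableDiffusionXLControlNetPipeline

# Load a pipeline that was previously compiled and saved (placeholder path)
pipe = NeuronStableDiffusionXLControlNetPipeline.from_pretrained("sdxl_controlnet_neuron/")

# ControlNet conditioning image, e.g. a depth map (placeholder path or URL)
depth_image = load_image("depth_map.png")

image = pipe(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    negative_prompt="low quality, bad quality, sketches",
    image=depth_image,
    controlnet_conditioning_scale=0.5,
).images[0]
image.save("sdxl_controlnet_result.png")
```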






</div></div>

### IP-Adapter
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/ip_adapter.md

# IP-Adapter

## Overview

[IP-Adapter](https://hf.co/papers/2308.06721) is an image prompt adapter that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. Furthermore, this adapter can be reused with other models finetuned from the same base model and it can be combined with other adapters like [ControlNet](../using-diffusers/controlnet). The key idea behind IP-Adapter is the *decoupled cross-attention* mechanism which adds a separate cross-attention layer just for image features instead of using the same cross-attention layer for both text and image features. This allows the model to learn more image-specific features.

🤗 `Optimum` extends `Diffusers` to support inference on the second generation of Neuron devices (powering Trainium and Inferentia 2). It aims to bring the same ease of use as Diffusers to Neuron.

## Export to Neuron

To deploy models, you will need to compile them to TorchScript optimized for AWS Neuron.

You can compile and export a Stable Diffusion checkpoint either with the Optimum CLI or with the `NeuronStableDiffusionPipeline` class.

### Option 1: CLI

Here is an example of exporting Stable Diffusion components with the Optimum CLI:

```bash
optimum-cli export neuron --model stable-diffusion-v1-5/stable-diffusion-v1-5 \
    --ip_adapter_id h94/IP-Adapter \
    --ip_adapter_subfolder models \
    --ip_adapter_weight_name ip-adapter-full-face_sd15.bin \
    --ip_adapter_scale 0.5 \
    --batch_size 1 --height 512 --width 512 --num_images_per_prompt 1 \
    --auto_cast matmul --auto_cast_type bf16 ip_adapter_neuron/
```

> [!TIP]
> We recommend using an `inf2.8xlarge` or a larger instance for model compilation. You can also compile the model with the Optimum CLI on a CPU-only instance (requires ~35 GB of memory), and then run the pre-compiled model on `inf2.xlarge` to reduce costs. In that case, don't forget to disable inference validation by adding the `--disable-validation` argument.

### Option 2: Python API

Here is an example of exporting Stable Diffusion components with the `NeuronStableDiffusionPipeline` class:

```python
from optimum.neuron import NeuronStableDiffusionPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
input_shapes = {"batch_size": 1, "height": 512, "width": 512}

stable_diffusion = NeuronStableDiffusionPipeline.from_pretrained(
    model_id,
    export=True,
    ip_adapter_id="h94/IP-Adapter",
    ip_adapter_subfolder="models",
    ip_adapter_weight_name="ip-adapter-full-face_sd15.bin",
    ip_adapter_scale=0.5,
    **compiler_args,
    **input_shapes,
)

# Save locally or upload to the HuggingFace Hub
save_directory = "ip_adapter_neuron/"
stable_diffusion.save_pretrained(save_directory)
```
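
If you would rather share the compiled artifacts on the Hugging Face Hub instead of keeping them locally, you can also push them, mirroring the `push_to_hub` usage shown for the PixArt pipelines below (the repository id here is a placeholder):

```python
# Upload the compiled artifacts to the Hugging Face Hub
# (replace the placeholder repository_id with your own repo)
stable_diffusion.push_to_hub(save_directory, repository_id="my-username/sd15-ip-adapter-neuronx")
```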

## Text-to-Image

* With `ip_adapter_image` as input

```python
import torch
from diffusers.utils import load_image
from optimum.neuron import NeuronStableDiffusionPipeline

# Load the pipeline compiled and saved in the previous section
stable_diffusion = NeuronStableDiffusionPipeline.from_pretrained("ip_adapter_neuron/")

# Reference image used as the image prompt (replace with your own image path or URL)
image = load_image("ip_adapter_reference_image.png")
generator = torch.Generator().manual_seed(0)

images = stable_diffusion(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images[0]

images.save("polar_bear.png")
```

* With `ip_adapter_image_embeds` as input (encode the image first)

```python
# Reuses `stable_diffusion`, `image` and `generator` from the previous snippet
image_embeds = stable_diffusion.prepare_ip_adapter_image_embeds(
    ip_adapter_image=image,
    ip_adapter_image_embeds=None,
    device=None,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)
torch.save(image_embeds, "image_embeds.ipadpt")

image_embeds = torch.load("image_embeds.ipadpt")
images = stable_diffusion(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image_embeds=image_embeds,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images[0]

images.save("polar_bear.png")
```

Are there any other diffusion features that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue in the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [Hugging Face's community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗!

### PixArt-Σ
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/pixart_sigma.md

# PixArt-Σ

## Overview

[PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation](https://huggingface.co/papers/2403.04692) is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li.

Some notes about this pipeline:

* It uses a Transformer backbone (instead of a UNet) for denoising. As such, its architecture is similar to [DiT](https://hf.co/docs/transformers/model_doc/dit).
* It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details.
* It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found [here](https://github.com/PixArt-alpha/PixArt-sigma/blob/master/diffusion/data/datasets/utils.py).
* It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as PixArt-α, Stable Diffusion XL, Playground V2.0 and DALL-E 3, while being more efficient than them.
* It can generate very high-resolution images, such as 2048px or even 4K.
* It shows that text-to-image models can grow from a weak model into a stronger one through several improvements (VAE, datasets, and so on).

🤗 `Optimum` extends `Diffusers` to support inference on the second generation of Neuron devices (powering Trainium and Inferentia 2). It aims to bring the same ease of use as Diffusers to Neuron.

## Export to Neuron

To deploy models in the PixArt-Σ pipeline, you will need to compile them to TorchScript optimized for AWS Neuron. There are four components that need to be exported to the `.neuron` format to boost performance:

* Text encoder
* Transformer
* VAE encoder
* VAE decoder

You can compile and export a PixArt-Σ checkpoint either with the Optimum CLI or with the `NeuronPixArtSigmaPipeline` class.

### Option 1: CLI

```bash
optimum-cli export neuron --model Jingya/pixart_sigma_pipe_xl_2_512_ms --batch_size 1 --height 512 --width 512 --num_images_per_prompt 1 --torch_dtype bfloat16 --sequence_length 120 pixart_sigma_neuron_512/
```

> [!TIP]
> We recommend using an `inf2.8xlarge` or a larger instance for model compilation. You can also compile the model with the Optimum CLI on a CPU-only instance (requires ~35 GB of memory), and then run the pre-compiled model on `inf2.xlarge` to reduce costs. In that case, don't forget to disable inference validation by adding the `--disable-validation` argument.

### Option 2: Python API

```python
import torch
from optimum.neuron import NeuronPixArtSigmaPipeline

# Compile
compiler_args = {"auto_cast": "none"}
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "sequence_length": 120}

neuron_model = NeuronPixArtSigmaPipeline.from_pretrained("Jingya/pixart_sigma_pipe_xl_2_512_ms", torch_dtype=torch.bfloat16, export=True, disable_neuron_cache=True, **compiler_args, **input_shapes)

# Save locally
neuron_model.save_pretrained("pixart_sigma_neuron_512/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "pixart_sigma_neuron_512/", repository_id="optimum/pixart_sigma_pipe_xl_2_512_ms_neuronx"  # Replace with your HF Hub repo id
)
```

## Text-to-Image

The `NeuronPixArtSigmaPipeline` class allows you to generate images from a text prompt on Neuron devices, similar to the experience with `Diffusers`.

With pre-compiled PixArt-Σ models, you can now generate an image from a prompt on Neuron:

```python
from optimum.neuron import NeuronPixArtSigmaPipeline

neuron_model = NeuronPixArtSigmaPipeline.from_pretrained("pixart_sigma_neuron_512/")
prompt = "Oppenheimer sits on the beach on a chair, watching a nuclear exposition with a huge mushroom cloud, 120mm."
image = neuron_model(prompt=prompt).images[0]
```

<img
  src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/pixart-sigma-oppenheimer.png"
  width="256"
  height="256"
  alt="PixArt-Σ generated image."
/>

## NeuronPixArtSigmaPipeline[[optimum.neuron.NeuronPixArtSigmaPipeline]]

Pipeline for text-to-image generation using PixArt-Σ.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronPixArtSigmaPipeline</name><anchor>optimum.neuron.NeuronPixArtSigmaPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1588</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronPixArtSigmaPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

Are there any other diffusion features that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue in the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [Hugging Face's community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗!

### Load adapters
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/lora.md

# Load adapters

There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different.

This guide will show you how to load LoRA weights.

## LoRA

Low-Rank Adaptation (LoRA) is a fast and lightweight way to adapt the style of images generated with Stable Diffusion. In Optimum Neuron, we support using one or multiple LoRA adapters by fusing their parameters into the original parameters of the text encoder(s) and the UNet during compilation. Below is an example of compiling Stable Diffusion models with the LoRA adapter of your choice and using the compiled artifacts to generate styled images (a sketch of the multi-adapter case is shown after the generated image below):

```python

from diffusers import LCMScheduler
from optimum.neuron import NeuronStableDiffusionPipeline


model_id = "Lykon/dreamshaper-7"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "num_images_per_prompt": 1}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

# Compile
pipe = NeuronStableDiffusionPipeline.from_pretrained(
    model_id,
    export=True,
    inline_weights_to_neff=True,  # caveat: performance drops if the NEFF and weights are separated; this will be improved in a future Neuron SDK release.
    lora_model_ids=adapter_id,
    lora_weight_names="pytorch_lora_weights.safetensors",
    lora_adapter_names="lcm",
    **input_shapes,
    **compiler_args,
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Save locally or upload to the HuggingFace Hub
pipe.save_pretrained("dreamshaper_7_lcm_lora_neuron/")


# Inference
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
```

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/models/03-sd-lora.png"
  width="256"
  height="256"
  alt="stable diffusion generated image with LoRA adapter."
/>
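
As mentioned above, multiple LoRA adapters can also be fused at compilation time. Here is a hedged sketch of what the multi-adapter case could look like, assuming the `lora_*` arguments also accept lists with one entry per adapter; the second adapter id and the adapter names are placeholders, not verified checkpoints:

```python
from optimum.neuron import NeuronStableDiffusionPipeline

model_id = "Lykon/dreamshaper-7"
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "num_images_per_prompt": 1}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

# Hypothetical multi-adapter compilation: the `lora_*` arguments are assumed to
# accept lists, with one entry per adapter (the second adapter is a placeholder).
pipe = NeuronStableDiffusionPipeline.from_pretrained(
    model_id,
    export=True,
    lora_model_ids=["latent-consistency/lcm-lora-sdv1-5", "your-username/your-style-lora"],
    lora_weight_names=["pytorch_lora_weights.safetensors", "pytorch_lora_weights.safetensors"],
    lora_adapter_names=["lcm", "style"],
    **input_shapes,
    **compiler_args,
)
pipe.save_pretrained("dreamshaper_7_multi_lora_neuron/")
```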


Are there any other diffusion features that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue in the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [Hugging Face's community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗!

### PixArt-α
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/pixart_alpha.md

# PixArt-α

## Overview

[PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis](https://huggingface.co/papers/2310.00426) is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li.

Some notes about this pipeline:

* It uses a Transformer backbone (instead of a UNet) for denoising. As such, its architecture is similar to [DiT](./dit).
* It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details.
* It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found [here](https://github.com/PixArt-alpha/PixArt-alpha/blob/08fbbd281ec96866109bdd2cdb75f2f58fb17610/diffusion/data/datasets/utils.py).
* It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them.

You can find the original codebase at [PixArt-alpha/PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha) and all the available checkpoints at [PixArt-alpha](https://huggingface.co/PixArt-alpha).

🤗 `Optimum` extends `Diffusers` to support inference on the second generation of Neuron devices (powering Trainium and Inferentia 2). It aims to bring the ease of use of Diffusers to Neuron.

## Export to Neuron

To deploy models in the PixArt-α pipeline, you will need to compile them to TorchScript optimized for AWS Neuron. There are four components which need to be exported to the `.neuron` format to boost the performance:

* Text encoder
* Transformer
* VAE encoder
* VAE decoder

You can either compile and export a PixArt-α checkpoint via the CLI or with the `NeuronPixArtAlphaPipeline` class.

### Option 1: CLI

```bash
optimum-cli export neuron --model PixArt-alpha/PixArt-XL-2-512x512 --batch_size 1 --height 512 --width 512 --num_images_per_prompt 1 --torch_dtype bfloat16 --sequence_length 120 pixart_alpha_neuron_512/
```

> [!TIP]
> We recommend using an `inf2.8xlarge` or a larger instance for model compilation. You can also compile the model with the Optimum CLI on a CPU-only instance (it needs ~35 GB of memory) and then run the pre-compiled model on an `inf2.xlarge` to reduce expenses. In this case, don't forget to disable the inference validation by adding the `--disable-validation` argument.

### Option 2: Python API

```python
import torch
from optimum.neuron import NeuronPixArtAlphaPipeline

# Compile
compiler_args = {"auto_cast": "none"}
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "sequence_length": 120}

neuron_model = NeuronPixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-512x512",
    torch_dtype=torch.bfloat16,
    export=True,
    disable_neuron_cache=True,
    **compiler_args,
    **input_shapes,
)

# Save locally
neuron_model.save_pretrained("pixart_alpha_neuron_512/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "pixart_alpha_neuron_512/", repository_id="Jingya/PixArt-XL-2-512x512-neuronx"  # Replace with your HF Hub repo id
)
```

## Text-to-Image

The `NeuronPixArtAlphaPipeline` class allows you to generate images from a text prompt on Neuron devices, similar to the experience with `Diffusers`.

With a pre-compiled PixArt-α model, you can now generate an image from a prompt on Neuron:

```python
from optimum.neuron import NeuronPixArtAlphaPipeline

neuron_model = NeuronPixArtAlphaPipeline.from_pretrained("pixart_alpha_neuron_512/")
prompt = "Oppenheimer sits on the beach on a chair, watching a nuclear exposition with a huge mushroom cloud, 120mm."
image = neuron_model(prompt=prompt).images[0]
```

<img
  src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/pixart-alpha-oppenheimer.png"
  width="256"
  height="256"
  alt="PixArt-α generated image."
/>
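
If you pushed the compiled artifacts to the Hub as shown in the export example, you can also load them back directly from your repository. A minimal sketch, assuming the repo id used above (replace it with your own):

```python
from optimum.neuron import NeuronPixArtAlphaPipeline

# Load the pre-compiled pipeline straight from the Hugging Face Hub (no re-compilation)
neuron_model = NeuronPixArtAlphaPipeline.from_pretrained("Jingya/PixArt-XL-2-512x512-neuronx")
image = neuron_model(prompt="A small cactus with a happy face in the Sahara desert.").images[0]
```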

## NeuronPixArtAlphaPipeline[[optimum.neuron.NeuronPixArtAlphaPipeline]]

Pipeline for text-to-image generation using PixArt-α.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronPixArtAlphaPipeline</name><anchor>optimum.neuron.NeuronPixArtAlphaPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1579</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronPixArtAlphaPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

Is there any other diffusion feature that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue on the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [HuggingFace’s community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗 !

### Latent Consistency Models
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/lcm.md

# Latent Consistency Models

## Overview

Latent Consistency Models (LCMs) were proposed in [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://huggingface.co/papers/2310.04378) by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. LCMs enable inference with fewer steps on any pre-trained LDM, including Stable Diffusion and SDXL.

In `optimum-neuron`, you can:
  - Use the class `NeuronLatentConsistencyModelPipeline` to compile and run inference of LCMs distilled from Stable Diffusion (SD) models.
  - And continue to use the class `NeuronStableDiffusionXLPipeline` for LCMs distilled from SDXL models.

Here are examples to compile the LCMs of Stable Diffusion ([SimianLuo/LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7)) and Stable Diffusion XL ([latent-consistency/lcm-sdxl](https://huggingface.co/latent-consistency/lcm-sdxl)), and then run inference on AWS Inferentia 2:

## Export to Neuron

### LCM of Stable Diffusion

```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

model_id = "SimianLuo/LCM_Dreamshaper_v7"
num_images_per_prompt = 1
input_shapes = {"batch_size": 1, "height": 768, "width": 768, "num_images_per_prompt": num_images_per_prompt}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

stable_diffusion = NeuronLatentConsistencyModelPipeline.from_pretrained(
    model_id, export=True, **compiler_args, **input_shapes
)
save_directory = "lcm_sd_neuron/"
stable_diffusion.save_pretrained(save_directory)

# Push to hub
stable_diffusion.push_to_hub(save_directory, repository_id="my-neuron-repo")  # Replace with your repo id, eg. "Jingya/LCM_Dreamshaper_v7_neuronx"
```

### LCM of Stable Diffusion XL

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
unet_id = "latent-consistency/lcm-sdxl"
num_images_per_prompt = 1
input_shapes = {"batch_size": 1, "height": 1024, "width": 1024, "num_images_per_prompt": num_images_per_prompt}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

stable_diffusion = NeuronStableDiffusionXLPipeline.from_pretrained(
    model_id, unet_id=unet_id, export=True, **compiler_args, **input_shapes
)
save_directory = "lcm_sdxl_neuron/"
stable_diffusion.save_pretrained(save_directory)

# Push to hub
stable_diffusion.push_to_hub(save_directory, repository_id="my-neuron-repo")   # Replace with your repo id, eg. "Jingya/lcm-sdxl-neuronx"
```

## Text-to-Image

Now we can generate images from text prompts on Inf2 using the pre-compiled model:

* LCM of Stable Diffusion

```python
from optimum.neuron import NeuronLatentConsistencyModelPipeline

pipe = NeuronLatentConsistencyModelPipeline.from_pretrained("Jingya/LCM_Dreamshaper_v7_neuronx")
prompts = ["Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"] * 2

images = pipe(prompt=prompts, num_inference_steps=4, guidance_scale=8.0).images
```

* LCM of Stable Diffusion XL

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline

pipe = NeuronStableDiffusionXLPipeline.from_pretrained("Jingya/lcm-sdxl-neuronx")
prompts = ["a close-up picture of an old man standing in the rain"] * 2

images = pipe(prompt=prompts, num_inference_steps=4, guidance_scale=8.0).images
```

## NeuronLatentConsistencyModelPipeline[[optimum.neuron.NeuronLatentConsistencyModelPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronLatentConsistencyModelPipeline</name><anchor>optimum.neuron.NeuronLatentConsistencyModelPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1567</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronLatentConsistencyModelPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

Is there any other diffusion feature that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue on the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [HuggingFace’s community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗 !

### InstructPix2Pix
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/pix2pix.md

# InstructPix2Pix

## Overview

[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/papers/2211.09800) is by Tim Brooks, Aleksander Holynski and Alexei A. Efros.


🤗 `Optimum` extends `Diffusers` to support inference on the second generation of Neuron devices (powering Trainium and Inferentia 2). It aims to bring the ease of use of Diffusers to Neuron.

## Export to Neuron

To deploy models, you will need to compile them to TorchScript optimized for AWS Neuron. In the case of Stable Diffusion, there are four components which need to be exported to the `.neuron` format to boost the performance:

* Text encoder
* U-Net
* VAE encoder
* VAE decoder

You can either compile and export a Stable Diffusion checkpoint via the CLI or with the `NeuronStableDiffusionInstructPix2PixPipeline` class.

## Usage Example

With the `NeuronStableDiffusionInstructPix2PixPipeline` class, you can apply instruction-based image editing using both text guidance and image guidance.

```python
import requests
from io import BytesIO
from PIL import Image
from optimum.neuron import NeuronStableDiffusionInstructPix2PixPipeline

def download_image(url):
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")

model_id = "timbrooks/instruct-pix2pix"
input_shapes = {"batch_size": 1, "height": 512, "width": 512}

pipe = NeuronStableDiffusionInstructPix2PixPipeline.from_pretrained(
  model_id, export=True, dynamic_batch_size=True, **input_shapes,
)
pipe.save_pretrained("sd_ip2p/")

img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
init_image = download_image(img_url).resize((512, 512))

prompt = "Add a beautiful sunset"
image = pipe(prompt=prompt, image=init_image).images[0]
image.save("sunset_mountain.png")
```

`image`          | `prompt` | output |
:-------------------------:|:-------------------------:|-------------------------:|
<img src="https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" alt="drawing" width="250"/> | ***Add a beautiful sunset*** | <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/11-sd-ip2p.png" alt="drawing" width="250"/> |
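
Once the compiled pipeline has been saved, later runs can reload it directly from the saved directory instead of re-exporting. A minimal sketch reusing the `sd_ip2p/` directory from above:

```python
import requests
from io import BytesIO
from PIL import Image
from optimum.neuron import NeuronStableDiffusionInstructPix2PixPipeline

# Reload the pre-compiled pipeline (no re-compilation needed)
pipe = NeuronStableDiffusionInstructPix2PixPipeline.from_pretrained("sd_ip2p/")

img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
init_image = Image.open(BytesIO(requests.get(img_url).content)).convert("RGB").resize((512, 512))

image = pipe(prompt="Add a beautiful sunset", image=init_image).images[0]
image.save("sunset_mountain.png")
```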

## NeuronStableDiffusionInstructPix2PixPipeline[[optimum.neuron.NeuronStableDiffusionInstructPix2PixPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionInstructPix2PixPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionInstructPix2PixPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1559</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionInstructPix2PixPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

Is there any other diffusion feature that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue on the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [HuggingFace’s community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗 !

### Flux
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/flux.md

# Flux

Flux is a series of text-to-image generation models based on diffusion transformers.

> [!TIP]
> We recommend using an `inf2.24xlarge` instance with a tensor parallel size of 8 for model compilation and inference.

## Export to Neuron

### Option 1: CLI

```bash
optimum-cli export neuron --model black-forest-labs/FLUX.1-dev --tensor_parallel_size 8 --batch_size 1 --height 1024 --width 1024 --num_images_per_prompt 1 --torch_dtype bfloat16 flux_dev_neuron/
```

### Option 2: Python API

```python
import torch
from optimum.neuron import NeuronFluxPipeline

if __name__ == "__main__":
    compiler_args = {"auto_cast": "none"}
    input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}

    pipe = NeuronFluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        torch_dtype=torch.bfloat16,
        export=True,
        tensor_parallel_size=8,
        **compiler_args,
        **input_shapes
    )

    # Save locally
    pipe.save_pretrained("flux_dev_neuron_1024_tp8/")

    # Upload to the HuggingFace Hub
    pipe.push_to_hub(
        "flux_dev_neuron_1024_tp8/", repository_id="Jingya/FLUX.1-dev-neuronx-1024x1024-tp8"  # Replace with your HF Hub repo id
    )
```

## Guidance-distilled

* The guidance-distilled variant takes about 50 sampling steps for good-quality generation.

```python
import torch
from optimum.neuron import NeuronFluxPipeline

pipe = NeuronFluxPipeline.from_pretrained("flux_dev_neuron_1024_tp8/")
prompt = "A cat holding a sign that says hello world"
out = pipe(
    prompt,
    guidance_scale=3.5,
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
out.save("flux_optimum.png")
```

<img
  src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/flux_optimum.png"
  width="256"
  height="256"
  alt="Flux dev generated image."
/>

## Timestep-distilled

* `max_sequence_length` cannot be more than 256.
* `guidance_scale` needs to be 0.
* As this is a timestep-distilled model, it benefits from fewer sampling steps.

```bash
optimum-cli export neuron --model black-forest-labs/FLUX.1-schnell --tensor_parallel_size 8 --batch_size 1 --height 1024 --width 1024 --num_images_per_prompt 1 --sequence_length 256 --torch_dtype bfloat16 flux_schnell_neuron_1024_tp8/
```

```python
import torch
from optimum.neuron import NeuronFluxPipeline

pipe = NeuronFluxPipeline.from_pretrained("flux_schnell_neuron_1024_tp8")
prompt = "A cat holding a sign that says hello world"
out = pipe(prompt, max_sequence_length=256, num_inference_steps=4).images[0]
```

<img
  src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/flux_schnell_optimum.png"
  width="256"
  height="256"
  alt="Flux schnell generated image."
/>

## NeuronFluxPipeline[[optimum.neuron.NeuronFluxPipeline]]

The Flux pipeline for text-to-image generation.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronFluxPipeline</name><anchor>optimum.neuron.NeuronFluxPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1631</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronFluxPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

## NeuronFluxInpaintPipeline[[optimum.neuron.NeuronFluxInpaintPipeline]]

The Flux pipeline for image inpainting.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronFluxInpaintPipeline</name><anchor>optimum.neuron.NeuronFluxInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1641</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronFluxInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

With `NeuronFluxInpaintPipeline`, pass the original image and a mask marking the area you want to replace; the masked area is then filled with content described by the prompt.

```python
from diffusers.utils import load_image
from optimum.neuron import NeuronFluxInpaintPipeline


pipe = NeuronFluxInpaintPipeline.from_pretrained("Jingya/Flux.1-Schnell-1024x1024-neuronx-tp8")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
source = load_image(img_url)
mask = load_image(mask_url)
images = pipe(prompt=prompt, image=source, mask_image=mask, max_sequence_length=256).images
```

## NeuronFluxKontextPipeline[[optimum.neuron.NeuronFluxKontextPipeline]]

The Flux pipeline for image editing.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronFluxKontextPipeline</name><anchor>optimum.neuron.NeuronFluxKontextPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1636</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronFluxKontextPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

With `NeuronFluxKontextPipeline`, pass the original image and a prompt describing what you want to change about the original image.

```python
from diffusers.utils import load_image
from optimum.neuron import NeuronFluxKontextPipeline


pipe = NeuronFluxKontextPipeline.from_pretrained("Jlonge4/FLUX.1-kontext-neuronx-1024x1024-tp8")
prompt = "Change the cushions in the chair from red to green"
img_url = "https://huggingface.co/datasets/Jlonge4/document_images/resolve/main/flux_optimum.png"
source = load_image(img_url)
images = pipe(prompt=prompt, image=source, guidance_scale=2.5).images
```

| Image | Prompt | Output |
|:-----:|:------:|:------:|
| <img src="https://huggingface.co/datasets/Jlonge4/document_images/resolve/main/flux_optimum.png" alt="red_cushions" width="250"/> | ***Change the cushions in the chair from red to green*** | <img src="https://huggingface.co/datasets/Jlonge4/document_images/resolve/main/flux_optimum_edit.png" alt="green_cushions" width="250"/> |

Is there any other diffusion feature that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue on the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [HuggingFace’s community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗 !

### Stable Diffusion XL Turbo
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/sdxl_turbo.md

# Stable Diffusion XL Turbo

## Overview

SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable of running inference in as little as 1 step ([check `🤗diffusers` for more details](https://huggingface.co/docs/diffusers/using-diffusers/sdxl_turbo)).

In `optimum-neuron`, you can:
  - Use the class `NeuronStableDiffusionXLPipeline` to compile and run inference.

Here we will compile the [`stabilityai/sdxl-turbo`](https://huggingface.co/stabilityai/sdxl-turbo) model with the Optimum CLI.

## Export to Neuron

```bash
optimum-cli export neuron --model stabilityai/sdxl-turbo --batch_size 1 --height 512 --width 512 --auto_cast matmul --auto_cast_type bf16 sdxl_turbo_neuron/
```
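
Alternatively, the same compilation can be done from Python with the `NeuronStableDiffusionXLPipeline` class. The snippet below is a sketch mirroring the CLI arguments above:

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline

model_id = "stabilityai/sdxl-turbo"
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
input_shapes = {"batch_size": 1, "height": 512, "width": 512}

# Compile the SDXL Turbo checkpoint and save the Neuron artifacts locally
pipe = NeuronStableDiffusionXLPipeline.from_pretrained(model_id, export=True, **compiler_args, **input_shapes)
pipe.save_pretrained("sdxl_turbo_neuron/")
```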

## Text-to-Image

Now we can generate images from text prompts on Inf2 using the pre-compiled model:

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline

pipe = NeuronStableDiffusionXLPipeline.from_pretrained("sdxl_turbo_neuron/", data_parallel_mode="all")
prompt = ["Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"] * 2

images = pipe(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images
```

<Tip>

Inf2 instances contain one or more Neuron devices, and each Neuron device includes 2 NeuronCore-v2 cores. With `data_parallel_mode="all"`, the whole pipeline is loaded onto both Neuron cores. This means that when the batch size is divisible by 2, you can fully leverage the compute power of both cores.

</Tip>

Is there any other diffusion feature that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue on the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [HuggingFace’s community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗 !

### Stable Diffusion
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/stable_diffusion.md

# Stable Diffusion

## Overview

Stable Diffusion is a text-to-image _latent diffusion_ model built upon the work of the original [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release), and it was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/).

🤗 `Optimum` extends `Diffusers` to support inference on the second generation of Neuron devices (powering Trainium and Inferentia 2). It aims to bring the ease of use of Diffusers to Neuron.

## Export to Neuron

To deploy models, you will need to compile them to TorchScript optimized for AWS Neuron. In the case of Stable Diffusion, there are four components which need to be exported to the `.neuron` format to boost the performance:

* Text encoder
* U-Net
* VAE encoder
* VAE decoder

You can either compile and export a Stable Diffusion Checkpoint via CLI or `NeuronStableDiffusionPipeline` class.

### Option 1: CLI

Here is an example of exporting the Stable Diffusion components with the `Optimum` CLI:

```bash
optimum-cli export neuron --model stabilityai/stable-diffusion-2-1-base \
  --batch_size 1 \
  --height 512 `# height in pixels of generated image, eg. 512, 768` \
  --width 512 `# width in pixels of generated image, eg. 512, 768` \
  --num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron/
```

> [!TIP]
> We recommend using an `inf2.8xlarge` or a larger instance for model compilation. You can also compile the model with the Optimum CLI on a CPU-only instance (it needs ~35 GB of memory) and then run the pre-compiled model on an `inf2.xlarge` to reduce expenses. In this case, don't forget to disable the inference validation by adding the `--disable-validation` argument.

### Option 2: Python API

Here is an example of exporting stable diffusion components with `NeuronStableDiffusionPipeline`:

<Tip>

To apply the optimized computation of the UNet's attention scores, set the environment variable `NEURON_FUSE_SOFTMAX=1` (e.g. `export NEURON_FUSE_SOFTMAX=1`).

Besides, don't hesitate to tweak the compilation configuration to find the best tradeoff between performance and accuracy for your use case. By default, we suggest casting FP32 matrix multiplication operations to BF16, which offers good performance with a moderate sacrifice in accuracy. Check out the guide in the [AWS Neuron documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html#neuronx-cc-training-mixed-precision) to better understand the options for your compilation.

</Tip>
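
If you launch the export from a Python script rather than a shell, the same variable can be set programmatically before the pipeline is instantiated; a minimal sketch:

```python
import os

# Equivalent to `export NEURON_FUSE_SOFTMAX=1` in the shell; set it before compilation starts
os.environ["NEURON_FUSE_SOFTMAX"] = "1"
```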

```python
>>> from optimum.neuron import NeuronStableDiffusionPipeline

>>> model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
>>> compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
>>> input_shapes = {"batch_size": 1, "height": 512, "width": 512}

>>> stable_diffusion = NeuronStableDiffusionPipeline.from_pretrained(model_id, export=True, **compiler_args, **input_shapes)

# Save locally or upload to the HuggingFace Hub
>>> save_directory = "sd_neuron/"
>>> stable_diffusion.save_pretrained(save_directory)
>>> stable_diffusion.push_to_hub(
...     save_directory, repository_id="my-neuron-repo"
... )
```

## Text-to-Image

The `NeuronStableDiffusionPipeline` class allows you to generate images from a text prompt on Neuron devices, similar to the experience with `Diffusers`.

With pre-compiled Stable Diffusion models, you can now generate an image from a prompt on Neuron:

```python
>>> from optimum.neuron import NeuronStableDiffusionPipeline

>>> stable_diffusion = NeuronStableDiffusionPipeline.from_pretrained("sd_neuron/")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = stable_diffusion(prompt).images[0]
```

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/models/01-sd-image.png"
  width="256"
  height="256"
  alt="stable diffusion generated image"
/>

## Image-to-Image

With the `NeuronStableDiffusionImg2ImgPipeline` class, you can generate a new image conditioned on a text prompt and an initial image.

```python
import requests
from PIL import Image
from io import BytesIO
from optimum.neuron import NeuronStableDiffusionImg2ImgPipeline

# compile & save
model_id = "nitrosocke/Ghibli-Diffusion"
input_shapes = {"batch_size": 1, "height": 512, "width": 512}
pipeline = NeuronStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export=True, **input_shapes)
pipeline.save_pretrained("sd_img2img/")

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))

prompt = "ghibli style, a fantasy landscape with snowcapped mountains, trees, lake with detailed reflection. sunlight and cloud in the sky, warm colors, 8K"

image = pipeline(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]
image.save("fantasy_landscape.png")
```

`image`          | `prompt` | output |
:-------------------------:|:-------------------------:|-------------------------:|
<img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/03-sd-img2img-init.png" alt="landscape photo" width="256" height="256"/> | ***ghibli style, a fantasy landscape with snowcapped mountains, trees, lake with detailed reflection. warm colors, 8K*** | <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/04-sd-img2img.png" alt="drawing" width="250"/> |

## Inpaint

With the `NeuronStableDiffusionInpaintPipeline` class, you can edit specific parts of an image by providing a mask and a text prompt.

```python
import requests
from PIL import Image
from io import BytesIO
from optimum.neuron import NeuronStableDiffusionInpaintPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-inpainting"
input_shapes = {"batch_size": 1, "height": 512, "width": 512}
pipeline = NeuronStableDiffusionInpaintPipeline.from_pretrained(model_id, export=True, **input_shapes)
pipeline.save_pretrained("sd_inpaint/")

def download_image(url):
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("cat_on_bench.png")
```

`image`          | `mask_image` | `prompt` | output |
:-------------------------:|:-------------------------:|:-------------------------:|-------------------------:|
<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="250"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="250"/> | ***Face of a yellow cat, high resolution, sitting on a park bench*** | <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/05-sd-inpaint.png" alt="drawing" width="250"/> |


## NeuronStableDiffusionPipeline[[optimum.neuron.NeuronStableDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1536</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

## NeuronStableDiffusionImg2ImgPipeline[[optimum.neuron.NeuronStableDiffusionImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionImg2ImgPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1549</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

## NeuronStableDiffusionInpaintPipeline[[optimum.neuron.NeuronStableDiffusionInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionInpaintPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1554</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

Is there any other diffusion feature that you would like us to support in 🤗 `Optimum-neuron`? Please file an issue on the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [HuggingFace’s community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗 !

### ControlNet
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/controlnet.md

# ControlNet

ControlNet conditions the Stable Diffusion model with an additional input image. In Optimum Neuron, we support compiling one or multiple ControlNets along with the Stable Diffusion checkpoint, and you can then use the compiled artifacts to generate styled images.

## Export to Neuron

You can compile one or multiple ControlNets either via the Optimum CLI or programmatically with the `NeuronStableDiffusionControlNetPipeline` class by passing `controlnet_ids`.

### Option 1: CLI

```bash
optimum-cli export neuron -m stable-diffusion-v1-5/stable-diffusion-v1-5 --batch_size 1 --height 512 --width 512 --controlnet_ids lllyasviel/sd-controlnet-canny --num_images_per_prompt 1 sd_neuron_controlnet/
```

### Option 2: Python API

```python
from optimum.neuron import NeuronStableDiffusionControlNetPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
controlnet_id = "lllyasviel/sd-controlnet-canny"

# [Neuron] pipeline
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "num_images_per_prompt": 1}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained(
    model_id,
    controlnet_ids=controlnet_id,
    export=True,
    **input_shapes,
    **compiler_args,
)
pipe.save_pretrained("sd_neuron_controlnet")
```

## Text-to-Image

For text-to-image, we can specify an additional conditioning input.

Here is an example with a canny image, a white outline of an image on a black background. The ControlNet will use the canny image as a control to guide the model to generate an image with the same outline.

```python
import cv2
import numpy as np
from diffusers import UniPCMultistepScheduler
from diffusers.utils import load_image, make_image_grid
from PIL import Image

from optimum.neuron import NeuronStableDiffusionControlNetPipeline


# prepare canny image
original_image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)

image = np.array(original_image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

# load pre-compiled neuron model
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained("sd_neuron_controlnet")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# inference
output = pipe("the mona lisa", image=canny_image).images[0]
compare = make_image_grid([original_image, canny_image, output], rows=1, cols=3)
compare.save("compare.png")
```

<img
  src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/10-sd-text2img-controlnet.png?download=true"
  width="768"
  height="256"
  alt="stable diffusion 1.5 generated image with controlnet."
/>


## MultiControlNet

With Optimum Neuron, you can also compose multiple ControlNet conditionings from different image inputs:

* Compile multiple ControlNets for SD1.5 (a Python alternative is sketched after the CLI command below)

```bash
optimum-cli export neuron --inline-weights-neff --model jyoung105/stable-diffusion-v1-5 --task stable-diffusion --auto_cast matmul --auto_cast_type bf16 --batch_size 1 --num_images_per_prompt 1 --controlnet_ids lllyasviel/control_v11p_sd15_openpose lllyasviel/control_v11f1p_sd15_depth --height 512 --width 512 sd15-512x512-bf16-openpose-depth
```
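
The same compilation can be done programmatically. The sketch below assumes that `controlnet_ids` also accepts a list of ControlNet repositories, mirroring the CLI command above:

```python
from optimum.neuron import NeuronStableDiffusionControlNetPipeline

model_id = "jyoung105/stable-diffusion-v1-5"
controlnet_ids = ["lllyasviel/control_v11p_sd15_openpose", "lllyasviel/control_v11f1p_sd15_depth"]
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "num_images_per_prompt": 1}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

# Compile SD1.5 together with both ControlNets and save the artifacts
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained(
    model_id,
    controlnet_ids=controlnet_ids,
    export=True,
    inline_weights_to_neff=True,  # mirrors `--inline-weights-neff` in the CLI command above
    **input_shapes,
    **compiler_args,
)
pipe.save_pretrained("sd15-512x512-bf16-openpose-depth")
```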

* Run SD1.5 with OpenPose and Depth conditionings:

```python
import numpy as np
import torch
from PIL import Image

from controlnet_aux import OpenposeDetector
from transformers import pipeline
from diffusers import UniPCMultistepScheduler
from diffusers.utils import load_image
from optimum.neuron import NeuronStableDiffusionControlNetPipeline


# OpenPose+Depth ControlNet
model_id = "sd15-512x512-bf16-openpose-depth"

# Load ControlNet images

# 1. openpose
image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/input.png")
processor = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
openpose_image = processor(image)

# 2. depth
image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_depth/resolve/main/images/input.png")
depth_estimator = pipeline('depth-estimation')
image = depth_estimator(image)['depth']
image = np.array(image)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
depth_image = Image.fromarray(image)

images = [openpose_image.resize((512, 512)), depth_image.resize((512, 512))]

# 3. inference
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained(model_id)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
prompt = "a giant in a fantasy landscape, best quality"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"

image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=images).images[0]
image.save('out.png')
```

<img
  src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/multicontrolnet.png"
  width="768"
  height="256"
  alt="stable diffusion 1.5 generated image with OpenPose and Depth controlnet."
/>


## ControlNet with Stable Diffusion XL

### Export to Neuron

```bash
optimum-cli export neuron -m stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl --batch_size 1 --height 1024 --width 1024 --controlnet_ids diffusers/controlnet-canny-sdxl-1.0-small --num_images_per_prompt 1 sdxl_neuron_controlnet/
```

### Text-to-Image

```python
import cv2
import numpy as np
from diffusers.utils import load_image
from PIL import Image
from optimum.neuron import NeuronStableDiffusionXLControlNetPipeline

# Inputs
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"

image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

controlnet_conditioning_scale = 0.5  # recommended for good generalization

pipe = NeuronStableDiffusionXLControlNetPipeline.from_pretrained("sdxl_neuron_controlnet")

images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=image,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
).images
images[0].save("hug_lab.png")
```

<img
  src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/12-sdxl-text2img-controlnet.png?download=true"
  width="768"
  height="256"
  alt="stable diffusion xl generated image with controlnet."
/>

## NeuronStableDiffusionControlNetPipeline[[optimum.neuron.NeuronStableDiffusionControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionControlNetPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionControlNetPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1572</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/pipelines/diffusers/pipeline_controlnet.py#L33</source><parameters>[{"name": "prompt", "val": ": str | list[str] | None = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": list[int] | None = None"}, {"name": "sigmas", "val": ": list[float] | None = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": str | list[str] | None = None"}, {"name": "num_images_per_prompt", "val": ": int | None = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": torch._C.Generator | list[torch._C.Generator] | None = None"}, {"name": "latents", "val": ": torch.Tensor | None = None"}, {"name": "prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "negative_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": list[torch.Tensor] | None = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "controlnet_conditioning_scale", "val": ": float | list[float] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": float | list[float] = 0.0"}, {"name": "control_guidance_end", "val": ": float | list[float] = 1.0"}, {"name": "clip_skip", "val": ": int | None = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": list[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`PipelineImageInput | None`, defaults to `None`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet. When `prompt` is a list, and if a list of images is passed for a single
  ControlNet, each will be paired with each prompt in the `prompt` list. This also applies to multiple
  ControlNets, where a list of image lists can be passed to batch for each prompt and each ControlNet.
- **num_inference_steps** (`int`, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`list[int] | None`, defaults to `None`) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`list[float] | None`, defaults to `None`) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, defaults to 1) --
  The number of images to generate per prompt. If it is different from the batch size used for the compilation,
  it will be overridden by the static batch size of Neuron (except for dynamic batching).
- **eta** (`float`, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
  to the `diffusers.schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator | list[torch.Generator] | None`, defaults to `None`) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput | None`, defaults to `None`) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`list[torch.Tensor] | None`, defaults to `None`) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict[str, Any] | None`, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float | list[float]`, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float | list[float]`, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float | list[float]`, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int | None`, defaults to `None`) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable[[int, int, dict], None] | PipelineCallback | MultiPipelineCallbacks | None`, defaults to `None`) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`list[str]`, defaults to `["latents"]`) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.








</div></div>

## NeuronStableDiffusionXLControlNetPipeline[[optimum.neuron.NeuronStableDiffusionXLControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLControlNetPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLControlNetPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1624</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/pipelines/diffusers/pipeline_controlnet_sd_xl.py#L37</source><parameters>[{"name": "prompt", "val": ": str | list[str] | None = None"}, {"name": "prompt_2", "val": ": str | list[str] | None = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": list[int] | None = None"}, {"name": "sigmas", "val": ": list[float] | None = None"}, {"name": "denoising_end", "val": ": float | None = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": str | list[str] | None = None"}, {"name": "negative_prompt_2", "val": ": str | list[str] | None = None"}, {"name": "num_images_per_prompt", "val": ": int | None = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": torch._C.Generator | list[torch._C.Generator] | None = None"}, {"name": "latents", "val": ": torch.Tensor | None = None"}, {"name": "prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "negative_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "pooled_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": torch.Tensor | None = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": list[torch.Tensor] | None = None"}, {"name": "output_type", "val": ": str | None = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": dict[str, typing.Any] | None = None"}, {"name": "controlnet_conditioning_scale", "val": ": float | list[float] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": float | list[float] = 0.0"}, {"name": "control_guidance_end", "val": ": float | list[float] = 1.0"}, {"name": "original_size", "val": ": tuple[int, int] | None = None"}, {"name": "crops_coords_top_left", "val": ": tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": tuple[int, int] | None = None"}, {"name": "negative_original_size", "val": ": tuple[int, int] | None = None"}, {"name": "negative_crops_coords_top_left", "val": ": tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": tuple[int, int] | None = None"}, {"name": "clip_skip", "val": ": int | None = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": list[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str | list[str]`, defaults to `None`) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str | list[str]`, defaults to `None`) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`PipelineImageInput | None`, defaults to `None`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **num_inference_steps** (`int`, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`list[int] | None`, defaults to `None`) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used. Must be in descending order.
- **sigmas** (`list[float] | None`, defaults to `None`) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float | None`, defaults to `None`) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str | list[str] | None`, defaults to `None`) --
  The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
  and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
  to the `diffusers.schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator | list[torch.Generator] | None`, defaults to `None`) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, pooled text embeddings are generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor | None`, defaults to `None`) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
  weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput | None`, defaults to `None`) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`list[torch.Tensor] | None`, defaults to `None`) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length the same as the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str | None`, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict[str, Any] | None`, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float | list[float]`, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float | list[float]`, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float | list[float]`, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **original_size** (`tuple[int, int] | None`, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size`, the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`tuple[int, int]`, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`tuple[int, int] | None`, defaults to `None`) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified, it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`tuple[int, int] | None`, defaults to `None`) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`tuple[int, int]`, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`tuple[int, int] | None`, defaults to `None`) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int | None`, defaults to `None`) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable[[int, int, dict], None] | PipelineCallback | MultiPipelineCallbacks | None`, defaults to `None`) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during the inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`list[str]`, defaults to `["latents"]`) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned containing the output images.</retdesc></docstring>

The call function to the pipeline for generation.








</div></div>

Are there any other diffusion features that you want us to support in 🤗 `Optimum-neuron`? Please file an issue in the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [Hugging Face's community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗!

### Stable Diffusion XL
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/diffusers/stable_diffusion_xl.md

## Stable Diffusion XL

*There is a notebook version of that tutorial [here](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/stable-diffusion/stable-diffusion-xl-txt2img.ipynb)*.

## Overview

Stable Diffusion XL (SDXL) is a latent diffusion model for text-to-image generation. Compared to previous versions of Stable Diffusion, it improves the quality of generated images with a three times larger UNet.

🤗 `Optimum` extends `Diffusers` to support inference on the second generation of Neuron devices (powering Trn1 and Inf2 instances). It aims to preserve the ease of use of Diffusers on Neuron.

## Export to Neuron

To deploy SDXL models, we will start by compiling them. We support the export of the following pipeline components to boost inference speed:

* Text encoder
* Second text encoder
* U-Net (a three times larger UNet than the one in the Stable Diffusion pipeline)
* VAE encoder
* VAE decoder

You can compile and export a Stable Diffusion XL checkpoint either via the CLI or via the `NeuronStableDiffusionXLPipeline` class.

### Option 1: CLI

Here is an example of exporting SDXL components with the `Optimum` CLI:

```bash
optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \
  --batch_size 1 \
  --height 1024 `# height in pixels of generated image, eg. 768, 1024` \
  --width 1024 `# width in pixels of generated image, eg. 768, 1024` \
  --num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron_xl/
```

> [!TIP]
> We recommend using an `inf2.8xlarge` or a larger instance for model compilation. You can also compile the model with the Optimum CLI on a CPU-only instance (around 35 GB of memory is needed) and then run the pre-compiled model on an `inf2.xlarge` to reduce costs. In this case, don't forget to disable inference validation by adding the `--disable-validation` argument.

### Option 2: Python API

Here is an example of exporting SDXL components with the `NeuronStableDiffusionXLPipeline` class:

```python
>>> from optimum.neuron import NeuronStableDiffusionXLPipeline

>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0"
>>> compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
>>> input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}

>>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained(model_id, export=True, **compiler_args, **input_shapes)

# Save locally or upload to the HuggingFace Hub
>>> save_directory = "sd_neuron_xl/"
>>> stable_diffusion_xl.save_pretrained(save_directory)
>>> stable_diffusion_xl.push_to_hub(
...     save_directory, repository_id="my-neuron-repo"
... )
```
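Once exported, the pre-compiled pipeline can be reloaded directly from the Hub repository instead of being recompiled. A minimal sketch, assuming the `my-neuron-repo` repository id used above:

```python
>>> from optimum.neuron import NeuronStableDiffusionXLPipeline

>>> # Load the pre-compiled Neuron artifacts from the Hub (no re-compilation needed)
>>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained("my-neuron-repo")
```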

## Text-to-Image

With pre-compiled SDXL models, you can now generate an image from a text prompt on Neuron:

```python
>>> from optimum.neuron import NeuronStableDiffusionXLPipeline

>>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
>>> prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
>>> image = stable_diffusion_xl(prompt).images[0]
```

<img
  src="https://raw.githubusercontent.com/huggingface/optimum-neuron/main/docs/assets/guides/models/02-sdxl-image.jpeg"
  width="256"
  height="256"
  alt="sdxl generated image"
/>

## Image-to-Image

With `NeuronStableDiffusionXLImg2ImgPipeline`, you can pass an initial image and a text prompt to condition the generated images:

```python
from optimum.neuron import NeuronStableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

prompt = "a dog running, lake, moat"
url = "https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/sd_xl/castle_friedrich.png"
init_image = load_image(url).convert("RGB")

pipe = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained("sd_neuron_xl/")
image = pipe(prompt=prompt, image=init_image).images[0]
```

`image`          | `prompt` | output |
:-------------------------:|:-------------------------:|-------------------------:|
<img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/sd_xl/castle_friedrich.png" alt="castle photo" width="256" height="256"/> | ***a dog running, lake, moat*** | <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/06-sdxl-img2img.png" alt="castle with dog" width="250"/> |

## Inpaint

With `NeuronStableDiffusionXLInpaintPipeline`, pass the original image and a mask of what you want to replace in it. The masked area is then replaced with content described in a prompt.

```python
from optimum.neuron import NeuronStableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
mask_url = (
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png"
)

init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")
prompt = "A deep sea diver floating"

pipe = NeuronStableDiffusionXLInpaintPipeline.from_pretrained("sd_neuron_xl/")
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0]
```

`image`          | `mask_image` | `prompt` | output |
:-------------------------:|:-------------------------:|:-------------------------:|-------------------------:|
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" alt="drawing" width="250"/> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" alt="drawing" width="250"/> | ***A deep sea diver floating*** | <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/07-sdxl-inpaint.png" alt="drawing" width="250"/> |

## Refine Image Quality

SDXL includes a [refiner model](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0) to denoise the low-noise stage images generated by the base model. There are two ways to use the refiner:

1. Use the base and refiner models together to produce a refined image.
2. Use the base model to produce an image, and then use the refiner model to add more details to it.
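Both options assume the refiner has already been compiled to Neuron, here under `sd_neuron_xl_refiner/`. A minimal compilation sketch with the Python API; the compiler arguments and input shapes are illustrative and should mirror those used for the base model:

```python
from optimum.neuron import NeuronStableDiffusionXLImg2ImgPipeline

compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}

# Compile the refiner checkpoint and save the Neuron artifacts locally
refiner = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", export=True, **compiler_args, **input_shapes
)
refiner.save_pretrained("sd_neuron_xl_refiner/")
```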

### Base + Refiner Model

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline, NeuronStableDiffusionXLImg2ImgPipeline

prompt = "A majestic lion jumping from a big stone at night"
base = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
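# Run the first 80% of the denoising steps on the base model and return latents for the refiner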
image = base(
    prompt=prompt,
    num_inference_steps=40,
    denoising_end=0.8,
    output_type="latent",
).images[0]
del base  # To avoid neuron device OOM

refiner = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained("sd_neuron_xl_refiner/")
image = refiner(
    prompt=prompt,
    num_inference_steps=40,
    denoising_start=0.8,
    image=image,
).images[0]
```

<img
  src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/08-sdxl-base-refine.png"
  width="256"
  height="256"
  alt="sdxl base + refiner"
/>

### Base to Refiner Model

```python
from optimum.neuron import NeuronStableDiffusionXLPipeline, NeuronStableDiffusionXLImg2ImgPipeline

prompt = "A majestic lion jumping from a big stone at night"
base = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
image = base(prompt=prompt, output_type="latent").images[0]
del base  # To avoid neuron device OOM

refiner = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained("sd_neuron_xl_refiner/")
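# `image[None, :]` adds a batch dimension to the latent before handing it to the refiner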
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```

`Base Image`         | Refined Image |
:-------------------------:|-------------------------:|
<img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/09-sdxl-base-full.png" alt="drawing" width="250"/> | <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/neuron/models/010-sdxl-refiner-detailed.png" alt="drawing" width="250"/> |

<Tip>

To avoid running out of memory on the Neuron device, it is suggested to finish all base inference and release the device memory before running the refiner.

</Tip>


## NeuronStableDiffusionXLPipeline[[optimum.neuron.NeuronStableDiffusionXLPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1597</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

## NeuronStableDiffusionXLImg2ImgPipeline[[optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1610</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

## NeuronStableDiffusionXLInpaintPipeline[[optimum.neuron.NeuronStableDiffusionXLInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronStableDiffusionXLInpaintPipeline</name><anchor>optimum.neuron.NeuronStableDiffusionXLInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1617</source><parameters>[{"name": "config", "val": ": dict[str, typing.Any]"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig']"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig']"}, {"name": "data_parallel_mode", "val": ": typing.Literal['none', 'unet', 'transformer', 'all']"}, {"name": "scheduler", "val": ": diffusers.schedulers.scheduling_utils.SchedulerMixin | None"}, {"name": "vae_decoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeDecoder"}, {"name": "text_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "text_encoder_2", "val": ": torch.jit._script.ScriptModule | NeuronModelTextEncoder | None = None"}, {"name": "unet", "val": ": torch.jit._script.ScriptModule | NeuronModelUnet | None = None"}, {"name": "transformer", "val": ": torch.jit._script.ScriptModule | NeuronModelTransformer | None = None"}, {"name": "vae_encoder", "val": ": torch.jit._script.ScriptModule | NeuronModelVaeEncoder | None = None"}, {"name": "image_encoder", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "safety_checker", "val": ": torch.jit._script.ScriptModule | None = None"}, {"name": "tokenizer", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | transformers.models.t5.tokenization_t5.T5Tokenizer | None = None"}, {"name": "tokenizer_2", "val": ": transformers.models.clip.tokenization_clip.CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor | None = None"}, {"name": "controlnet", "val": ": torch.jit._script.ScriptModule | list[torch.jit._script.ScriptModule]| NeuronControlNetModel | NeuronMultiControlNetModel | None = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_and_config_save_paths", "val": ": dict[str, tuple[str, pathlib.Path]] | None = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.neuron.NeuronStableDiffusionXLInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling_diffusion.py#L1106</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div></div>

Are there any other diffusion features that you want us to support in 🤗 `Optimum-neuron`? Please file an issue in the [`Optimum-neuron` GitHub repo](https://github.com/huggingface/optimum-neuron) or discuss with us on [Hugging Face's community forum](https://discuss.huggingface.co/c/optimum/), cheers 🤗!

### YOLOS
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/yolos.md

# YOLOS

## Overview

The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN.

## Export to Neuron

To deploy 🤗 [Transformers](https://huggingface.co/docs/transformers/index) models on Neuron devices, you first need to compile the models and export them to a serialized format for inference. Below are two approaches to compile the model; you can choose the one that best suits your needs. Here we take `object-detection` as an example:

### Option 1: CLI

You can export the model using the Optimum command-line interface as follows:

```bash
optimum-cli export neuron --model hustvl/yolos-tiny --task object-detection --batch_size 1 yolos_object_detection_neuronx/
```

> [!TIP]
> Execute `optimum-cli export neuron --help` to display all command line options and their description.

### Option 2: Python API

```python
from optimum.neuron import NeuronModelForObjectDetection
from transformers import AutoImageProcessor


preprocessor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
neuron_model = NeuronModelForObjectDetection.from_pretrained("hustvl/yolos-tiny", export=True, batch_size=1)

neuron_model.save_pretrained("yolos_object_detection_neuronx")
neuron_model.push_to_hub(
    "yolos_object_detection_neuronx", repository_id="optimum/yolos-tiny-neuronx-bs1"  # Replace with your HF Hub repo id
)
```

## NeuronYolosForObjectDetection[[optimum.neuron.NeuronYolosForObjectDetection]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronYolosForObjectDetection</name><anchor>optimum.neuron.NeuronYolosForObjectDetection</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/yolos/modeling_yolos.py#L43</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with object detection heads on top, for tasks such as COCO detection.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronYolosForObjectDetection.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/yolos/modeling_yolos.py#L53</source><parameters>[{"name": "pixel_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.Tensor | None` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoImageProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoImageProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronYolosForObjectDetection` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronYolosForObjectDetection.forward.example">

Example:

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from optimum.neuron import NeuronYolosForObjectDetection
>>> from transformers import AutoImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoImageProcessor.from_pretrained("optimum/yolos-tiny-neuronx-bs1")
>>> model = NeuronYolosForObjectDetection.from_pretrained("optimum/yolos-tiny-neuronx-bs1")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = preprocessor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
```

</ExampleCodeBlock>
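The `results` dictionary returned by `post_process_object_detection` contains `scores`, `labels` and `boxes`. A minimal sketch for reading them, assuming the compiled model's config carries the COCO `id2label` mapping:

```python
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(coord, 2) for coord in box.tolist()]
...     print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```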


</div></div>

### BERT
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/bert.md

# BERT

## Overview


[BERT](https://huggingface.co/papers/1810.04805) is a bidirectional transformer pretrained on unlabeled text to predict masked tokens in a sentence and to predict whether one sentence follows another. The main idea is that by randomly masking some tokens, the model can train on text to the left and right, giving it a more thorough understanding. BERT is also very versatile because its learned language representations can be adapted for other NLP tasks by fine-tuning an additional layer or head.

You can find all the original BERT checkpoints under the [BERT](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc) collection.

## Export to Neuron

To deploy 🤗 [Transformers](https://huggingface.co/docs/transformers/index) models on Neuron devices, you first need to compile the models and export them to a serialized format for inference. Below are two approaches to compile the model; you can choose the one that best suits your needs. Here we take `feature-extraction` as an example:

### Option 1: CLI

You can export the model using the Optimum command-line interface as follows:

```bash
optimum-cli export neuron --model google-bert/bert-base-uncased --task feature-extraction --batch_size 1 --sequence_length 128 bert_feature_extraction_neuronx/
```

> [!TIP]
> Execute `optimum-cli export neuron --help` to display all command line options and their description.

### Option 2: Python API

```python
from optimum.neuron import NeuronModelForFeatureExtraction

input_shapes = {"batch_size": 1, "sequence_length": 128}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
neuron_model = NeuronModelForFeatureExtraction.from_pretrained(
    "google-bert/bert-base-uncased",
    export=True,
    **input_shapes,
    **compiler_args,
)
# Save locally
neuron_model.save_pretrained("bert_feature_extraction_neuronx")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "bert_feature_extraction_neuronx", repository_id="my-neuron-repo"  # Replace with your HF Hub repo id
)
```
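After compilation, the model can be reloaded and run on Neuron with the usual tokenizer workflow. A minimal inference sketch, assuming the local directory saved above and that the output exposes the standard `last_hidden_state` field; inputs are padded to the static sequence length used at compilation:

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForFeatureExtraction

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
neuron_model = NeuronModelForFeatureExtraction.from_pretrained("bert_feature_extraction_neuronx")

# Pad to the static sequence length (128) used at compilation time
inputs = tokenizer(
    "Hello, my dog is cute",
    padding="max_length",
    max_length=128,
    return_tensors="pt",
)
outputs = neuron_model(**inputs)
embeddings = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
```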

## NeuronBertModel[[optimum.neuron.NeuronBertModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronBertModel</name><anchor>optimum.neuron.NeuronBertModel</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L62</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Bare Bert Model transformer outputting raw hidden-states without any specific head on top, used for the task "feature-extraction".

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronBertModel.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L65</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronBertModel` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronBertModel.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronBertModel

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased-neuronx-bs1-sq128")
>>> model = NeuronBertModel.from_pretrained("optimum/bert-base-uncased-neuronx-bs1-sq128")

>>> inputs = tokenizer("Dear Evan Hansen is the winner of six Tony Awards.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
[1, 13, 384]
```

</ExampleCodeBlock>


</div></div>

## NeuronBertForMaskedLM[[optimum.neuron.NeuronBertForMaskedLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronBertForMaskedLM</name><anchor>optimum.neuron.NeuronBertForMaskedLM</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L110</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Bert Model with a `language modeling` head on top, for masked language modeling tasks on Neuron devices.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronBertForMaskedLM.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L113</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronBertForMaskedLM` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronBertForMaskedLM.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronBertForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/legal-bert-base-uncased-neuronx")
>>> model = NeuronBertForMaskedLM.from_pretrained("optimum/legal-bert-base-uncased-neuronx")

>>> inputs = tokenizer("This [MASK] Agreement is between General Motors and John Murray.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 13, 30522]
```

</ExampleCodeBlock>


</div></div>

## NeuronBertForSequenceClassification[[optimum.neuron.NeuronBertForSequenceClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronBertForSequenceClassification</name><anchor>optimum.neuron.NeuronBertForSequenceClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L197</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronBertForSequenceClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L200</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronBertForSequenceClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronBertForSequenceClassification.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronBertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-multilingual-uncased-sentiment-neuronx")
>>> model = NeuronBertForSequenceClassification.from_pretrained("optimum/bert-base-multilingual-uncased-sentiment-neuronx")

>>> inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 2]
```

</ExampleCodeBlock>


</div></div>

## NeuronBertForTokenClassification[[optimum.neuron.NeuronBertForTokenClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronBertForTokenClassification</name><anchor>optimum.neuron.NeuronBertForTokenClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L240</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronBertForTokenClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L243</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronBertForTokenClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronBertForTokenClassification.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronBertForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER-neuronx")
>>> model = NeuronBertForTokenClassification.from_pretrained("optimum/bert-base-NER-neuronx")

>>> inputs = tokenizer("Lin-Manuel Miranda is an American songwriter, actor, singer, filmmaker, and playwright.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 20, 9]
```

</ExampleCodeBlock>


</div></div>

## NeuronBertForQuestionAnswering[[optimum.neuron.NeuronBertForQuestionAnswering]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronBertForQuestionAnswering</name><anchor>optimum.neuron.NeuronBertForQuestionAnswering</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L153</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute `span start logits` and `span end logits`).

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronBertForQuestionAnswering.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L156</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronBertForQuestionAnswering` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronBertForQuestionAnswering.forward.example">

Example:

```python
>>> import torch
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronBertForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-cased-squad2-neuronx")
>>> model = NeuronBertForQuestionAnswering.from_pretrained("optimum/bert-base-cased-squad2-neuronx")

>>> question, text = "Are there wheelchair spaces in the theatres?", "Yes, we have reserved wheelchair spaces with a good view."
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([12])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
```

</ExampleCodeBlock>


</div></div>

## NeuronBertForMultipleChoice[[optimum.neuron.NeuronBertForMultipleChoice]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronBertForMultipleChoice</name><anchor>optimum.neuron.NeuronBertForMultipleChoice</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L284</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronBertForMultipleChoice.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/bert/modeling_bert.py#L287</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, num_choices, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, num_choices, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, num_choices, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronBertForMultipleChoice` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronBertForMultipleChoice.forward.example">

Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronBertForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-cased-swag-neuronx")
>>> model = NeuronBertForMultipleChoice.from_pretrained("optimum/bert-base-cased-swag-neuronx")

>>> num_choices = 4
>>> first_sentence = ["Members of the procession walk down the street holding small horn brass instruments."] * num_choices
>>> second_sentence = [
...     "A drum line passes by walking down the street playing their instruments.",
...     "A drum line has heard approaching them.",
...     "A drum line arrives and they're outside dancing and asleep.",
...     "A drum line turns the lead singer watches the performance."
... ]
>>> inputs = tokenizer(first_sentence, second_sentence, truncation=True, padding=True)

# Unflatten the inputs, expanding them to the shape [batch_size, num_choices, seq_length]
>>> for k, v in inputs.items():
...     inputs[k] = [v[i: i + num_choices] for i in range(0, len(v), num_choices)]
>>> inputs = dict(inputs.convert_to_tensors(tensor_type="pt"))
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 4]
```

</ExampleCodeBlock>


</div></div>

### Whisper
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/whisper.md

# Whisper

## Overview

[Whisper](https://hf.co/papers/2212.04356) is an encoder-decoder (sequence-to-sequence) transformer pretrained on 680,000 hours of labeled audio data. This amount of pretraining data enables zero-shot performance on audio tasks in English and many other languages. The decoder allows Whisper to map the encoder's learned speech representations to useful outputs, such as text, without additional fine-tuning. Whisper just works out of the box.

You can find all the original Whisper checkpoints under the [Whisper](https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013) collection.


## Export to Neuron

To deploy 🤗 [Transformers](https://huggingface.co/docs/transformers/index) models on Neuron devices, you first need to compile the models and export them to a serialized format for inference. Below are two approaches to compiling the model; choose the one that best suits your needs:

### Option 1: CLI

You can export the model using the Optimum command-line interface as follows:

```bash
optimum-cli export neuron --model openai/whisper-tiny --task automatic-speech-recognition --batch_size 1 --sequence_length 128 --auto_cast all --auto_cast_type bf16 whisper_tiny_neuronx/
```

> [!TIP]
> Execute `optimum-cli export neuron --help` to display all command line options and their description.

### Option 2: Python API

```python
from optimum.neuron import NeuronWhisperForConditionalGeneration

compiler_args = {"auto_cast": "all", "auto_cast_type": "bf16"}
input_shapes = {"batch_size": 1, "sequence_length": 128}
neuron_model = NeuronWhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-tiny",
    export=True,
    inline_weights_to_neff=False,
    **compiler_args,
    **input_shapes,
)
# Save locally
neuron_model.save_pretrained("whisper_tiny_neuronx")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "whisper_tiny_neuronx", repository_id="my-neuron-repo"  # Replace with your repo id, eg. "Jingya/whisper_tiny_neuronx"
)
```

## Usage Example

To use the model we just exported, there are two options: we can either use the [NeuronWhisperForConditionalGeneration](/docs/optimum.neuron/v0.4.0/en/model_doc/transformers/whisper#optimum.neuron.NeuronWhisperForConditionalGeneration) class or the `pipeline` API. The examples below demonstrate how to automatically transcribe speech into text with these two approaches.

### With `NeuronWhisperForConditionalGeneration`

```python
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.neuron import NeuronWhisperForConditionalGeneration

# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]

# Use the model and processor to transcribe the audio:
processor = AutoProcessor.from_pretrained("Jingya/whisper_tiny_neuronx")
input_features = processor(
    audio_sample["array"], sampling_rate=audio_sample["sampling_rate"], return_tensors="pt"
).input_features

# Inference
neuron_model = NeuronWhisperForConditionalGeneration.from_pretrained("Jingya/whisper_tiny_neuronx")
predicted_ids = neuron_model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
#  Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.
```

### With `pipeline`

```python
from transformers import AutoProcessor
from optimum.neuron import NeuronWhisperForConditionalGeneration, pipeline

processor = AutoProcessor.from_pretrained("Jingya/whisper_tiny_neuronx")
neuron_model = NeuronWhisperForConditionalGeneration.from_pretrained("Jingya/whisper_tiny_neuronx")

asr_pipeline = pipeline(
    task="automatic-speech-recognition",
    model=neuron_model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
#  I have a dream. Good one day. This nation will rise up. Live out the true meaning of its dream.
```

## NeuronWhisperForConditionalGeneration[[optimum.neuron.NeuronWhisperForConditionalGeneration]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronWhisperForConditionalGeneration</name><anchor>optimum.neuron.NeuronWhisperForConditionalGeneration</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/whisper/modeling_whisper.py#L132</source><parameters>[{"name": "encoder", "val": ": ScriptModule"}, {"name": "decoder", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "encoder_file_name", "val": ": str | None = 'model.neuron'"}, {"name": "decoder_file_name", "val": ": str | None = 'model.neuron'"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_configs", "val": ": dict[str, 'NeuronDefaultConfig'] | None = None"}, {"name": "configs", "val": ": dict[str, 'PretrainedConfig'] | None = None"}, {"name": "generation_config", "val": ": transformers.generation.configuration_utils.GenerationConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **encoder** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module of the encoder with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.
- **decoder** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module of the decoder with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.
- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.</paramsdesc><paramgroups>0</paramgroups></docstring>

Whisper Neuron model with a language modeling head that can be used for automatic speech recognition.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronWhisperForConditionalGeneration.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/whisper/modeling_whisper.py#L190</source><parameters>[{"name": "input_features", "val": ": torch.FloatTensor | None = None"}, {"name": "decoder_input_ids", "val": ": torch.LongTensor | None = None"}, {"name": "encoder_outputs", "val": ": tuple[torch.FloatTensor] | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_features** (`torch.FloatTensor | None` of shape `(batch_size, feature_size, sequence_length)`) --
  Float values mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
  loading a `.flac` or `.wav` audio file into an array of type `list[float]` or a `numpy.ndarray`, *e.g.* via
  the soundfile library (`pip install soundfile`). To prepare the array into `input_features`, the
  `AutoFeatureExtractor` should be used for extracting the mel features, padding and conversion into a
  tensor of type `torch.FloatTensor`. See `~WhisperFeatureExtractor.__call__`
- **decoder_input_ids** (`torch.LongTensor | None` of shape `(batch_size, max_sequence_length)`) --
  Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using `WhisperTokenizer`.
  See `PreTrainedTokenizer.encode` and `PreTrainedTokenizer.__call__` for details. Since the cache is not yet
  supported for Whisper, it needs to be padded to the `sequence_length` used for the compilation.
- **encoder_outputs** (`tuple[torch.FloatTensor | None]`) --
  Tuple consisting of `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, a sequence of
  hidden-states at the output of the last layer of the encoder, used in the cross-attention of the decoder.</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronWhisperForConditionalGeneration` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.
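
Since the KV cache is not yet supported, `decoder_input_ids` must be padded to the `sequence_length` used at compilation before calling `forward` directly. The snippet below is a minimal sketch under that assumption: it reuses the compiled checkpoint from the export example above and assumes the output exposes `logits` like the upstream `transformers` model. For end-to-end transcription, prefer `generate` as shown earlier.

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.neuron import NeuronWhisperForConditionalGeneration

processor = AutoProcessor.from_pretrained("Jingya/whisper_tiny_neuronx")
model = NeuronWhisperForConditionalGeneration.from_pretrained("Jingya/whisper_tiny_neuronx")

# Prepare mel features from a sample audio file
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# The decoder was compiled with a static sequence_length of 128 (see the export example),
# so decoder_input_ids are right-padded to that length with the pad token.
decoder_input_ids = torch.full((1, 128), model.config.pad_token_id, dtype=torch.long)
decoder_input_ids[0, 0] = model.config.decoder_start_token_id

outputs = model(input_features=input_features, decoder_input_ids=decoder_input_ids)
next_token_logits = outputs.logits[:, 0, :]  # assumes a Seq2SeqLMOutput-style return value
```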



</div></div>

### CLIP
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/transformers/clip.md

# CLIP

## Overview

The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,
Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP
(Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be
instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing
for the task, similarly to the zero-shot capabilities of GPT-2 and 3.

## Export to Neuron

To deploy 🤗 [Transformers](https://huggingface.co/docs/transformers/index) models on Neuron devices, you first need to compile the models and export them to a serialized format for inference. Below are two approaches to compiling the model; choose the one that best suits your needs. Here we take the `feature-extraction` task as an example:

### Option 1: CLI

You can export the model using the Optimum command-line interface as follows:

```bash
optimum-cli export neuron --model openai/clip-vit-base-patch32 --task feature-extraction --text_batch_size 2 --sequence_length 77 --image_batch_size 1 --num_channels 3 --width 224 --height 224 clip_feature_extraction_neuronx/
```

> [!TIP]
> Execute `optimum-cli export neuron --help` to display all command line options and their description.

### Option 2: Python API

```python
from optimum.neuron import NeuronCLIPModel

input_shapes = {"text_batch_size": 2, "sequence_length": 77, "image_batch_size": 1, "num_channels": 3, "width": 224, "height": 224}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
neuron_model = NeuronCLIPModel.from_pretrained(
    "openai/clip-vit-base-patch32",
    export=True,
    **input_shapes,
    **compiler_args,
)
# Save locally
neuron_model.save_pretrained("clip_feature_extraction_neuronx/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "clip_feature_extraction_neuronx/", repository_id="optimum/clip-vit-base-patch32-neuronx"  # Replace with your HF Hub repo id
)
```
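
After export, the compiled model can be reloaded from the local directory. The sketch below assumes the export above; the processor is taken from the original checkpoint (the export may also save it next to the model):

```python
import requests
from PIL import Image
from transformers import AutoProcessor
from optimum.neuron import NeuronCLIPModel

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = NeuronCLIPModel.from_pretrained("clip_feature_extraction_neuronx/")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Two text prompts to match the compiled text_batch_size of 2
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
```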

## NeuronCLIPModel[[optimum.neuron.NeuronCLIPModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronCLIPModel</name><anchor>optimum.neuron.NeuronCLIPModel</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/clip/modeling_clip.py#L50</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Bare CLIP Model without any specific head on top, used for the task "feature-extraction".

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronCLIPModel.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/clip/modeling_clip.py#L53</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "pixel_values", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **pixel_values** (`torch.Tensor | None` of shape `(batch_size, num_channels, height, width)`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoImageProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoImageProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronCLIPModel` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronCLIPModel.forward.example">

Example:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor
>>> from optimum.neuron import NeuronCLIPModel

>>> processor = AutoProcessor.from_pretrained("optimum/clip-vit-base-patch32-neuronx")
>>> model = NeuronCLIPModel.from_pretrained("optimum/clip-vit-base-patch32-neuronx")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)
```

</ExampleCodeBlock>


</div></div>

## NeuronCLIPForImageClassification[[optimum.neuron.NeuronCLIPForImageClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronCLIPForImageClassification</name><anchor>optimum.neuron.NeuronCLIPForImageClassification</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/clip/modeling_clip.py#L102</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

CLIP vision encoder with an image classification head on top (a linear layer on top of the pooled final hidden states of the patch tokens) e.g. for ImageNet.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronCLIPForImageClassification.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/models/inference/clip/modeling_clip.py#L112</source><parameters>[{"name": "pixel_values", "val": ": Tensor"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.Tensor | None` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoImageProcessor`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoImageProcessor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronCLIPForImageClassification` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronCLIPForImageClassification.forward.example">

Example:

```python
>>> import requests
>>> from PIL import Image
>>> from optimum.neuron import NeuronCLIPForImageClassification
>>> from transformers import AutoImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoImageProcessor.from_pretrained("optimum/clip-vit-base-patch32-image-classification-neuronx")
>>> model = NeuronCLIPForImageClassification.from_pretrained("optimum/clip-vit-base-patch32-image-classification-neuronx")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_label = logits.argmax(-1).item()
```

</ExampleCodeBlock>


</div></div>

### Sentence Transformers 🤗
https://huggingface.co/docs/optimum.neuron/v0.4.0/model_doc/sentence_transformers/overview.md

# Sentence Transformers 🤗

[SentenceTransformers 🤗](https://sbert.net/) is a Python framework for state-of-the-art sentence, text and image embeddings. It can be used to compute embeddings using Sentence Transformer models or to calculate similarity scores using Cross-Encoder (a.k.a. reranker) models. This unlocks a wide range of applications, including semantic search, semantic textual similarity, and paraphrase mining. Optimum Neuron offers APIs to ease the use of SentenceTransformers on AWS Neuron devices.
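
As a quick illustration of semantic textual similarity, the sketch below compares two sentences with a pre-compiled embedding model. It assumes the `optimum/bge-base-en-v1.5-neuronx` checkpoint used later on this page and the `sentence-transformers` utility functions:

```python
from transformers import AutoTokenizer
from sentence_transformers import util
from optimum.neuron import NeuronModelForSentenceTransformers

model_id = "optimum/bge-base-en-v1.5-neuronx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = NeuronModelForSentenceTransformers.from_pretrained(model_id)

sentences = ["A man is eating food.", "A man is eating a piece of bread."]
embeddings = []
for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    outputs = model(**inputs)
    embeddings.append(outputs.sentence_embedding)

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```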

## Export to Neuron

### Option 1: CLI

* Example - Text embeddings

```bash
optimum-cli export neuron -m BAAI/bge-large-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb_neuron/
```

* Example - Image Search

```bash
optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_length 64 --text_batch_size 3 --image_batch_size 1 --num_channels 3 --height 224 --width 224 --task feature-extraction --subfolder 0_CLIPModel clip_emb_neuron/
```

### Option 2: Python API

* Example - Text embeddings

```python
from optimum.neuron import NeuronModelForSentenceTransformers

# configs for compiling model
input_shapes = {
    "batch_size": 1,
    "sequence_length": 384,
}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
    "BAAI/bge-large-en-v1.5",
    export=True,
    **input_shapes,
    **compiler_args,
)

# Save locally
neuron_model.save_pretrained("bge_emb_neuron/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "bge_emb_neuron/", repository_id="optimum/bge-base-en-v1.5-neuronx"  # Replace with your HF Hub repo id
)

```

* Example - Image Search

```python
from optimum.neuron import NeuronModelForSentenceTransformers

# configs for compiling model
input_shapes = {
    "num_channels": 3,
    "height": 224,
    "width": 224,
    "text_batch_size": 3,
    "image_batch_size": 1,
    "sequence_length": 64,
}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
    "sentence-transformers/clip-ViT-B-32",
    subfolder="0_CLIPModel",
    export=True,
    dynamic_batch_size=False,
    **input_shapes,
    **compiler_args,
)

# Save locally
neuron_model.save_pretrained("clip_emb_neuron/")

# Upload to the HuggingFace Hub
neuron_model.push_to_hub(
    "clip_emb_neuron/", repository_id="optimum/clip_vit_emb_neuronx"  # Replace with your HF Hub repo id
)
```

## NeuronModelForSentenceTransformers[[optimum.neuron.NeuronModelForSentenceTransformers]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.neuron.NeuronModelForSentenceTransformers</name><anchor>optimum.neuron.NeuronModelForSentenceTransformers</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L144</source><parameters>[{"name": "model", "val": ": ScriptModule"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "model_save_dir", "val": ": str | pathlib.Path | tempfile.TemporaryDirectory | None = None"}, {"name": "model_file_name", "val": ": str | None = None"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "neuron_config", "val": ": NeuronDefaultConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`transformers.PretrainedConfig`) -- [PretrainedConfig](https://huggingface.co/docs/transformers/main_classes/configuration#transformers.PretrainedConfig) is the Model configuration class with all the parameters of the model.
  Initializing with a config file does not load the weights associated with the model, only the
  configuration. Check out the `optimum.neuron.modeling.NeuronTracedModel.from_pretrained` method to load the model weights.
- **model** (`torch.jit._script.ScriptModule`) -- [torch.jit._script.ScriptModule](https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html) is the TorchScript module with embedded NEFF(Neuron Executable File Format) compiled by neuron(x) compiler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Neuron Model for Sentence Transformers.

This model inherits from `~neuron.modeling.NeuronTracedModel`. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving).



Sentence Transformers model on Neuron devices.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.neuron.NeuronModelForSentenceTransformers.forward</anchor><source>https://github.com/huggingface/optimum-neuron/blob/v0.4.0/optimum/neuron/modeling.py#L152</source><parameters>[{"name": "input_ids", "val": ": Tensor"}, {"name": "attention_mask", "val": ": Tensor"}, {"name": "pixel_values", "val": ": torch.Tensor | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`torch.Tensor | None` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `NeuronModelForSentenceTransformers` forward method overrides the `__call__` special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference will be ignored. To include extra inputs, recompile the model with those inputs specified.


<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSentenceTransformers.forward.example">

Text Example:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSentenceTransformers

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bge-base-en-v1.5-neuronx")
>>> model = NeuronModelForSentenceTransformers.from_pretrained("optimum/bge-base-en-v1.5-neuronx")

>>> inputs = tokenizer("In the smouldering promise of the fall of Troy, a mythical world of gods and mortals rises from the ashes.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> token_embeddings = outputs.token_embeddings
>>> sentence_embedding = outputs.sentence_embedding
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.neuron.NeuronModelForSentenceTransformers.forward.example-2">

Image Example:

```python
>>> from PIL import Image
>>> from transformers import AutoProcessor
>>> from sentence_transformers import util
>>> from optimum.neuron import NeuronModelForSentenceTransformers

>>> processor = AutoProcessor.from_pretrained("optimum/clip_vit_emb_neuronx")
>>> model = NeuronModelForSentenceTransformers.from_pretrained("optimum/clip_vit_emb_neuronx")
>>> util.http_get("https://github.com/UKPLab/sentence-transformers/raw/master/examples/sentence_transformer/applications/image-search/two_dogs_in_snow.jpg", "two_dogs_in_snow.jpg")
>>> inputs = processor(
>>>     text=["Two dogs in the snow", 'A cat on a table', 'A picture of London at night'], images=Image.open("two_dogs_in_snow.jpg"), return_tensors="pt", padding=True
>>> )

>>> outputs = model(**inputs)
>>> cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds)  # Compute cosine similarities
```

</ExampleCodeBlock>


</div></div>

### Create your own chatbot with llama-2-13B on AWS Inferentia
https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/llama2-13b-chatbot.md

# Create your own chatbot with llama-2-13B on AWS Inferentia

*There is a notebook version of that tutorial [here](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/text-generation/llama2-13b-chatbot.ipynb)*.

This guide will detail how to export, deploy and run a **Llama-2 13B** chat model on AWS Inferentia.

You will learn how to:
- export the Llama-2 model to the Neuron format,
- push the exported model to the Hugging Face Hub,
- deploy the model and use it in a chat application.

Note: This tutorial was created on an inf2.48xlarge AWS EC2 instance.

## 1. Export the Llama 2 model to Neuron

For this guide, we will use the non-gated [NousResearch/Llama-2-13b-chat-hf](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf) model, which is functionally equivalent to the original [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf).

This model is part of the **Llama 2** family of models, and has been tuned to recognize chat interactions
between a *user* and an *assistant* (more on that later).

As explained in the [optimum-neuron documentation](https://huggingface.co/docs/optimum-neuron/guides/export_model#exporting-llm-models-to-neuron), models need to be compiled and exported to a serialized format before running them on Neuron devices.

When exporting the model, we will specify two sets of parameters:

- using *compiler_args*, we specify how many cores we want the model to be deployed on (each Neuron device has two cores) and with which precision (here *float16*),
- using *input_shapes*, we set the static input and output dimensions of the model. All model compilers require static shapes, and Neuron is no exception. Note that *sequence_length* not only constrains the length of the input context, but also the length of the Key/Value cache, and thus the output length.

Depending on your choice of parameters and inferentia host, this may take from a few minutes to more than an hour.

For your convenience, we host a pre-compiled version of that model on the Hugging Face hub, so you can skip the export and start using the model immediately in section 2.


```python
from optimum.neuron import NeuronModelForCausalLM

compiler_args = {"tensor_parallel_size": 24, "auto_cast_type": 'fp16'}
input_shapes = {"batch_size": 1, "sequence_length": 2048}
model = NeuronModelForCausalLM.from_pretrained(
        "NousResearch/Llama-2-13b-chat-hf",
        export=True,
        **compiler_args,
        **input_shapes)
```

This will probably take a while.

Fortunately, you will need to do this only once because you can save your model and reload it later.


```python
model.save_pretrained("llama-2-13b-chat-neuron")
```
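
To reload the compiled model later from that local directory, you can call `from_pretrained` on the saved path without re-exporting. This is a minimal sketch assuming the directory created above:

```python
from optimum.neuron import NeuronModelForCausalLM

# Reload the already-compiled model; no export/compilation is triggered
model = NeuronModelForCausalLM.from_pretrained("llama-2-13b-chat-neuron")
```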

Even better, you can push it to the [Hugging Face hub](https://huggingface.co/models).

For that, you need to be logged in to a [HuggingFace account](https://huggingface.co/join).

In the terminal, just type the following command and paste your Hugging Face token when requested:

```shell
huggingface-cli login
```

By default, the model will be uploaded to your account (organization equal to your user name).

Feel free to edit the code below if you want to upload the model to a specific [Hugging Face organization](https://huggingface.co/docs/hub/organizations).


```python
from huggingface_hub import whoami

org = whoami()['name']

repo_id = f"{org}/llama-2-13b-chat-neuron"

model.push_to_hub("llama-2-13b-chat-neuron", repository_id=repo_id)
```

### A few more words about export parameters

The minimum memory required to load a model can be computed with:

```
   memory = bytes per parameter * number of parameters
```

The **Llama 2 13B** model uses *float16* weights (stored on 2 bytes) and has 13 billion parameters, which means it requires at least 2 * 13B or ~26GB of memory to store its weights.

Each NeuronCore has 16GB of memory which means that a 26GB model cannot fit on a single NeuronCore.

In reality, the total space required is much greater than just the number of parameters due to caching attention layer projections (KV caching).
This caching mechanism grows memory allocations linearly with sequence length and batch size.

Here we set the *batch_size* to 1, meaning that we can only process one input prompt in parallel. We set the *sequence_length* to 2048, which corresponds to half the model maximum capacity (4096).

The formula to evaluate the size of the KV cache is more involved as it also depends on parameters related to the model architecture, such as the width of the embeddings and the number of decoder blocks.

The bottom line is that to fit very large language models, tensor parallelism is used to split weights, data, and compute across multiple NeuronCores, keeping in mind that the memory on each core cannot exceed 16GB.
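
To make this reasoning concrete, here is a back-of-the-envelope sketch of the memory budget for the configuration used in this guide. The architecture constants (40 decoder layers, a hidden size of 5120) are assumptions taken from the published Llama 2 13B configuration, and the estimate ignores activations and runtime overhead:

```python
import math

# Back-of-the-envelope memory estimate for Llama 2 13B in float16 (illustrative only).
# Architecture constants are assumptions from the published model configuration.
num_parameters = 13e9
bytes_per_param = 2        # float16
num_layers = 40            # decoder blocks
hidden_size = 5120         # embedding width
batch_size = 1
sequence_length = 2048
core_memory_gb = 16        # memory available on each NeuronCore

# Weights
weights_gb = num_parameters * bytes_per_param / 1e9

# KV cache: keys and values are stored for every layer and every token
kv_cache_gb = 2 * num_layers * hidden_size * bytes_per_param * batch_size * sequence_length / 1e9

total_gb = weights_gb + kv_cache_gb
min_cores = math.ceil(total_gb / core_memory_gb)

print(f"weights: {weights_gb:.1f} GB, KV cache: {kv_cache_gb:.1f} GB")
print(f"at least {min_cores} NeuronCores required (before activations and overhead)")
```

This shows why at least two cores are needed for this model, and why the guide uses many more for performance.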

Note that increasing the number of cores beyond the minimum requirement almost always results in a faster model: increasing the tensor parallelism degree increases the available memory bandwidth, which improves model performance.

To optimize performance it's recommended to use all cores available on the instance.

In this guide we use all 24 cores of the *inf2.48xlarge*, but this should be changed to 12 if you are
using an *inf2.24xlarge* instance.

## 2. Generate text using Llama 2 on AWS Inferentia2

Once your model has been exported, you can generate text using the transformers library, as described [in detail in this post](https://huggingface.co/blog/how-to-generate).

If, as suggested, you skipped the first section, don't worry: we will use a precompiled model already present on the hub instead.


```python
from optimum.neuron import NeuronModelForCausalLM

try:
    model
except NameError:
    # Edit this to use another base model
    model = NeuronModelForCausalLM.from_pretrained('aws-neuron/Llama-2-13b-chat-hf-neuron-latency')
```

We will need a *Llama 2* tokenizer to convert the prompt strings to text tokens.


```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-13b-chat-hf")
```

The following generation strategies are supported:

- greedy search,
- multinomial sampling with top-k and top-p (with temperature).

Most logits pre-processing/filters (such as repetition penalty) are supported.


```python
inputs = tokenizer("What is deep-learning ?", return_tensors="pt")
outputs = model.generate(**inputs,
                         max_new_tokens=128,
                         do_sample=True,
                         temperature=0.9,
                         top_k=50,
                         top_p=0.9)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
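
For reference, greedy search, the other supported strategy, only requires disabling sampling. This is a minimal sketch reusing the model and tokenizer defined above:

```python
inputs = tokenizer("What is deep-learning ?", return_tensors="pt")
# Greedy search: always pick the most likely next token (deterministic output)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```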

## 3. Create a chat application using Llama on AWS Inferentia2

We specifically selected a **Llama 2** chat variant to illustrate the excellent behaviour of the exported model when the length of the encoding context grows.

The model expects the prompts to be formatted following a specific template corresponding to the interactions between a *user* role and an *assistant* role.

Each chat model has its own convention for encoding such contents, and we will not go into too much detail in this guide, because we will directly use the [Hugging Face chat templates](https://huggingface.co/blog/chat-templates) corresponding to our model.

The utility function below converts a list of exchanges between the user and the model into a well-formatted chat prompt.


```python
def format_chat_prompt(message, history, max_tokens):
    """ Convert a history of messages to a chat prompt


    Args:
        message(str): the new user message.
        history (List[str]): the list of user messages and assistant responses.
        max_tokens (int): the maximum number of input tokens accepted by the model.

    Returns:
        a `str` prompt.
    """
    chat = []
    # Convert all messages in history to chat interactions
    for interaction in history:
        chat.append({"role": "user", "content" : interaction[0]})
        chat.append({"role": "assistant", "content" : interaction[1]})
    # Add the new message
    chat.append({"role": "user", "content" : message})
    # Generate the prompt, verifying that we don't go beyond the maximum number of tokens
    for i in range(0, len(chat), 2):
        # Generate candidate prompt with the last n-i entries
        prompt = tokenizer.apply_chat_template(chat[i:], tokenize=False)
        # Tokenize to check if we're over the limit
        tokens = tokenizer(prompt)
        if len(tokens.input_ids) <= max_tokens:
            # We're good, stop here
            return prompt
    # We shall never reach this line
    raise SystemError
```

We are now equipped to build a simplistic chat application.

We simply store the interactions between the user and the assistant in a list that we use to generate
the input prompt.


```python
history = []
max_tokens = 1024

def chat(message, history, max_tokens):
    prompt = format_chat_prompt(message, history, max_tokens)
    # Uncomment the line below to see what the formatted prompt looks like
    #print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs,
                             max_length=2048,
                             do_sample=True,
                             temperature=0.9,
                             top_k=50,
                             repetition_penalty=1.2)
    # Do not include the input tokens
    outputs = outputs[0, inputs.input_ids.size(-1):]
    response = tokenizer.decode(outputs, skip_special_tokens=True)
    history.append([message, response])
    return response
```

To test the chat application, you can for instance use the following sequence of prompts:

```python
print(chat("My favorite color is blue. My favorite fruit is strawberry.", history, max_tokens))
print(chat("Name a fruit that is on my favorite colour.", history, max_tokens))
print(chat("What is the colour of my favorite fruit ?", history, max_tokens))
```

<Warning>

While very powerful, large language models can sometimes *hallucinate*. We call *hallucinations* generated content that is irrelevant or made-up, but presented by the model as if it were accurate. This is a flaw of LLMs in general and not a side effect of running them on Trainium / Inferentia.

</Warning>

### Deploy Llama 3.3 70B on AWS Inferentia2
https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/deploy-llama-3-3-70b.md

# Deploy Llama 3.3 70B on AWS Inferentia2

In this tutorial you will learn how to deploy the [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) model on AWS Inferentia2 with Hugging Face Optimum on Amazon SageMaker. We are going to use the Hugging Face TGI Neuron Container, a purpose-built Inference Container to easily deploy LLMs on AWS Inferentia2, powered by [Text Generation Inference](https://huggingface.co/docs/text-generation-inference/index) and [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index).


We will cover how to:
1. [Setup development environment](#1-setup-development-environment)
2. [Retrieve the latest Hugging Face TGI Neuron DLC](#2-retrieve-the-latest-hugging-face-tgi-neuron-dlc)
3. [Deploy Llama 3.3 70B to Inferentia2](#3-deploy-llama-33-70b-to-inferentia2)
4. [Clean up](#4-clean-up)

Let's get started! 🚀

[AWS Inferentia (Inf2)](https://aws.amazon.com/ec2/instance-types/inf2/) instances are EC2 instances purpose-built for deep learning (DL) inference workloads. Here are the different instances of the Inferentia2 family.

| instance size | accelerators | Neuron Cores | accelerator memory (GB) | vCPU | CPU memory (GiB) | on-demand price ($/h) |
| ------------- | ------------ | ------------ | ----------------------- | ---- | ---------------- | --------------------- |
| inf2.xlarge   | 1            | 2            | 32                 | 4    | 16         | 0.76                  |
| inf2.8xlarge  | 1            | 2            | 32                 | 32   | 128        | 1.97                  |
| inf2.24xlarge | 6            | 12           | 192                | 96   | 384        | 6.49                  |
| inf2.48xlarge | 12           | 24           | 384                | 192  | 768        | 12.98                 |


## 1. Setup development environment

For this tutorial, we are going to use a Notebook Instance in Amazon SageMaker with the Python 3 (ipykernel) kernel and the `sagemaker` Python SDK to deploy Llama 3.3 70B to a SageMaker inference endpoint.

Make sure you have the latest version of the SageMaker SDK installed.

```python
!pip install sagemaker --upgrade --quiet
```

Then, instantiate the sagemaker role and session.

```python
import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

## 2. Retrieve the latest Hugging Face TGI Neuron DLC

The latest Hugging Face TGI Neuron DLCs can be used to run inference on AWS Inferentia2. You can use the `ecr.image_uri` function from Optimum Neuron to retrieve the appropriate Hugging Face TGI Neuron DLC URI based on your desired `region` and `version`. Default values are deduced from your AWS credentials.

```python
from optimum.neuron.utils import ecr

llm_image = ecr.image_uri("tgi")
# print image uri
print(f"llm image uri: {llm_image}")
```

## 3. Deploy Llama 3.3 70B to Inferentia2

At the time of writing, [AWS Inferentia2 does not support dynamic shapes for inference](https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/general/arch/neuron-features/dynamic-shapes.html#neuron-dynamic-shapes), which means that we need to specify our sequence length and batch size ahead of time.
To make it easier for customers to utilize the full power of Inferentia2, we created a [neuron model cache](https://huggingface.co/docs/optimum-neuron/guides/cache_system), which contains pre-compiled configurations for the most popular LLMs, including Llama 3.3 70B. 

This means we don't need to compile the model ourselves, but we can use the pre-compiled model from the cache. You can find compiled/cached configurations on the [Hugging Face Hub](https://huggingface.co/aws-neuron/optimum-neuron-cache/tree/main/inference-cache-config). If your desired configuration is not yet cached, you can compile it yourself using the [Optimum CLI](https://huggingface.co/docs/optimum-neuron/guides/export_model) or open a request at the [Cache repository](https://huggingface.co/aws-neuron/optimum-neuron-cache/discussions).
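
If you want to verify that your desired configuration is already cached before deploying, you can query the cache with the `optimum-cli neuron cache lookup` command. This assumes `optimum-neuron` is installed locally and that you are logged in to the Hugging Face Hub with access to the gated model:

```bash
optimum-cli neuron cache lookup meta-llama/Llama-3.3-70B-Instruct
```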

**Deploying Llama 3.3 70B to a SageMaker Endpoint**  

Before deploying the model to Amazon SageMaker, we must define the TGI Neuron endpoint configuration. We need to make sure the following additional parameters are defined: 

- `HF_NUM_CORES`: Number of Neuron Cores used for the compilation.
- `HF_BATCH_SIZE`: The batch size that was used to compile the model.
- `HF_SEQUENCE_LENGTH`: The sequence length that was used to compile the model.
- `HF_AUTO_CAST_TYPE`: The auto cast type that was used to compile the model.

We still need to define traditional TGI parameters with:

- `HF_MODEL_ID`: The Hugging Face model ID.
- `HF_TOKEN`: The Hugging Face API token to access gated models.
- `MAX_BATCH_SIZE`: The maximum batch size that the model can handle, equal to the batch size used for compilation.
- `MAX_INPUT_TOKENS`: The maximum input length that the model can handle.
- `MAX_TOTAL_TOKENS`: The maximum total tokens the model can generate, equal to the sequence length used for compilation.

Optionally, you can configure the endpoint to support chat templates:
- `MESSAGES_API_ENABLED`: Enable the Messages API.

**Select the right instance type**

Llama 3.3 70B is a large model and requires a lot of memory. We are going to use the `inf2.48xlarge` instance type, which has 192 vCPUs and 384 GB of accelerator memory. The `inf2.48xlarge` instance comes with 12 Inferentia2 accelerators that include 24 Neuron Cores. You can find the cached configurations for Llama 3.3 70B [here](https://huggingface.co/aws-neuron/optimum-neuron-cache/blob/main/inference-cache-config/llama3-70b.json#L16). In our case we will use a batch size of 4 and a sequence length of 4096.


Before we can deploy Llama 3.3 70B to Inferentia2, we need to make sure we have the necessary permissions to access the model. You can request access to the model [here](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and create a User access token following this [guide](https://huggingface.co/docs/hub/en/security-tokens).


After that we can create our endpoint configuration and deploy the model to Amazon SageMaker. We will deploy the endpoint with the Messages API enabled, so that it is fully compatible with the OpenAI Chat Completion API.

```python
from sagemaker.huggingface import HuggingFaceModel

# sagemaker config
instance_type = "ml.inf2.48xlarge"
health_check_timeout = 3600  # additional time to load the model
volume_size = 512  # size in GB of the EBS volume

# Define Model and Endpoint configuration parameter
config = {
    "HF_MODEL_ID": "meta-llama/Meta-Llama-3-70B-Instruct",
    "HF_NUM_CORES": "24",  # number of neuron cores
    "HF_AUTO_CAST_TYPE": "bf16",  # dtype of the model
    "MAX_BATCH_SIZE": "4",  # max batch size for the model
    "MAX_INPUT_TOKENS": "4000",  # max length of input text
    "MAX_TOTAL_TOKENS": "4096",  # max length of generated text
    "MESSAGES_API_ENABLED": "true",  # Enable the messages API
    "HF_TOKEN": "<REPLACE WITH YOUR TOKEN>",
}

assert config["HF_TOKEN"] != "<REPLACE WITH YOUR TOKEN>", (
    "Please replace '<REPLACE WITH YOUR TOKEN>' with your Hugging Face Hub API token"
)


# create HuggingFaceModel with the image uri
llm_model = HuggingFaceModel(role=role, image_uri=llm_image, env=config)
```

After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the `deploy` method. We will deploy the model with the `ml.inf2.48xlarge` instance type. TGI will automatically distribute and shard the model across all Inferentia devices.

```python
# deactivate warning since model is compiled
llm_model._is_compiled_model = True

llm = llm_model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    container_startup_health_check_timeout=health_check_timeout,
    volume_size=volume_size,
)
```

SageMaker will now create our endpoint and deploy the model to it. It takes around 30 minutes for deployment.

After our endpoint is deployed we can run inference on it. We will use the `predict` method from the `predictor` to run inference on our endpoint. 

The endpoint supports the Messages API, which is fully compatible with the OpenAI Chat Completion API. The Messages API allows us to interact with the model in a conversational way. We can define the role of the message and the content. The role can be either `system`, `assistant`, or `user`. The `system` role is used to provide context to the model and the `user` role is used to ask questions or provide input to the model.

Generation parameters can be passed in the payload, as defined in the `parameters` dictionary below. Check out the chat completion [documentation](https://platform.openai.com/docs/api-reference/chat/create) to find supported parameters.

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is deep learning?" }
  ]
}
```

```python
# Prompt to generate
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is deep learning in one sentence?"},
]

# Generation arguments https://platform.openai.com/docs/api-reference/chat/create
parameters = {
    "max_tokens": 100,
}
```

Okay, let's test it.

```python
chat = llm.predict({"messages": messages, **parameters})

print(chat["choices"][0]["message"]["content"].strip())
```
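
If you prefer to call the endpoint without the SageMaker predictor object, the same Messages API payload can be sent with the low-level SageMaker runtime client. This is a sketch, assuming the endpoint created above (`llm.endpoint_name` holds its name):

```python
import json

import boto3

smr_client = boto3.client("sagemaker-runtime")

# Same Messages API payload as used with the predictor above
payload = {"messages": messages, **parameters}

response = smr_client.invoke_endpoint(
    EndpointName=llm.endpoint_name,
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())
print(result["choices"][0]["message"]["content"].strip())
```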

## 4. Clean up

To clean up, we can delete the model and endpoint.

```python
llm.delete_model()
llm.delete_endpoint()
```

### Sentence Transformers on AWS Inferentia with Optimum Neuron
https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/sentence_transformers.md

# Sentence Transformers on AWS Inferentia with Optimum Neuron

## Text Models

_There is a notebook version of that tutorial [here](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/sentence-transformers/getting-started.ipynb)._

This guide explains how to compile, load, and use [Sentence Transformers (SBERT)](https://www.sbert.net/) models on AWS Inferentia2 with Optimum Neuron, enabling efficient calculation of embeddings. Sentence Transformers are powerful models for generating sentence embeddings. You can use them to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared, e.g. with cosine similarity, to find sentences with a similar meaning. This can be useful for semantic textual similarity, semantic search, or paraphrase mining.


### Convert Sentence Transformers model to AWS Inferentia2

First, you need to convert your Sentence Transformers model to a format compatible with AWS Inferentia2. You can compile Sentence Transformers models with Optimum Neuron using the `optimum-cli` or the `NeuronModelForSentenceTransformers` class. Below you will find an example for both approaches. Make sure `sentence-transformers` is installed; it is only needed for exporting the model.

```bash
pip install sentence-transformers
```

First, we will use the `NeuronModelForSentenceTransformers` class, which can be used to convert any Sentence Transformers model to a format compatible with AWS Inferentia2 or to load already converted models. When exporting models with `NeuronModelForSentenceTransformers`, you need to set `export=True` and define the input shape and batch size. The input shape is defined by `sequence_length` and the batch size by `batch_size`.

```python
from optimum.neuron import NeuronModelForSentenceTransformers

# Sentence Transformers model from HuggingFace
model_id = "BAAI/bge-small-en-v1.5"
input_shapes = {"batch_size": 1, "sequence_length": 384}  # mandatory shapes

# Load Transformers model and export it to AWS Inferentia2
model = NeuronModelForSentenceTransformers.from_pretrained(model_id, export=True, **input_shapes)

# Save model to disk
model.save_pretrained("bge_emb_inf2/")
```

Alternatively, you can use the `optimum-cli` to convert the model. As with `NeuronModelForSentenceTransformers`, we need to define our input shape and batch size. The input shape is defined by `sequence_length` and the batch size by `batch_size`. The `optimum-cli` will automatically convert the model to a format compatible with AWS Inferentia2 and save it to the specified output directory.

```bash
optimum-cli export neuron -m BAAI/bge-small-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb_inf2/
```

### Load compiled Sentence Transformers model and run inference

Once we have a compiled Sentence Transformers model, which we either exported ourselves or which is available on the Hugging Face Hub, we can load it and run inference. To load the model we can use the `NeuronModelForSentenceTransformers` class, which is an abstraction layer for the `SentenceTransformer` class. The `NeuronModelForSentenceTransformers` class will automatically pad the input to the specified `sequence_length` and run inference on AWS Inferentia2.

```python
from optimum.neuron import NeuronModelForSentenceTransformers
from transformers import AutoTokenizer

model_id_or_path = "bge_emb_inf2/"
tokenizer_id = "BAAI/bge-small-en-v1.5"

# Load model and tokenizer
model = NeuronModelForSentenceTransformers.from_pretrained(model_id_or_path)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)

# Run inference
prompt = "I like to eat apples"
encoded_input = tokenizer(prompt, return_tensors='pt')
outputs = model(**encoded_input)

token_embeddings = outputs.token_embeddings
sentence_embedding = outputs.sentence_embedding

print(f"token embeddings: {token_embeddings.shape}") # torch.Size([1, 7, 384])
print(f"sentence_embedding: {sentence_embedding.shape}") # torch.Size([1, 384])
```
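
As mentioned above, these embeddings can be compared with cosine similarity. The short sketch below reuses the compiled model and tokenizer to encode two sentences one at a time (the model was exported with `batch_size=1`) and scores them with `sentence_transformers.util.cos_sim`; the example sentences are arbitrary:

```python
from sentence_transformers import util

sentences = ["I like to eat apples", "Apples are my favorite fruit"]

# Encode each sentence separately since the model was compiled with batch_size=1
embeddings = []
for sentence in sentences:
    encoded = tokenizer(sentence, return_tensors="pt")
    embeddings.append(model(**encoded).sentence_embedding)

# Cosine similarity between the two sentence embeddings
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")
```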

### Production Usage

For deploying these models in a production environment, refer to the [Amazon SageMaker Blog](https://www.philschmid.de/inferentia2-embeddings).


## CLIP


### Compile CLIP for AWS Inferentia2

You can compile CLIP models with Optimum Neuron either by using the `optimum-cli` or the `NeuronModelForSentenceTransformers` class. Choose whichever approach you prefer:

* With the Optimum CLI

```bash
optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_length 64 --text_batch_size 3 --image_batch_size 1 --num_channels 3 --height 224 --width 224 --task feature-extraction --subfolder 0_CLIPModel clip_emb/
```

* With the `NeuronModelForSentenceTransformers` class

```python
from optimum.neuron import NeuronModelForSentenceTransformers

model_id = "sentence-transformers/clip-ViT-B-32"

# configs for compiling model
input_shapes = {
    "num_channels": 3,
    "height": 224,
    "width": 224,
    "text_batch_size": 3,
    "image_batch_size": 1,
    "sequence_length": 64,
}

emb_model = NeuronModelForSentenceTransformers.from_pretrained(
    model_id, subfolder="0_CLIPModel", export=True, library_name="sentence_transformers", dynamic_batch_size=False, **input_shapes
)

# Save locally or upload to the HuggingFace Hub
save_directory = "clip_emb/"
emb_model.save_pretrained(save_directory)
```

### Load compiled Sentence Transformers model and run inference

```python
from PIL import Image
from sentence_transformers import util
from transformers import CLIPProcessor

from optimum.neuron import NeuronModelForSentenceTransformers

save_directory = "clip_emb"
emb_model = NeuronModelForSentenceTransformers.from_pretrained(save_directory)

processor = CLIPProcessor.from_pretrained(save_directory)
inputs = processor(
    text=["Two dogs in the snow", 'A cat on a table', 'A picture of London at night'], images=Image.open("two_dogs_in_snow.jpg"), return_tensors="pt", padding=True
)

outputs = emb_model(**inputs)


# Compute cosine similarities
cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds)
print(cos_scores)

# tensor([[0.3072, 0.1016, 0.1095]])
```

<Tip>

**Caveat**

Since compiled models with dynamic batching enabled only accept input tensors with the same batch size, we cannot set `dynamic_batch_size=True` if the input texts and images have different batch sizes. And since the `NeuronModelForSentenceTransformers` class pads the inputs to the batch sizes (`text_batch_size` and `image_batch_size`) used during compilation, you can use relatively larger batch sizes during compilation for flexibility, at the cost of extra compute.

For example, if you want to encode 3, 4 or 5 texts and 1 image, you could set `text_batch_size = 5 = max(3, 4, 5)` and `image_batch_size = 1` during compilation.

</Tip>

### Notebooks
https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/notebooks.md

# Notebooks

We prepared some notebooks for you, so that you can run the tutorials from the documentation directly.

| Notebook                                                                                                                                                                                | Description                                                                                                                                                                       |        Studio Lab                                                                                                                                                                                                       |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [Create your own chatbot with llama-2-13B on AWS Inferentia](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/text-generation/llama2-13b-chatbot.ipynb)                | Show how to run LLama-2 13B chat model on AWS inferentia 2.                                                                                                                       | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-neuron/blob/main/notebooks/text-generation/llama2-13b-chatbot.ipynb)           |
| [How to generate images with Stable Diffusion](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/stable-diffusion/stable-diffusion-txt2img.ipynb)                       | Show how to use stable-diffusion v2.1 model to generate images from prompts on Inferentia 2.                                                                                      | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-neuron/blob/main/notebooks/stable-diffusion/stable-diffusion-txt2img.ipynb)    |
| [How to generate images with Stable Diffusion XL](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/stable-diffusion/stable-diffusion-xl-txt2img.ipynb)                 | Show how to use stable-diffusion XL model to generate images from prompts on Inferentia 2.                                                                                        | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-neuron/blob/main/notebooks/stable-diffusion/stable-diffusion-xl-txt2img.ipynb) |
| [Compute text embeddings with Sentence Transformers on Inferentia](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/sentence-transformers/getting-started.ipynb)       | Show how to use Sentence Transformers to compute sentence / text embeddings on Inferentia 2.                                                                                      | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-neuron/blob/main/notebooks/sentence-transformers/getting-started.ipynb)        |
| [How to compile (if needed) and generate text with CodeLlama 7B](https://github.com/huggingface/optimum-neuron/blob/main/notebooks/text-generation/CodeLlama-7B-Compilation.ipynb)      | How to use CodeLlama 7B to generate code. Also walks through compilation.                                                                                                         | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-neuron/blob/main/notebooks/text-generation/CodeLlama-7B-Compilation.ipynb)     |

### Deploy Mixtral 8x7B on AWS Inferentia2
https://huggingface.co/docs/optimum.neuron/v0.4.0/inference_tutorials/deploy-mixtral-8x7b.md

# Deploy Mixtral 8x7B on AWS Inferentia2

Mixtral 8x7B is an open-source LLM from Mistral AI. It is a Sparse Mixture of Experts and has a similar architecture to Mistral 7B, but comes with a twist: it’s actually 8 “expert” models in one. If you want to learn more about MoEs check out [Mixture of Experts Explained](https://huggingface.co/blog/moe).

In this tutorial you will learn how to deploy [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model on AWS Inferentia2 with Hugging Face Optimum Neuron on Amazon SageMaker. We are going to use the Hugging Face TGI Neuron Container, a purpose-built Inference Container to easily deploy LLMs on AWS Inferentia2 powered by [Text Generation Inference](https://huggingface.co/docs/text-generation-inference/index) and [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/index).


We will cover how to:
1. [Setup a development environment](#1-setup-development-environment)
2. [Retrieve the latest Hugging Face TGI Neuron DLC](#2-retrieve-the-latest-hugging-face-tgi-neuron-dlc)
3. [Deploy Mixtral 8x7B to Inferentia2](#3-deploy-mixtral-8x7b-to-inferentia2)
4. [Clean up](#4-clean-up)

Let's get started! 🚀

[AWS Inferentia (Inf2)](https://aws.amazon.com/ec2/instance-types/inf2/) instances are EC2 instances purpose-built for deep learning (DL) inference workloads. Here are the different instances of the Inferentia2 family.

| instance size | accelerators | Neuron Cores | accelerator memory (GB) | vCPU | CPU memory (GiB) | on-demand price ($/h) |
| ------------- | ------------ | ------------ | ----------------------- | ---- | ---------------- | --------------------- |
| inf2.xlarge   | 1            | 2            | 32                 | 4    | 16         | 0.76                  |
| inf2.8xlarge  | 1            | 2            | 32                 | 32   | 128        | 1.97                  |
| inf2.24xlarge | 6            | 12           | 192                | 96   | 384        | 6.49                  |
| inf2.48xlarge | 12           | 24           | 384                | 192  | 768        | 12.98                 |


## 1. Setup development environment

For this tutorial, we are going to use a Notebook Instance in Amazon SageMaker with the Python 3 (ipykernel) kernel and the `sagemaker` Python SDK to deploy Mixtral 8x7B to a SageMaker inference endpoint.

Make sure you have the latest version of the SageMaker SDK installed.

```python
!pip install sagemaker --upgrade --quiet
```

Then, instantiate the sagemaker role and session.

```python
import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

## 2. Retrieve the latest Hugging Face TGI Neuron DLC

The latest Hugging Face TGI Neuron DLCs can be used to run inference on AWS Inferentia2. You can use the `ecr.image_uri` function from Optimum Neuron to retrieve the appropriate Hugging Face TGI Neuron DLC URI based on your desired `region` and `version`. Default values are deduced from your AWS credentials.

```python
from optimum.neuron.utils import ecr

llm_image = ecr.image_uri("tgi")
# print image uri
print(f"llm image uri: {llm_image}")
```

## 3. Deploy Mixtral 8x7B to Inferentia2

At the time of writing, [AWS Inferentia2 does not support dynamic shapes for inference](https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/general/arch/neuron-features/dynamic-shapes.html#neuron-dynamic-shapes), which means that we need to specify our sequence length and batch size ahead of time.
To make it easier for customers to utilize the full power of Inferentia2, we created a [neuron model cache](https://huggingface.co/docs/optimum-neuron/guides/cache_system), which contains pre-compiled configurations for the most popular LLMs, including Mixtral 8x7B. 

This means we don't need to compile the model ourselves, but we can use the pre-compiled model from the cache. You can find compiled/cached configurations on the [Hugging Face Hub](https://huggingface.co/aws-neuron/optimum-neuron-cache/tree/main/inference-cache-config). If your desired configuration is not yet cached, you can compile it yourself using the [Optimum CLI](https://huggingface.co/docs/optimum-neuron/guides/export_model) or open a request at the [Cache repository](https://huggingface.co/aws-neuron/optimum-neuron-cache/discussions).

Let's check the different configurations that are in the cache. For that, you first need to log in to the Hugging Face Hub, using a [User Access Token](https://huggingface.co/docs/hub/en/security-tokens) with read access.

Make sure you have the necessary permissions to access the model. You can request access to the model [here](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

```python
from huggingface_hub import notebook_login

notebook_login()
```

Then, we need to install the latest version of Optimum Neuron.

```python
!pip install optimum-neuron --upgrade --quiet
```

Finally, we can query the cache and retrieve the existing set of configurations for which we maintain a compiled version of the model.

```python
HF_MODEL_ID = "mistralai/Mixtral-8x7B-Instruct-v0.1"

!optimum-cli neuron cache lookup $HF_MODEL_ID
```

You should retrieve two entries in the cache:
```code
*** 2 entrie(s) found in cache for mistralai/Mixtral-8x7B-Instruct-v0.1 for inference.***

auto_cast_type: bf16
batch_size: 1
checkpoint_id: mistralai/Mixtral-8x7B-Instruct-v0.1
checkpoint_revision: 41bd4c9e7e4fb318ca40e721131d4933966c2cc1
compiler_type: neuronx-cc
compiler_version: 2.16.372.0+4a9b2326
num_cores: 24
sequence_length: 4096
task: text-generation

auto_cast_type: bf16
batch_size: 4
checkpoint_id: mistralai/Mixtral-8x7B-Instruct-v0.1
checkpoint_revision: 41bd4c9e7e4fb318ca40e721131d4933966c2cc1
compiler_type: neuronx-cc
compiler_version: 2.16.372.0+4a9b2326
num_cores: 24
sequence_length: 4096
task: text-generation
```

**Deploying Mixtral 8x7B to a SageMaker Endpoint**  

Before deploying the model to Amazon SageMaker, we must define the TGI Neuron endpoint configuration. We need to make sure the following additional parameters are defined: 

- `HF_NUM_CORES`: Number of Neuron Cores used for the compilation.
- `HF_BATCH_SIZE`: The batch size that was used to compile the model.
- `HF_SEQUENCE_LENGTH`: The sequence length that was used to compile the model.
- `HF_AUTO_CAST_TYPE`: The auto cast type that was used to compile the model.

We still need to define traditional TGI parameters with:

- `HF_MODEL_ID`: The Hugging Face model ID.
- `HF_TOKEN`: The Hugging Face API token to access gated models.
- `MAX_BATCH_SIZE`: The maximum batch size that the model can handle, equal to the batch size used for compilation.
- `MAX_INPUT_TOKENS`: The maximum input length that the model can handle.
- `MAX_TOTAL_TOKENS`: The maximum total tokens the model can generate, equal to the sequence length used for compilation.

Optionally, you can configure the endpoint to support chat templates:
- `MESSAGES_API_ENABLED`: Enable the Messages API.

**Select the right instance type**

Mixtral 8x7B is a large model and requires a lot of memory. We are going to use the `inf2.48xlarge` instance type, which has 192 vCPUs and 384 GB of accelerator memory. The `inf2.48xlarge` instance comes with 12 Inferentia2 accelerators that include 24 Neuron Cores. In our case we will use a batch size of 4 and a sequence length of 4096. 

After that we can create our endpoint configuration and deploy the model to Amazon SageMaker. We will deploy the endpoint with the Messages API enabled, so that it is fully compatible with the OpenAI Chat Completion API.

```python
from sagemaker.huggingface import HuggingFaceModel

# sagemaker config
instance_type = "ml.inf2.48xlarge"
health_check_timeout = 2400  # additional time to load the model
volume_size = 512  # size in GB of the EBS volume

# Define Model and Endpoint configuration parameter
config = {
    "HF_MODEL_ID": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "HF_NUM_CORES": "24",  # number of neuron cores
    "HF_AUTO_CAST_TYPE": "bf16",  # dtype of the model
    "MAX_BATCH_SIZE": "4",  # max batch size for the model
    "MAX_INPUT_TOKENS": "4000",  # max length of input text
    "MAX_TOTAL_TOKENS": "4096",  # max length of generated text
    "MESSAGES_API_ENABLED": "true",  # Enable the messages API
    "HF_TOKEN": "<REPLACE WITH YOUR TOKEN>",
}

assert config["HF_TOKEN"] != "<REPLACE WITH YOUR TOKEN>", (
    "Please replace '<REPLACE WITH YOUR TOKEN>' with your Hugging Face Hub API token"
)


# create HuggingFaceModel with the image uri
llm_model = HuggingFaceModel(role=role, image_uri=llm_image, env=config)
```

After we have created the `HuggingFaceModel` we can deploy it to Amazon SageMaker using the `deploy` method. We will deploy the model with the `ml.inf2.48xlarge` instance type. TGI will automatically distribute and shard the model across all Inferentia devices.

```python
# Deploy model to an endpoint
# https://sagemaker.readthedocs.io/en/stable/api/inference/model.html#sagemaker.model.Model.deploy
llm_model._is_compiled_model = True

llm = llm_model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    container_startup_health_check_timeout=health_check_timeout,
    volume_size=volume_size,
)
```

SageMaker will now create our endpoint and deploy the model to it. It takes around 15 minutes for deployment.

After our endpoint is deployed we can run inference on it. We will use the `predict` method from the `predictor` to run inference on our endpoint. 

The endpoint supports the Messages API, which is fully compatible with the OpenAI Chat Completion API. The Messages API allows us to interact with the model in a conversational way. We can define the role of the message and the content. The role can be either `system`, `assistant`, or `user`. The `system` role is used to provide context to the model and the `user` role is used to ask questions or provide input to the model.

Generation parameters can be passed in the payload, as defined in the `parameters` dictionary below. Check out the chat completion [documentation](https://platform.openai.com/docs/api-reference/chat/create) to find supported parameters.

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is deep learning?" }
  ]
}
```

```python
# Prompt to generate
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is deep learning in one sentence?"},
]

# Generation arguments https://platform.openai.com/docs/api-reference/chat/create
parameters = {
    "max_tokens": 100,
}
```

Okay, let's test it.

```python
chat = llm.predict({"messages": messages, **parameters})

print(chat["choices"][0]["message"]["content"].strip())
```

## 4. Clean up

To clean up, we can delete the model and endpoint.

```python
llm.delete_model()
llm.delete_endpoint()
```
