# Optimum.Habana

## Docs

- [Optimum for Intel® Gaudi® AI Accelerator](https://huggingface.co/docs/optimum.habana/v1.19.0/index.md)
- [Quickstart](https://huggingface.co/docs/optimum.habana/v1.19.0/quickstart.md)
- [Installation](https://huggingface.co/docs/optimum.habana/v1.19.0/installation.md)
- [DistributedRunner](https://huggingface.co/docs/optimum.habana/v1.19.0/package_reference/distributed_runner.md)
- [GaudiTrainer](https://huggingface.co/docs/optimum.habana/v1.19.0/package_reference/trainer.md)
- [GaudiConfig](https://huggingface.co/docs/optimum.habana/v1.19.0/package_reference/gaudi_config.md)
- [Accelerating Training](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/accelerate_training.md)
- [Overview](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/overview.md)
- [Comparing HPU-Optimized `safe_softmax` with Native PyTorch `safe_softmax`](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/safe_softmax.md)
- [Adapt a Transformers/Diffusers script to Intel Gaudi](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/script_adaptation.md)
- [Accelerating Inference](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/accelerate_inference.md)
- [Quantization](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/quantization.md)
- [Pretraining Transformers with Optimum for Intel Gaudi](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/pretraining.md)
- [Multi-node Training](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/multi_node_training.md)
- [DeepSpeed for HPUs](https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/deepspeed.md)
- [TGI on Gaudi](https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/tgi.md)
- [Single-HPU Training](https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/single_hpu.md)
- [Run Inference](https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/inference.md)
- [Stable Diffusion](https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/stable_diffusion.md)
- [Overview](https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/overview.md)
- [Distributed training with Optimum for Intel Gaudi](https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/distributed.md)

### Optimum for Intel® Gaudi® AI Accelerator
https://huggingface.co/docs/optimum.habana/v1.19.0/index.md

# Optimum for Intel® Gaudi® AI Accelerator

Optimum for Intel Gaudi is the interface between Hugging Face libraries (Transformers, Diffusers, Accelerate, etc.) and [Intel Gaudi AI Accelerators (HPUs)](https://docs.habana.ai/en/latest/index.html).
It provides a set of tools that enable easy model loading, training, and inference in single- and multi-HPU settings for the various downstream tasks shown in the tables below.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/overview"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
      <p class="text-gray-700">Learn the basics and become familiar with training transformers on HPUs with 🤗 Optimum. Start here if you are using 🤗 Optimum for Intel Gaudi for the first time!</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/overview"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum for Intel Gaudi to solve real-world problems.</p>
    </a>
  </div>
</div>

The Intel Gaudi AI accelerator family currently includes three product generations:
[Intel Gaudi 1](https://habana.ai/products/gaudi/),
[Intel Gaudi 2](https://habana.ai/products/gaudi2/), and
[Intel Gaudi 3](https://habana.ai/products/gaudi3/).
Each server is equipped with 8 devices, known as Habana Processing Units (HPUs), each providing 128 GB of memory on Gaudi 3,
96 GB on Gaudi 2, and 32 GB on first-gen Gaudi. For more details on the underlying hardware architecture, check out the
[Gaudi Architecture Overview](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html).
The Optimum for Intel Gaudi library is fully compatible with all three generations of Gaudi accelerators.

For in-depth examples of running workloads on Gaudi, explore the following blog posts:
- [Benchmarking Intel Gaudi 2 with NVIDIA A100 GPUs](https://huggingface.co/blog/habana-gaudi-2-benchmark)
- [Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2](https://huggingface.co/blog/bridgetower)

The following model architectures, tasks and device distributions have been validated for Optimum for Intel Gaudi:

<Tip>

In the tables below, ✅ means single-card, multi-card and DeepSpeed have all been validated.

</Tip>

- Transformers:

| Architecture                                          | Training                      | Inference                                       | Tasks                                                                                                                                                                                                                                                           |
|:------------------------------------------------------|:-----------------------------:|:-----------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| BERT                                                  | ✅                            | ✅                                              | <ul><li>[text classification](/examples/text-classification)</li><li>[question answering](/examples/question-answering)</li><li>[language modeling](/examples/language-modeling)</li><li>[text feature extraction](/examples/text-feature-extraction)</li></ul> |
| RoBERTa                                               | ✅                            | ✅                                              | <ul><li>[question answering](/examples/question-answering)</li><li>[language modeling](/examples/language-modeling)</li></ul>                                                                                                                                   |
| ALBERT                                                | ✅                            | ✅                                              | <ul><li>[question answering](/examples/question-answering)</li><li>[language modeling](/examples/language-modeling)</li></ul>                                                                                                                                   |
| DistilBERT                                            | ✅                            | ✅                                              | <ul><li>[question answering](/examples/question-answering)</li><li>[language modeling](/examples/language-modeling)</li></ul>                                                                                                                                   |
| GPT2                                                  | ✅                            | ✅                                              | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| BLOOM(Z)                                              |                               | <ul><li>DeepSpeed</li></ul>                     | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| StarCoder / StarCoder2                                | ✅                            | <ul><li>Single card</li></ul>                   | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| GPT-J                                                 | <ul><li>DeepSpeed</li></ul>   | <ul><li>Single card</li><li>DeepSpeed</li></ul> | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| GPT-Neo                                               |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| GPT-NeoX                                              | <ul><li>DeepSpeed</li></ul>   | <ul><li>DeepSpeed</li></ul>                     | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| OPT                                                   |                               | <ul><li>DeepSpeed</li></ul>                     | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Llama 2 / CodeLlama / Llama 3 / Llama Guard / Granite | ✅                            | ✅                                              | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li><li>[question answering](/examples/question-answering)</li><li>[text classification](/examples/text-classification) (Llama Guard)</li></ul>   |
| StableLM                                              |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Falcon                                                | <ul><li>LoRA</li></ul>        | ✅                                              | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| CodeGen                                               |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| MPT                                                   |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Mistral                                               |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Phi                                                   | ✅                            | <ul><li>Single card</li></ul>                   | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| Mixtral                                               |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Persimmon                                             |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Qwen2 / Qwen3                                         | <ul><li>Single card</li></ul> | <ul><li>Single card</li></ul>                   | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| Qwen2-MoE                                             |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Gemma                                                 | ✅                            | <ul><li>Single card</li></ul>                   | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| Gemma2                                                |                               | ✅                                              | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Gemma3                                                |                               | ✅                                              | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| XGLM                                                  |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Cohere                                                |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| T5 / Flan T5                                          | ✅                            | ✅                                              | <ul><li>[summarization](/examples/summarization)</li><li>[translation](/examples/translation)</li><li>[question answering](/examples/question-answering#fine-tuning-t5-on-squad20)</li></ul>                                                                    |
| BART                                                  |                               | <ul><li>Single card</li></ul>                   | <ul><li>[summarization](/examples/summarization)</li><li>[translation](/examples/translation)</li><li>[question answering](/examples/question-answering#fine-tuning-t5-on-squad20)</li></ul>                                                                    |
| ViT                                                   | ✅                            | ✅                                              | <ul><li>[image classification](/examples/image-classification)</li></ul>                                                                                                                                                                                        |
| Swin                                                  | ✅                            | ✅                                              | <ul><li>[image classification](/examples/image-classification)</li></ul>                                                                                                                                                                                        |
| Wav2Vec2                                              | ✅                            | ✅                                              | <ul><li>[audio classification](/examples/audio-classification)</li><li>[speech recognition](/examples/speech-recognition)</li></ul>                                                                                                                             |
| Whisper                                               | ✅                            | ✅                                              | <ul><li>[speech recognition](/examples/speech-recognition)</li></ul>                                                                                                                                                                                            |
| SpeechT5                                              |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text to speech](/examples/text-to-speech)</li></ul>                                                                                                                                                                                                    |
| CLIP                                                  | ✅                            | ✅                                              | <ul><li>[contrastive image-text training](/examples/contrastive-image-text)</li></ul>                                                                                                                                                                           |
| BridgeTower                                           | ✅                            | ✅                                              | <ul><li>[contrastive image-text training](/examples/contrastive-image-text)</li></ul>                                                                                                                                                                           |
| ESMFold                                               |                               | <ul><li>Single card</li></ul>                   | <ul><li>[protein folding](/examples/protein-folding)</li></ul>                                                                                                                                                                                                  |
| Blip                                                  |                               | <ul><li>Single card</li></ul>                   | <ul><li>[visual question answering](/examples/visual-question-answering)</li><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                             |
| OWLViT                                                |                               | <ul><li>Single card</li></ul>                   | <ul><li>[zero shot object detection](/examples/zero-shot-object-detection)</li></ul>                                                                                                                                                                            |
| ClipSeg                                               |                               | <ul><li>Single card</li></ul>                   | <ul><li>[object segmentation](/examples/object-segementation)</li></ul>                                                                                                                                                                                         |
| Llava / Llava-next / Llava-onevision                  |                               | <ul><li>Single card</li></ul>                   | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| idefics2                                              | <ul><li>LoRA</li></ul>        | <ul><li>Single card</li></ul>                   | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| Paligemma                                             |                               | <ul><li>Single card</li></ul>                   | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| Segment Anything Model                                |                               | <ul><li>Single card</li></ul>                   | <ul><li>[object segmentation](/examples/object-segementation)</li></ul>                                                                                                                                                                                         |
| VideoMAE                                              |                               | <ul><li>Single card</li></ul>                   | <ul><li>[video classification](/examples/video-classification)</li></ul>                                                                                                                                                                                        |
| TableTransformer                                      |                               | <ul><li>Single card</li></ul>                   | <ul><li>[table object detection](/examples/table-detection) </li></ul>                                                                                                                                                                                          |
| DETR                                                  |                               | <ul><li>Single card</li></ul>                   | <ul><li>[object detection](/examples/object-detection)</li></ul>                                                                                                                                                                                                |
| Mllama                                                | <ul><li>LoRA</li></ul>        | ✅                                              | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| MiniCPM3                                              |                               | <ul><li>Single card</li></ul>                   | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| Baichuan2                                             | <ul><li>DeepSpeed</li></ul>   | <ul><li>Single card</li></ul>                   | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| DeepSeek-V2                                           | ✅                            | ✅                                              | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| DeepSeek-V3 / Moonlight                               |                               | ✅                                              | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| ChatGLM                                               | <ul><li>DeepSpeed</li></ul>   | <ul><li>Single card</li></ul>                   | <ul><li>[language modeling](/examples/language-modeling)</li><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                         |
| Qwen2-VL                                              |                               |  <ul><li>Single card</li></ul>                  | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| Qwen2.5-VL                                            |                               |  <ul><li>Single card</li></ul>                  | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| VideoLLaVA                                            |                               | <ul><li>Single card</li></ul>                   | <ul><li>[video comprehension](/examples/video-comprehension)</li></ul>                                                                                                                                                                                          |
| GLM-4V                                                |                               |  <ul><li>Single card</li></ul>                  | <ul><li>[image to text](/examples/image-to-text)</li></ul>                                                                                                                                                                                                      |
| Arctic                                                |                               |  <ul><li>DeepSpeed</li></ul>                    | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |
| GPT-OSS                                               |                               |  <ul><li>DeepSpeed</li></ul>                    | <ul><li>[text generation](/examples/text-generation)</li></ul>                                                                                                                                                                                                  |

- Diffusers

| Architecture               | Training               | Inference                     | Tasks                                                                                                                                                                                         |
|----------------------------|:----------------------:|:-----------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Stable Diffusion           | ✅                  | ✅                         | <ul><li>[text-to-image generation](/examples/stable-diffusion)</li></ul>                                                                                                                      |
| Stable Diffusion XL        | ✅                  | ✅                         | <ul><li>[text-to-image generation](/examples/stable-diffusion)</li></ul>                                                                                                                      |
| Stable Diffusion Depth2img |                        | <ul><li>Single card</li></ul> | <ul><li>[depth-to-image generation](/examples/stable-diffusion)</li></ul>                                                                                                                     |
| Stable Diffusion 3         | ✅                  | ✅                         | <ul><li>[text-to-image generation](/examples/stable-diffusion#stable-diffusion-3-and-35-sd3)</li></ul>                                                                                        |
| LDM3D                      |                        | <ul><li>Single card</li></ul> | <ul><li>[text-to-image generation](/examples/stable-diffusion)</li></ul>                                                                                                                      |
| FLUX.1                     | <ul><li>LoRA</li></ul> | <ul><li>Single card</li></ul> | <ul><li>[text-to-image generation](/examples/stable-diffusion)</li></ul>                                                                                                                      |
| Text to Video              |                        | <ul><li>Single card</li></ul> | <ul><li>[text-to-video generation](/examples/stable-diffusion#text-to-video-generation)</li></ul>                                                                                             |
| Image to Video             |                        | <ul><li>Single card</li></ul> | <ul><li>[image-to-video generation](/examples/stable-diffusion#image-to-video-generation)</li></ul>                                                                                           |
| i2vgen-xl                  |                        | <ul><li>Single card</li></ul> | <ul><li>[image-to-video generation](/examples/stable-diffusion#I2vgen-xl)</li></ul>                                                                                                           |
| Wan                        |                        | ✅                         | <ul><li>[text-to-video generation](/examples/stable-diffusion#text-to-video-with-wan-22)</li><li>[image-to-video generation](/examples/stable-diffusion#image-to-video-with-wan-22)</li></ul> |

- PyTorch Image Models/TIMM:

| Architecture        | Training | Inference                     | Tasks                                                                     |
|---------------------|:--------:|:-----------------------------:|:--------------------------------------------------------------------------|
| FastViT             |          | <ul><li>Single card</li></ul> |  <ul><li>[image classification](/examples/image-classification)</li></ul> |

- TRL:

| Architecture     | Training | Inference | Tasks                                            |
|------------------|:--------:|:---------:|:-------------------------------------------------|
| Llama 2          | ✅       |           | <ul><li>[DPO Pipeline](/examples/trl)</li><li>[PPO Pipeline](/examples/trl)</li></ul>  |
| Stable Diffusion | ✅       |           | <ul><li>[DDPO Pipeline](/examples/trl)</li></ul> |


Other models and tasks supported by the 🤗 Transformers and 🤗 Diffusers libraries may also work.
You can refer to this [section](https://github.com/huggingface/optimum-habana#how-to-use-it) for using them with 🤗 Optimum for Intel Gaudi.
In addition, [this page](/examples) explains how to modify any [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch) from the 🤗 Transformers library to make it work with 🤗 Optimum for Intel Gaudi.


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/index.mdx" />

### Quickstart
https://huggingface.co/docs/optimum.habana/v1.19.0/quickstart.md

# Quickstart

Running your AI workloads on Intel® Gaudi® accelerators can be accomplished in just a few simple steps.
In this quick guide, we show how to run inference with the Llama-2 7B model and training with GPT-2 on Intel Gaudi 2 accelerators using the 🤗 Optimum for Intel Gaudi library.

The Optimum for Intel Gaudi library is optimized for running various AI workloads on Intel Gaudi accelerators and contains fully documented
inference, training, and fine-tuning examples. Please refer to the [Optimum for Intel Gaudi GitHub](https://github.com/huggingface/optimum-habana)
page for more information.

## Accessing Intel Gaudi AI Accelerator
To access an Intel Gaudi AI accelerator node in the Intel® Tiber™ AI Cloud, go to the
[Intel Tiber AI Cloud](https://console.cloud.intel.com/hardware) hardware instances page, select the Intel Gaudi AI accelerator
platform for deep learning, and follow the steps to start and connect to the node.

## Docker Setup

Now that you have access to the node, use the latest Intel Gaudi AI Accelerator Docker image by executing the `docker run` command below, which
automatically downloads and runs the image. At the time of writing this guide, the latest Gaudi Docker version was 1.22.0:

```bash
release=1.22.0
os=ubuntu22.04
torch=2.7.1
docker_image=vault.habana.ai/gaudi-docker/$release/$os/habanalabs/pytorch-installer-$torch:latest
```
<Tip>

Visit <a href="https://docs.habana.ai/en/latest/Release_Notes/GAUDI_Release_Notes.html">Intel Gaudi AI Accelerator Release Notes</a>
page to get the latest Intel Gaudi AI accelerator software release version. Alternatively, check
<a href="https://vault.habana.ai/ui/native/gaudi-docker">https://vault.habana.ai/ui/native/gaudi-docker</a>
for the list of all released Intel® Gaudi® AI accelerator docker images.

</Tip>

Execute the `docker run` command:
```bash
docker run -itd \
    --name Gaudi_Docker \
    --runtime=habana \
    -e HABANA_VISIBLE_DEVICES=all \
    -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
    --cap-add=sys_nice \
    --net=host \
    --ipc=host \
    ${docker_image}
```

## Optimum for Intel Gaudi Setup

Check the latest release of Optimum for Intel Gaudi [here](https://github.com/huggingface/optimum-habana/releases).
At the time of writing this guide, the latest Optimum for Intel Gaudi release was v1.19.1, which is paired with Intel Gaudi Software release
version 1.22.0. Install Optimum for Intel Gaudi as follows:

```bash
git clone -b v1.19.1 https://github.com/huggingface/optimum-habana
pip install ./optimum-habana
```

All available examples are under [optimum-habana/examples](/examples).

Here is the [text-generation](/examples/text-generation) example.
To run the Llama-2 7B text generation example on Gaudi, complete the prerequisite setup:
```bash
cd ~/optimum-habana/examples/text-generation
pip install -r requirements.txt
```

To be able to run gated models like [Llama-2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf), you should:
- Have a 🤗 account
- Agree to the terms of use of the model in its [model card](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- Set your token as explained [here](https://huggingface.co/docs/hub/security-tokens)
- Log in to your account using the Hugging Face CLI: run `huggingface-cli login` before launching your script
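Before launching a script, you can quickly verify that a token is in place. This sketch checks the `HF_TOKEN` environment variable and the default token file written by `huggingface-cli login` (the path assumes the default `HF_HOME` location; the helper name is ours, not part of any library):

```python
import os


def hf_token_configured() -> bool:
    """Heuristic check that a Hugging Face token is available:
    either via the HF_TOKEN environment variable or the token file
    written by `huggingface-cli login`."""
    if os.environ.get("HF_TOKEN"):
        return True
    token_file = os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "token")
    return os.path.isfile(token_file)
```

If this returns `False`, gated model downloads such as Llama-2 7B will fail with an authentication error.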

## Single Device Inference

Run inference with the Llama-2 7B model on a single Gaudi device (HPU):
```bash
PT_HPU_LAZY_MODE=1 python run_generation.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --use_hpu_graphs \
    --use_kv_cache \
    --max_new_tokens 100 \
    --do_sample \
    --prompt "Here is my prompt"
```

<Tip>

The list of all possible arguments can be obtained by running the script with `--help`.

</Tip>

## Multi-Device Inference

With a multi-device Gaudi system, such as one with 8 HPUs, you can perform distributed inference using libraries like
Microsoft® DeepSpeed. A Gaudi-specific fork of the library is maintained by Intel at
[https://github.com/HabanaAI/DeepSpeed](https://github.com/HabanaAI/DeepSpeed).

To install the library compatible with the same Gaudi software release stack, use:
```bash
pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.22.0
```

With DeepSpeed successfully installed, we can now run distributed Llama-2 7B inference on an 8-HPU system as follows
(note that `number_of_devices` must be set as a shell variable on its own line so that `${number_of_devices}` expands in the launch command):
```bash
number_of_devices=8
PT_HPU_LAZY_MODE=1 python ../gaudi_spawn.py --use_deepspeed --world_size ${number_of_devices} \
run_generation.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --use_hpu_graphs \
    --use_kv_cache \
    --max_new_tokens=100 \
    --do_sample \
    --prompt="Here is my prompt"
```
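Under the hood, `gaudi_spawn.py` starts one worker process per device and hands each one its distributed coordinates. The real launcher and DeepSpeed manage far more state (hostfiles, device visibility, binding), but conceptually each rank receives an environment like the one this hypothetical helper builds for the single-node case:

```python
def make_rank_envs(world_size: int, master_addr: str = "127.0.0.1",
                   master_port: int = 29500) -> list:
    """Illustrative sketch: build the per-process environment variables a
    distributed launcher would pass to each worker on a single node."""
    envs = []
    for rank in range(world_size):
        envs.append({
            "RANK": str(rank),
            "LOCAL_RANK": str(rank),  # single node: local rank == global rank
            "WORLD_SIZE": str(world_size),
            "MASTER_ADDR": master_addr,
            "MASTER_PORT": str(master_port),
        })
    return envs
```

All ranks share the same `MASTER_ADDR`/`MASTER_PORT` rendezvous point; only `RANK` and `LOCAL_RANK` differ per process.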

## Training on Gaudi

🤗 Optimum for Intel Gaudi contains a number of examples demonstrating single- and multi-device Gaudi training and fine-tuning.

For example, a number of language models can be trained with the scripts provided in the
[language modeling examples section](/examples/language-modeling).

As an illustration, let us run the GPT-2 single and multi-card training examples on Gaudi.

Install prerequisites with:
```bash
cd ~/optimum-habana/examples/language-modeling
pip install -r requirements.txt
```

To train the GPT-2 model on a single card, use:
```bash
PT_HPU_LAZY_MODE=1 python run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm \
    --gaudi_config_name Habana/gpt2 \
    --use_habana \
    --use_lazy_mode \
    --use_hpu_graphs_for_inference \
    --throughput_warmup_steps 3
```

To train the GPT-2 model on a multi-card Gaudi system, use:
```bash
number_of_devices=8
PT_HPU_LAZY_MODE=1 python ../gaudi_spawn.py --use_deepspeed --world_size ${number_of_devices} \
run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm \
    --gaudi_config_name Habana/gpt2 \
    --use_habana \
    --use_lazy_mode \
    --use_hpu_graphs_for_inference \
    --gradient_checkpointing \
    --use_cache False \
    --throughput_warmup_steps 3
```
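One thing to keep in mind when moving from single- to multi-card training is the effective batch size: with `--per_device_train_batch_size 4` on 8 devices, the optimizer sees 32 samples per step, times any gradient-accumulation factor. A trivial helper makes that arithmetic explicit:

```python
def global_batch_size(per_device_batch: int, world_size: int,
                      grad_accum_steps: int = 1) -> int:
    """Effective number of samples per optimizer step across all devices."""
    return per_device_batch * world_size * grad_accum_steps
```

Because the global batch grows with `world_size`, you may need to retune the learning rate when scaling from 1 to 8 cards.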

## Diffusion Workloads

🤗 Optimum for Intel Gaudi also features HPU-optimized support for the 🤗 Diffusers library.
Thus, you can deploy Stable Diffusion and similar diffusion models on Gaudi and enable
text-to-image generation and other diffusion-based workloads.

Before running the Stable Diffusion inference example on Gaudi, complete the prerequisite setup:
```bash
cd ~/optimum-habana/examples/stable-diffusion
pip install -r requirements.txt
```

Here is an example of running Stable Diffusion text-to-image inference on Gaudi:
```bash
PT_HPU_LAZY_MODE=1 python text_to_image_generation.py \
    --model_name_or_path CompVis/stable-diffusion-v1-4 \
    --prompts "An image of a squirrel in Picasso style" \
    --num_images_per_prompt 10 \
    --batch_size 1 \
    --image_save_dir /tmp/stable_diffusion_images \
    --use_habana \
    --use_hpu_graphs \
    --gaudi_config Habana/stable-diffusion \
    --bf16
```

Also, here is an example of modifying a basic 🤗 Diffusers Stable Diffusion pipeline call to work with Gaudi
using the Optimum for Intel Gaudi library:
```diff
- from diffusers import DDIMScheduler, StableDiffusionPipeline
+ from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "CompVis/stable-diffusion-v1-4"

- scheduler = DDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
+ scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

- pipeline = StableDiffusionPipeline.from_pretrained(
+ pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
+   use_habana=True,
+   use_hpu_graphs=True,
+   gaudi_config="Habana/stable-diffusion",
)

outputs = pipeline(
    ["An image of a squirrel in Picasso style"],
    num_images_per_prompt=16,
+   batch_size=4,
)
```
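In the pipeline call above, `num_images_per_prompt=16` with `batch_size=4` means the images are generated in ceil(16 / 4) = 4 batches per prompt. This small helper illustrates the arithmetic (it is not the pipeline's internal code):

```python
import math


def num_batches(num_prompts: int, num_images_per_prompt: int, batch_size: int) -> int:
    """How many batched forward passes a text-to-image call needs to
    produce all requested images."""
    total_images = num_prompts * num_images_per_prompt
    return math.ceil(total_images / batch_size)
```

Since HPU graphs are compiled per input shape, keeping the total image count a multiple of `batch_size` avoids a smaller, differently-shaped final batch.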

In addition, sample scripts for fine-tuning diffusion models are provided in the
[Stable Diffusion training section](/examples/stable-diffusion/training).

A more comprehensive list of examples in Optimum for Intel Gaudi is given next.

## Ready-to-Use Examples

Now that you have run a full inference case, you can go back to the
[Optimum for Intel Gaudi validated models](https://github.com/huggingface/optimum-habana?tab=readme-ov-file#validated-models)
to see more options for running inference.

Here are examples for various modalities and tasks that can be used out of the box:

- **Text**
  - [language modeling](/examples/language-modeling)
  - [multi node training](/examples/multi-node-training)
  - [protein folding](/examples/protein-folding)
  - [question answering](/examples/question-answering)
  - [sentence transformers training](/examples/sentence-transformers-training)
  - [summarization](/examples/summarization)
  - [table detection](/examples/table-detection)
  - [text classification](/examples/text-classification)
  - [text feature extraction](/examples/text-feature-extraction)
  - [text generation](/examples/text-generation)
  - [translation](/examples/translation)
  - [trl](/examples/trl)

- **Audio**
  - [audio classification](/examples/audio-classification)
  - [speech recognition](/examples/speech-recognition)
  - [text to speech](/examples/text-to-speech)

- **Images**
  - [object detection](/examples/object-detection)
  - [object segmentation](/examples/object-segementation)
  - [image classification](/examples/image-classification)
  - [image to text](/examples/image-to-text)
  - [contrastive image text](/examples/contrastive-image-text)
  - [stable diffusion](/examples/stable-diffusion)
  - [visual question answering](/examples/visual-question-answering)
  - [zero-shot object detection](/examples/zero-shot-object-detection)

- **Video**
  - [stable-video-diffusion](/examples/stable-diffusion)
  - [video-classification](/examples/video-classification)

To learn more about how to adapt 🤗 Transformers or Diffusers scripts for Intel Gaudi, check out the
[Script Adaptation](https://huggingface.co/docs/optimum/habana/usage_guides/script_adaptation) guide.


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/quickstart.mdx" />

### Installation
https://huggingface.co/docs/optimum.habana/v1.19.0/installation.md

# Installation

To install Optimum for Intel® Gaudi® AI accelerator, you first need to install Intel Gaudi Software and the Intel Gaudi
AI accelerator drivers by following the official [installation guide](https://docs.habana.ai/en/latest/Installation_Guide/index.html).
Then, Optimum for Intel Gaudi can be installed using `pip` as follows:

```bash
python -m pip install --upgrade-strategy eager optimum[habana]
```


To use Microsoft® DeepSpeed with Intel Gaudi devices, you also need to run the following command:

```bash
python -m pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.22.0
```

To ensure that you are installing the correct Intel Gaudi Software version, run the `hl-smi` command to confirm the software version
in use on the system, and use the same version when installing DeepSpeed. Please also review the Intel Gaudi
[Support Matrix](https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix.html) to make sure you are using a compatible
version of DeepSpeed.
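As a sanity check before installing, you can derive the DeepSpeed git reference from the version string that `hl-smi` reports. The sketch below is illustrative: the exact `hl-smi` output format may differ across releases, and the helper name is ours.

```python
import re


def deepspeed_install_ref(hl_smi_output: str) -> str:
    """Extract 'major.minor.patch' from an hl-smi-style version string and
    return the matching HabanaAI/DeepSpeed pip requirement."""
    match = re.search(r"(\d+\.\d+\.\d+)", hl_smi_output)
    if match is None:
        raise ValueError("could not find a version in hl-smi output")
    return f"git+https://github.com/HabanaAI/DeepSpeed.git@{match.group(1)}"
```

Passing the result to `pip install` keeps the DeepSpeed fork pinned to the Gaudi software stack actually present on the machine.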


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/installation.mdx" />

### DistributedRunner[[optimum.habana.distributed.DistributedRunner]]
https://huggingface.co/docs/optimum.habana/v1.19.0/package_reference/distributed_runner.md

# DistributedRunner[[optimum.habana.distributed.DistributedRunner]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.habana.distributed.DistributedRunner</name><anchor>optimum.habana.distributed.DistributedRunner</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L32</source><parameters>[{"name": "command_list", "val": ": typing.List = []"}, {"name": "world_size", "val": ": int = 1"}, {"name": "hostfile", "val": ": typing.Union[str, pathlib.Path] = None"}, {"name": "use_mpi", "val": ": bool = False"}, {"name": "use_deepspeed", "val": ": bool = False"}, {"name": "master_port", "val": ": int = 29500"}, {"name": "use_env", "val": ": bool = False"}, {"name": "map_by", "val": ": bool = 'socket'"}, {"name": "multi_hls", "val": " = None"}]</parameters></docstring>

Set up training/inference hardware configurations and run distributed commands.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_multi_node_setup</name><anchor>optimum.habana.distributed.DistributedRunner.create_multi_node_setup</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L188</source><parameters>[]</parameters></docstring>

Multi-node configuration setup for DeepSpeed.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_single_card_setup</name><anchor>optimum.habana.distributed.DistributedRunner.create_single_card_setup</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L147</source><parameters>[{"name": "use_deepspeed", "val": " = False"}]</parameters></docstring>

Single-card setup.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_single_node_setup</name><anchor>optimum.habana.distributed.DistributedRunner.create_single_node_setup</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L177</source><parameters>[]</parameters></docstring>

Single-node multi-card configuration setup.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_single_node_setup_deepspeed</name><anchor>optimum.habana.distributed.DistributedRunner.create_single_node_setup_deepspeed</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L168</source><parameters>[]</parameters></docstring>

Single-node multi-card configuration setup for DeepSpeed.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_single_node_setup_mpirun</name><anchor>optimum.habana.distributed.DistributedRunner.create_single_node_setup_mpirun</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L157</source><parameters>[]</parameters></docstring>

Single-node multi-card configuration setup for mpirun.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>process_hostfile</name><anchor>optimum.habana.distributed.DistributedRunner.process_hostfile</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L230</source><parameters>[]</parameters><rettype>str</rettype><retdesc>address of the master node.</retdesc></docstring>

Returns the master address to use for multi-node runs with DeepSpeed.
Directly inspired from https://github.com/microsoft/DeepSpeed/blob/316c4a43e0802a979951ee17f735daf77ea9780f/deepspeed/autotuning/utils.py#L145.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>run</name><anchor>optimum.habana.distributed.DistributedRunner.run</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/distributed/distributed_runner.py#L196</source><parameters>[]</parameters></docstring>

Runs the desired command with configuration specified by the user.


</div></div>

<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/package_reference/distributed_runner.mdx" />

### GaudiTrainer
https://huggingface.co/docs/optimum.habana/v1.19.0/package_reference/trainer.md

# GaudiTrainer

The [`GaudiTrainer`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainer) class provides an extended API for the feature-complete [Transformers Trainer](https://huggingface.co/docs/transformers/main_classes/trainer). It is used in all the [example scripts](/examples).

Before instantiating your [`GaudiTrainer`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainer), create a [GaudiTrainingArguments](/docs/optimum.habana/v1.19.0/en/package_reference/trainer#optimum.habana.GaudiTrainingArguments) object to access all the points of customization during training.

<Tip warning={true}>

The [`GaudiTrainer`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainer) class is optimized for 🤗 Transformers models running on Intel Gaudi.

</Tip>

Here is an example of how to customize [`GaudiTrainer`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainer) to use a weighted loss (useful when you have an unbalanced training set):

```python
import torch
from torch import nn

from optimum.habana import GaudiTrainer


class CustomGaudiTrainer(GaudiTrainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.get("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute custom loss (suppose one has 3 labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0]))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```
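To make the effect of the `weight` tensor concrete, here is the same weighted average written out in plain Python (a pedagogical re-derivation, not PyTorch's implementation): `nn.CrossEntropyLoss(weight=w)` scales each sample's negative log-likelihood by the weight of its true class and normalizes by the total weight of the batch.

```python
import math


def weighted_cross_entropy(logits, labels, weights):
    """Batch-averaged weighted cross-entropy, mirroring how
    nn.CrossEntropyLoss(weight=...) averages:
    sum(w[y_i] * nll_i) / sum(w[y_i])."""
    total_loss = 0.0
    total_weight = 0.0
    for row, label in zip(logits, labels):
        # -log softmax(row)[label] == logsumexp(row) - row[label]
        log_norm = math.log(sum(math.exp(x) for x in row))
        nll = log_norm - row[label]
        total_loss += weights[label] * nll
        total_weight += weights[label]
    return total_loss / total_weight
```

With weights `[1.0, 2.0, 3.0]`, mistakes on class 2 pull the average up three times as hard as mistakes on class 0, which is what rebalances an unbalanced training set.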

Another way to customize the training loop behavior for the PyTorch [`GaudiTrainer`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainer) is to use [callbacks](https://huggingface.co/docs/transformers/main_classes/callback) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms...) and take decisions (like early stopping).

## GaudiTrainer[[optimum.habana.GaudiTrainer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.habana.GaudiTrainer</name><anchor>optimum.habana.GaudiTrainer</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L214</source><parameters>[{"name": "model", "val": ": typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None"}, {"name": "gaudi_config", "val": ": GaudiConfig = None"}, {"name": "args", "val": ": TrainingArguments = None"}, {"name": "data_collator", "val": ": typing.Optional[transformers.data.data_collator.DataCollator] = None"}, {"name": "train_dataset", "val": ": typing.Union[torch.utils.data.dataset.Dataset, torch.utils.data.dataset.IterableDataset, ForwardRef('datasets.Dataset'), NoneType] = None"}, {"name": "eval_dataset", "val": ": typing.Union[torch.utils.data.dataset.Dataset, dict[str, torch.utils.data.dataset.Dataset], ForwardRef('datasets.Dataset'), NoneType] = None"}, {"name": "processing_class", "val": ": typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None"}, {"name": "model_init", "val": ": typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None"}, {"name": "compute_loss_func", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "compute_metrics", "val": ": typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], dict]] = None"}, {"name": "callbacks", "val": ": typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None"}, {"name": "optimizers", "val": ": tuple = (None, None)"}, {"name": "optimizer_cls_and_kwargs", "val": ": typing.Optional[tuple[type[torch.optim.optimizer.Optimizer], dict[str, typing.Any]]] = None"}, {"name": "preprocess_logits_for_metrics", "val": ": typing.Optional[typing.Callable[[torch.Tensor, 
torch.Tensor], torch.Tensor]] = None"}]</parameters></docstring>

GaudiTrainer is built on top of the Transformers `Trainer` to enable
deployment on Habana's Gaudi.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>autocast_smart_context_manager</name><anchor>optimum.habana.GaudiTrainer.autocast_smart_context_manager</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L1686</source><parameters>[{"name": "cache_enabled", "val": ": typing.Optional[bool] = True"}]</parameters></docstring>

A helper wrapper that creates an appropriate context manager for `autocast` while feeding it the desired
arguments, depending on the situation.
Modified by Habana to enable using `autocast` on Gaudi devices.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>evaluate</name><anchor>optimum.habana.GaudiTrainer.evaluate</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L1883</source><parameters>[{"name": "eval_dataset", "val": ": typing.Union[torch.utils.data.dataset.Dataset, dict[str, torch.utils.data.dataset.Dataset], NoneType] = None"}, {"name": "ignore_keys", "val": ": typing.Optional[list[str]] = None"}, {"name": "metric_key_prefix", "val": ": str = 'eval'"}]</parameters></docstring>

From https://github.com/huggingface/transformers/blob/v4.38.2/src/transformers/trainer.py#L3162 with the following modification
1. use throughput_warmup_steps in evaluation throughput calculation


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>evaluation_loop</name><anchor>optimum.habana.GaudiTrainer.evaluation_loop</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L2004</source><parameters>[{"name": "dataloader", "val": ": DataLoader"}, {"name": "description", "val": ": str"}, {"name": "prediction_loss_only", "val": ": typing.Optional[bool] = None"}, {"name": "ignore_keys", "val": ": typing.Optional[list[str]] = None"}, {"name": "metric_key_prefix", "val": ": str = 'eval'"}]</parameters></docstring>

Prediction/evaluation loop, shared by `Trainer.evaluate()` and `Trainer.predict()`.
Works both with or without labels.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>predict</name><anchor>optimum.habana.GaudiTrainer.predict</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L1959</source><parameters>[{"name": "test_dataset", "val": ": Dataset"}, {"name": "ignore_keys", "val": ": typing.Optional[list[str]] = None"}, {"name": "metric_key_prefix", "val": ": str = 'test'"}]</parameters></docstring>

From https://github.com/huggingface/transformers/blob/v4.45.2/src/transformers/trainer.py#L3904 with the following modification
1. comment out TPU related
2. use throughput_warmup_steps in evaluation throughput calculation


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prediction_step</name><anchor>optimum.habana.GaudiTrainer.prediction_step</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L2271</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "inputs", "val": ": dict"}, {"name": "prediction_loss_only", "val": ": bool"}, {"name": "ignore_keys", "val": ": typing.Optional[list[str]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to evaluate.
- **inputs** (`dict[str, Union[torch.Tensor, Any]]`) --
  The inputs and targets of the model.
  The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
  argument `labels`. Check your model's documentation for all accepted arguments.
- **prediction_loss_only** (`bool`) --
  Whether or not to return the loss only.
- **ignore_keys** (`List[str]`, *optional*) --
  A list of keys in the output of your model (if it is a dictionary) that should be ignored when
  gathering predictions.</paramsdesc><paramgroups>0</paramgroups><rettype>tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]</rettype><retdesc>A tuple with the loss,
logits and labels (each being optional).</retdesc></docstring>

Perform an evaluation step on `model` using `inputs`.
Subclass and override to inject custom behavior.







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_model</name><anchor>optimum.habana.GaudiTrainer.save_model</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L1786</source><parameters>[{"name": "output_dir", "val": ": typing.Optional[str] = None"}, {"name": "_internal_call", "val": ": bool = False"}]</parameters></docstring>

Will save the model, so you can reload it using `from_pretrained()`.
Will only save from the main process.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>train</name><anchor>optimum.habana.GaudiTrainer.train</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L517</source><parameters>[{"name": "resume_from_checkpoint", "val": ": typing.Union[str, bool, NoneType] = None"}, {"name": "trial", "val": ": typing.Union[ForwardRef('optuna.Trial'), dict[str, typing.Any], NoneType] = None"}, {"name": "ignore_keys_for_eval", "val": ": typing.Optional[list[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **resume_from_checkpoint** (`str` or `bool`, *optional*) --
  If a `str`, local path to a saved checkpoint as saved by a previous instance of `Trainer`. If a
  `bool` and equals `True`, load the last checkpoint in *args.output_dir* as saved by a previous instance
  of `Trainer`. If present, training will resume from the model/optimizer/scheduler states loaded here.
- **trial** (`optuna.Trial` or `dict[str, Any]`, *optional*) --
  The trial run or the hyperparameter dictionary for hyperparameter search.
- **ignore_keys_for_eval** (`List[str]`, *optional*) --
  A list of keys in the output of your model (if it is a dictionary) that should be ignored when
  gathering predictions for evaluation during the training.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments used to hide deprecated arguments</paramsdesc><paramgroups>0</paramgroups></docstring>

Main training entry point.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>training_step</name><anchor>optimum.habana.GaudiTrainer.training_step</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer.py#L1706</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "inputs", "val": ": dict"}, {"name": "num_items_in_batch", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to train.
- **inputs** (`dict[str, Union[torch.Tensor, Any]]`) --
  The inputs and targets of the model.

  The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
  argument `labels`. Check your model's documentation for all accepted arguments.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The tensor with training loss on this batch.</retdesc></docstring>

Perform a training step on a batch of inputs.

Subclass and override to inject custom behavior.








</div></div>

## GaudiSeq2SeqTrainer[[optimum.habana.GaudiSeq2SeqTrainer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.habana.GaudiSeq2SeqTrainer</name><anchor>optimum.habana.GaudiSeq2SeqTrainer</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer_seq2seq.py#L56</source><parameters>[{"name": "model", "val": ": typing.Union[ForwardRef('PreTrainedModel'), torch.nn.modules.module.Module] = None"}, {"name": "gaudi_config", "val": ": GaudiConfig = None"}, {"name": "args", "val": ": GaudiTrainingArguments = None"}, {"name": "data_collator", "val": ": typing.Optional[ForwardRef('DataCollator')] = None"}, {"name": "train_dataset", "val": ": typing.Union[torch.utils.data.dataset.Dataset, ForwardRef('IterableDataset'), ForwardRef('datasets.Dataset'), NoneType] = None"}, {"name": "eval_dataset", "val": ": typing.Union[torch.utils.data.dataset.Dataset, dict[str, torch.utils.data.dataset.Dataset], NoneType] = None"}, {"name": "processing_class", "val": ": typing.Union[ForwardRef('PreTrainedTokenizerBase'), ForwardRef('BaseImageProcessor'), ForwardRef('FeatureExtractionMixin'), ForwardRef('ProcessorMixin'), NoneType] = None"}, {"name": "model_init", "val": ": typing.Optional[typing.Callable[[], ForwardRef('PreTrainedModel')]] = None"}, {"name": "compute_loss_func", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "compute_metrics", "val": ": typing.Optional[typing.Callable[[ForwardRef('EvalPrediction')], dict]] = None"}, {"name": "callbacks", "val": ": typing.Optional[list['TrainerCallback']] = None"}, {"name": "optimizers", "val": ": tuple = (None, None)"}, {"name": "preprocess_logits_for_metrics", "val": ": typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>evaluate</name><anchor>optimum.habana.GaudiSeq2SeqTrainer.evaluate</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer_seq2seq.py#L142</source><parameters>[{"name": "eval_dataset", "val": ": typing.Optional[torch.utils.data.dataset.Dataset] = None"}, {"name": "ignore_keys", "val": ": typing.Optional[list[str]] = None"}, {"name": "metric_key_prefix", "val": ": str = 'eval'"}, {"name": "**gen_kwargs", "val": ""}]</parameters><paramsdesc>- **eval_dataset** (`Dataset`, *optional*) --
  Pass a dataset if you wish to override `self.eval_dataset`. If it is a [Dataset](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/main_classes#datasets.Dataset), columns
  not accepted by the `model.forward()` method are automatically removed. It must implement the `__len__`
  method.
- **ignore_keys** (`List[str]`, *optional*) --
  A list of keys in the output of your model (if it is a dictionary) that should be ignored when
  gathering predictions.
- **metric_key_prefix** (`str`, *optional*, defaults to `"eval"`) --
  An optional prefix to be used as the metrics key prefix. For example, the metric "bleu" will be named
  "eval_bleu" if the prefix is `"eval"` (the default).
- **max_length** (`int`, *optional*) --
  The maximum target length to use when predicting with the generate method.
- **num_beams** (`int`, *optional*) --
  Number of beams for beam search that will be used when predicting with the generate method. 1 means no
  beam search.
- **gen_kwargs** --
  Additional `generate` specific kwargs.</paramsdesc><paramgroups>0</paramgroups><retdesc>A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The
dictionary also contains the epoch number which comes from the training state.</retdesc></docstring>

Runs evaluation and returns metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
(pass it to the init `compute_metrics` argument).
You can also subclass and override this method to inject custom behavior.
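The `metric_key_prefix` naming convention can be illustrated with a small stand-alone helper (an illustration of the behavior described above, not the library's internal code):

```python
def prefix_metrics(metrics: dict, metric_key_prefix: str = "eval") -> dict:
    # Prepend the prefix unless the key already carries it,
    # so "bleu" becomes "eval_bleu" but "eval_loss" stays "eval_loss".
    return {
        key if key.startswith(f"{metric_key_prefix}_") else f"{metric_key_prefix}_{key}": value
        for key, value in metrics.items()
    }
```

With the default prefix, `prefix_metrics({"bleu": 30.1})` yields `{"eval_bleu": 30.1}`.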





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>predict</name><anchor>optimum.habana.GaudiSeq2SeqTrainer.predict</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/trainer_seq2seq.py#L194</source><parameters>[{"name": "test_dataset", "val": ": Dataset"}, {"name": "ignore_keys", "val": ": typing.Optional[list[str]] = None"}, {"name": "metric_key_prefix", "val": ": str = 'test'"}, {"name": "**gen_kwargs", "val": ""}]</parameters><paramsdesc>- **test_dataset** (`Dataset`) --
  Dataset to run the predictions on. If it is a [Dataset](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/main_classes#datasets.Dataset), columns not accepted by the
  `model.forward()` method are automatically removed. It has to implement the `__len__` method.
- **ignore_keys** (`List[str]`, *optional*) --
  A list of keys in the output of your model (if it is a dictionary) that should be ignored when
  gathering predictions.
- **metric_key_prefix** (`str`, *optional*, defaults to `"test"`) --
  An optional prefix to be used as the metrics key prefix. For example, the metric "bleu" will be named
  "test_bleu" if the prefix is `"test"` (the default).
- **max_length** (`int`, *optional*) --
  The maximum target length to use when predicting with the generate method.
- **num_beams** (`int`, *optional*) --
  Number of beams for beam search that will be used when predicting with the generate method. 1 means no
  beam search.
- **gen_kwargs** --
  Additional `generate` specific kwargs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Runs prediction and returns predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
will also return metrics, like in `evaluate()`.


<Tip>
If your predictions or labels have different sequence lengths (for instance because you're doing dynamic
padding in a token classification task) the predictions will be padded (on the right) to allow for
concatenation into one array. The padding index is -100.
</Tip>
Returns: *NamedTuple* A namedtuple with the following keys:
- predictions (`np.ndarray`): The predictions on `test_dataset`.
- label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
  labels).
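The right-padding described in the tip can be sketched with a small NumPy helper (an illustration of the padding convention, not the library's internal implementation):

```python
import numpy as np

def pad_and_concatenate(batches, pad_index=-100):
    # Right-pad every (batch, seq_len) array of predictions to the longest
    # sequence length so they can be concatenated into a single array.
    max_len = max(batch.shape[1] for batch in batches)
    padded = [
        np.pad(batch, ((0, 0), (0, max_len - batch.shape[1])), constant_values=pad_index)
        for batch in batches
    ]
    return np.concatenate(padded, axis=0)
```

A batch of shape `(2, 3)` and one of shape `(1, 5)` would concatenate into a `(3, 5)` array, with `-100` filling the padded positions.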


</div></div>

## GaudiTrainingArguments[[optimum.habana.GaudiTrainingArguments]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.habana.GaudiTrainingArguments</name><anchor>optimum.habana.GaudiTrainingArguments</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/training_args.py#L86</source><parameters>[{"name": "output_dir", "val": ": typing.Optional[str] = None"}, {"name": "overwrite_output_dir", "val": ": bool = False"}, {"name": "do_train", "val": ": bool = False"}, {"name": "do_eval", "val": ": bool = False"}, {"name": "do_predict", "val": ": bool = False"}, {"name": "eval_strategy", "val": ": typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no'"}, {"name": "prediction_loss_only", "val": ": bool = False"}, {"name": "per_device_train_batch_size", "val": ": int = 8"}, {"name": "per_device_eval_batch_size", "val": ": int = 8"}, {"name": "per_gpu_train_batch_size", "val": ": typing.Optional[int] = None"}, {"name": "per_gpu_eval_batch_size", "val": ": typing.Optional[int] = None"}, {"name": "gradient_accumulation_steps", "val": ": int = 1"}, {"name": "eval_accumulation_steps", "val": ": typing.Optional[int] = None"}, {"name": "eval_delay", "val": ": typing.Optional[float] = 0"}, {"name": "torch_empty_cache_steps", "val": ": typing.Optional[int] = None"}, {"name": "learning_rate", "val": ": float = 5e-05"}, {"name": "weight_decay", "val": ": float = 0.0"}, {"name": "adam_beta1", "val": ": float = 0.9"}, {"name": "adam_beta2", "val": ": float = 0.999"}, {"name": "adam_epsilon", "val": ": typing.Optional[float] = 1e-06"}, {"name": "max_grad_norm", "val": ": float = 1.0"}, {"name": "num_train_epochs", "val": ": float = 3.0"}, {"name": "max_steps", "val": ": int = -1"}, {"name": "lr_scheduler_type", "val": ": typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear'"}, {"name": "lr_scheduler_kwargs", "val": ": typing.Union[dict[str, typing.Any], str, NoneType] = <factory>"}, {"name": "warmup_ratio", "val": ": float = 0.0"}, {"name": "warmup_steps", "val": ": int = 0"}, {"name": 
"log_level", "val": ": str = 'passive'"}, {"name": "log_level_replica", "val": ": str = 'warning'"}, {"name": "log_on_each_node", "val": ": bool = True"}, {"name": "logging_dir", "val": ": typing.Optional[str] = None"}, {"name": "logging_strategy", "val": ": typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps'"}, {"name": "logging_first_step", "val": ": bool = False"}, {"name": "logging_steps", "val": ": float = 500"}, {"name": "logging_nan_inf_filter", "val": ": typing.Optional[bool] = False"}, {"name": "save_strategy", "val": ": typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps'"}, {"name": "save_steps", "val": ": float = 500"}, {"name": "save_total_limit", "val": ": typing.Optional[int] = None"}, {"name": "save_safetensors", "val": ": typing.Optional[bool] = True"}, {"name": "save_on_each_node", "val": ": bool = False"}, {"name": "save_only_model", "val": ": bool = False"}, {"name": "restore_callback_states_from_checkpoint", "val": ": bool = False"}, {"name": "no_cuda", "val": ": bool = False"}, {"name": "use_cpu", "val": ": bool = False"}, {"name": "use_mps_device", "val": ": bool = False"}, {"name": "seed", "val": ": int = 42"}, {"name": "data_seed", "val": ": typing.Optional[int] = None"}, {"name": "jit_mode_eval", "val": ": bool = False"}, {"name": "use_ipex", "val": ": bool = False"}, {"name": "bf16", "val": ": bool = False"}, {"name": "fp16", "val": ": bool = False"}, {"name": "fp16_opt_level", "val": ": str = 'O1'"}, {"name": "half_precision_backend", "val": ": str = 'hpu_amp'"}, {"name": "bf16_full_eval", "val": ": bool = False"}, {"name": "fp16_full_eval", "val": ": bool = False"}, {"name": "tf32", "val": ": typing.Optional[bool] = None"}, {"name": "local_rank", "val": ": int = -1"}, {"name": "ddp_backend", "val": ": typing.Optional[str] = None"}, {"name": "tpu_num_cores", "val": ": typing.Optional[int] = None"}, {"name": "tpu_metrics_debug", "val": ": bool = False"}, {"name": "debug", "val": ": typing.Union[str, 
list[transformers.debug_utils.DebugOption]] = ''"}, {"name": "dataloader_drop_last", "val": ": bool = False"}, {"name": "eval_steps", "val": ": typing.Optional[float] = None"}, {"name": "dataloader_num_workers", "val": ": int = 0"}, {"name": "dataloader_prefetch_factor", "val": ": typing.Optional[int] = None"}, {"name": "past_index", "val": ": int = -1"}, {"name": "run_name", "val": ": typing.Optional[str] = None"}, {"name": "disable_tqdm", "val": ": typing.Optional[bool] = None"}, {"name": "remove_unused_columns", "val": ": typing.Optional[bool] = True"}, {"name": "label_names", "val": ": typing.Optional[list[str]] = None"}, {"name": "load_best_model_at_end", "val": ": typing.Optional[bool] = False"}, {"name": "metric_for_best_model", "val": ": typing.Optional[str] = None"}, {"name": "greater_is_better", "val": ": typing.Optional[bool] = None"}, {"name": "ignore_data_skip", "val": ": bool = False"}, {"name": "fsdp", "val": ": typing.Union[list[transformers.trainer_utils.FSDPOption], str, NoneType] = ''"}, {"name": "fsdp_min_num_params", "val": ": int = 0"}, {"name": "fsdp_config", "val": ": typing.Union[dict[str, typing.Any], str, NoneType] = None"}, {"name": "fsdp_transformer_layer_cls_to_wrap", "val": ": typing.Optional[str] = None"}, {"name": "accelerator_config", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "deepspeed", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "label_smoothing_factor", "val": ": float = 0.0"}, {"name": "optim", "val": ": typing.Union[transformers.training_args.OptimizerNames, str, NoneType] = 'adamw_torch'"}, {"name": "optim_args", "val": ": typing.Optional[str] = None"}, {"name": "adafactor", "val": ": bool = False"}, {"name": "group_by_length", "val": ": bool = False"}, {"name": "length_column_name", "val": ": typing.Optional[str] = 'length'"}, {"name": "report_to", "val": ": typing.Union[NoneType, str, list[str]] = None"}, {"name": "ddp_find_unused_parameters", "val": ": typing.Optional[bool] = 
False"}, {"name": "ddp_bucket_cap_mb", "val": ": typing.Optional[int] = 230"}, {"name": "ddp_broadcast_buffers", "val": ": typing.Optional[bool] = None"}, {"name": "dataloader_pin_memory", "val": ": bool = True"}, {"name": "dataloader_persistent_workers", "val": ": bool = False"}, {"name": "skip_memory_metrics", "val": ": bool = True"}, {"name": "use_legacy_prediction_loop", "val": ": bool = False"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "resume_from_checkpoint", "val": ": typing.Optional[str] = None"}, {"name": "hub_model_id", "val": ": typing.Optional[str] = None"}, {"name": "hub_strategy", "val": ": typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save'"}, {"name": "hub_token", "val": ": typing.Optional[str] = None"}, {"name": "hub_private_repo", "val": ": typing.Optional[bool] = None"}, {"name": "hub_always_push", "val": ": bool = False"}, {"name": "hub_revision", "val": ": typing.Optional[str] = None"}, {"name": "gradient_checkpointing", "val": ": bool = False"}, {"name": "gradient_checkpointing_kwargs", "val": ": typing.Union[dict[str, typing.Any], str, NoneType] = None"}, {"name": "include_inputs_for_metrics", "val": ": bool = False"}, {"name": "include_for_metrics", "val": ": list = <factory>"}, {"name": "eval_do_concat_batches", "val": ": bool = True"}, {"name": "fp16_backend", "val": ": str = 'auto'"}, {"name": "push_to_hub_model_id", "val": ": typing.Optional[str] = None"}, {"name": "push_to_hub_organization", "val": ": typing.Optional[str] = None"}, {"name": "push_to_hub_token", "val": ": typing.Optional[str] = None"}, {"name": "mp_parameters", "val": ": str = ''"}, {"name": "auto_find_batch_size", "val": ": bool = False"}, {"name": "full_determinism", "val": ": bool = False"}, {"name": "torchdynamo", "val": ": typing.Optional[str] = None"}, {"name": "ray_scope", "val": ": typing.Optional[str] = 'last'"}, {"name": "ddp_timeout", "val": ": int = 1800"}, {"name": "torch_compile", "val": ": bool = False"}, {"name": 
"torch_compile_backend", "val": ": typing.Optional[str] = None"}, {"name": "torch_compile_mode", "val": ": typing.Optional[str] = None"}, {"name": "include_tokens_per_second", "val": ": typing.Optional[bool] = False"}, {"name": "include_num_input_tokens_seen", "val": ": typing.Optional[bool] = False"}, {"name": "neftune_noise_alpha", "val": ": typing.Optional[float] = None"}, {"name": "optim_target_modules", "val": ": typing.Union[NoneType, str, list[str]] = None"}, {"name": "batch_eval_metrics", "val": ": bool = False"}, {"name": "eval_on_start", "val": ": bool = False"}, {"name": "use_liger_kernel", "val": ": typing.Optional[bool] = False"}, {"name": "liger_kernel_config", "val": ": typing.Optional[dict[str, bool]] = None"}, {"name": "eval_use_gather_object", "val": ": typing.Optional[bool] = False"}, {"name": "average_tokens_across_devices", "val": ": typing.Optional[bool] = True"}, {"name": "use_habana", "val": ": typing.Optional[bool] = False"}, {"name": "gaudi_config_name", "val": ": typing.Optional[str] = None"}, {"name": "use_lazy_mode", "val": ": typing.Optional[bool] = True"}, {"name": "use_hpu_graphs", "val": ": typing.Optional[bool] = False"}, {"name": "use_hpu_graphs_for_inference", "val": ": typing.Optional[bool] = False"}, {"name": "use_hpu_graphs_for_training", "val": ": typing.Optional[bool] = False"}, {"name": "use_compiled_autograd", "val": ": typing.Optional[bool] = False"}, {"name": "compile_from_sec_iteration", "val": ": typing.Optional[bool] = False"}, {"name": "compile_dynamic", "val": ": typing.Optional[bool] = None"}, {"name": "use_zero3_leaf_promotion", "val": ": typing.Optional[bool] = False"}, {"name": "cache_size_limit", "val": ": typing.Optional[int] = None"}, {"name": "use_regional_compilation", "val": ": typing.Optional[bool] = False"}, {"name": "inline_inbuilt_nn_modules", "val": ": typing.Optional[bool] = None"}, {"name": "allow_unspec_int_on_nn_module", "val": ": typing.Optional[bool] = None"}, {"name": 
"disable_tensor_cache_hpu_graphs", "val": ": typing.Optional[bool] = False"}, {"name": "max_hpu_graphs", "val": ": typing.Optional[int] = None"}, {"name": "distribution_strategy", "val": ": typing.Optional[str] = 'ddp'"}, {"name": "context_parallel_size", "val": ": typing.Optional[int] = 1"}, {"name": "minimize_memory", "val": ": typing.Optional[bool] = False"}, {"name": "throughput_warmup_steps", "val": ": typing.Optional[int] = 0"}, {"name": "adjust_throughput", "val": ": bool = False"}, {"name": "pipelining_fwd_bwd", "val": ": typing.Optional[bool] = False"}, {"name": "ignore_eos", "val": ": typing.Optional[bool] = True"}, {"name": "non_blocking_data_copy", "val": ": typing.Optional[bool] = False"}, {"name": "profiling_warmup_steps", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_steps", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_warmup_steps_eval", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_steps_eval", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_record_shapes", "val": ": typing.Optional[bool] = True"}, {"name": "profiling_with_stack", "val": ": typing.Optional[bool] = False"}, {"name": "attn_implementation", "val": ": typing.Optional[str] = 'eager'"}, {"name": "flash_attention_recompute", "val": ": bool = False"}, {"name": "flash_attention_fast_softmax", "val": ": bool = False"}, {"name": "flash_attention_causal_mask", "val": ": bool = False"}, {"name": "flash_attention_fp8", "val": ": bool = False"}, {"name": "sdp_on_bf16", "val": ": bool = False"}, {"name": "fp8", "val": ": typing.Optional[bool] = False"}]</parameters><paramsdesc>- **use_habana** (`bool`, *optional*, defaults to `False`) --
  Whether to use Habana's HPU for running the model.
- **gaudi_config_name** (`str`, *optional*) --
  Pretrained Gaudi config name or path.
- **use_lazy_mode** (`bool`, *optional*, defaults to `True`) --
  Whether to use lazy mode for running the model.
- **use_hpu_graphs** (`bool`, *optional*, defaults to `False`) --
  Deprecated, use `use_hpu_graphs_for_inference` instead. Whether to use HPU graphs for performing inference.
- **use_hpu_graphs_for_inference** (`bool`, *optional*, defaults to `False`) --
  Whether to use HPU graphs for performing inference. It reduces latency but may not be compatible with some operations.
- **use_hpu_graphs_for_training** (`bool`, *optional*, defaults to `False`) --
  Whether to use HPU graphs for performing training. It will speed up training but may not be compatible with some operations.
- **use_compiled_autograd** (`bool`, *optional*, defaults to `False`) --
  Whether to use compiled autograd for training. Currently only for summarization models.
- **compile_from_sec_iteration** (`bool`, *optional*, defaults to `False`) --
  Whether to torch.compile from the second training iteration.
- **compile_dynamic** (`bool|None`, *optional*, defaults to `None`) --
  Set value of 'dynamic' parameter for torch.compile.
- **use_regional_compilation** (`bool`, *optional*, defaults to `False`) --
  Whether to use regional compilation with DeepSpeed.
- **inline_inbuilt_nn_modules** (`bool`, *optional*, defaults to `None`) --
  Set value of 'inline_inbuilt_nn_modules' parameter for torch._dynamo.config. Currently, disabling this parameter improves the performance of the ALBERT model.
- **cache_size_limit** (`int`, *optional*, defaults to `None`) --
  Set value of 'cache_size_limit' parameter for torch._dynamo.config.
- **allow_unspec_int_on_nn_module** (`bool`, *optional*, defaults to `None`) --
  Set value of 'allow_unspec_int_on_nn_module' parameter for torch._dynamo.config.
- **disable_tensor_cache_hpu_graphs** (`bool`, *optional*, defaults to `False`) --
  Whether to disable tensor cache when using hpu graphs. If True, tensors won't be cached in hpu graph and memory can be saved.
- **max_hpu_graphs** (`int`, *optional*) --
  Maximum number of hpu graphs to be cached. Reduce to save device memory.
- **distribution_strategy** (`str`, *optional*, defaults to `ddp`) --
  Determines how data parallel distributed training is achieved. May be: `ddp` or `fast_ddp`.
- **throughput_warmup_steps** (`int`, *optional*, defaults to 0) --
  Number of steps to ignore for throughput calculation. For example, with `throughput_warmup_steps=N`,
  the first N steps will not be considered in the calculation of the throughput. This is especially
  useful in lazy mode where the first two or three iterations typically take longer.
- **adjust_throughput** (`bool`, *optional*, defaults to `False`) --
  Whether to remove the time taken for logging, evaluating and saving from throughput calculation.
- **pipelining_fwd_bwd** (`bool`, *optional*, defaults to `False`) --
  Whether to add an additional `mark_step` between forward and backward for pipelining
  host backward building and HPU forward computing.
- **non_blocking_data_copy** (`bool`, *optional*, defaults to `False`) --
  Whether to enable async data copy when preparing inputs.
- **profiling_warmup_steps** (`int`, *optional*, defaults to 0) --
  Number of training steps to ignore for profiling.
- **profiling_steps** (`int`, *optional*, defaults to 0) --
  Number of training steps to be captured when enabling profiling.
- **profiling_warmup_steps_eval** (`int`, *optional*, defaults to 0) --
  Number of eval steps to ignore for profiling.
- **profiling_steps_eval** (`int`, *optional*, defaults to 0) --
  Number of eval steps to be captured when enabling profiling.</paramsdesc><paramgroups>0</paramgroups></docstring>

GaudiTrainingArguments is built on top of the Transformers [TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)
to enable deployment on Habana's Gaudi.
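As a minimal configuration sketch (assuming `optimum-habana` is installed; the `Habana/bert-base-uncased` Gaudi config name is a placeholder, substitute one matching your model):

```python
from optimum.habana import GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./output",
    use_habana=True,                  # run on HPU instead of CPU/GPU
    use_lazy_mode=True,               # lazy mode is the default execution mode
    gaudi_config_name="Habana/bert-base-uncased",  # placeholder Gaudi config
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
```

These arguments are then passed to `GaudiTrainer` exactly as `TrainingArguments` would be passed to the Transformers `Trainer`.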




</div>

## GaudiSeq2SeqTrainingArguments[[optimum.habana.GaudiSeq2SeqTrainingArguments]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.habana.GaudiSeq2SeqTrainingArguments</name><anchor>optimum.habana.GaudiSeq2SeqTrainingArguments</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/training_args_seq2seq.py#L30</source><parameters>[{"name": "output_dir", "val": ": typing.Optional[str] = None"}, {"name": "overwrite_output_dir", "val": ": bool = False"}, {"name": "do_train", "val": ": bool = False"}, {"name": "do_eval", "val": ": bool = False"}, {"name": "do_predict", "val": ": bool = False"}, {"name": "eval_strategy", "val": ": typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no'"}, {"name": "prediction_loss_only", "val": ": bool = False"}, {"name": "per_device_train_batch_size", "val": ": int = 8"}, {"name": "per_device_eval_batch_size", "val": ": int = 8"}, {"name": "per_gpu_train_batch_size", "val": ": typing.Optional[int] = None"}, {"name": "per_gpu_eval_batch_size", "val": ": typing.Optional[int] = None"}, {"name": "gradient_accumulation_steps", "val": ": int = 1"}, {"name": "eval_accumulation_steps", "val": ": typing.Optional[int] = None"}, {"name": "eval_delay", "val": ": typing.Optional[float] = 0"}, {"name": "torch_empty_cache_steps", "val": ": typing.Optional[int] = None"}, {"name": "learning_rate", "val": ": float = 5e-05"}, {"name": "weight_decay", "val": ": float = 0.0"}, {"name": "adam_beta1", "val": ": float = 0.9"}, {"name": "adam_beta2", "val": ": float = 0.999"}, {"name": "adam_epsilon", "val": ": typing.Optional[float] = 1e-06"}, {"name": "max_grad_norm", "val": ": float = 1.0"}, {"name": "num_train_epochs", "val": ": float = 3.0"}, {"name": "max_steps", "val": ": int = -1"}, {"name": "lr_scheduler_type", "val": ": typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear'"}, {"name": "lr_scheduler_kwargs", "val": ": typing.Union[dict[str, typing.Any], str, NoneType] = <factory>"}, {"name": "warmup_ratio", "val": ": float = 0.0"}, {"name": "warmup_steps", "val": ": int = 
0"}, {"name": "log_level", "val": ": str = 'passive'"}, {"name": "log_level_replica", "val": ": str = 'warning'"}, {"name": "log_on_each_node", "val": ": bool = True"}, {"name": "logging_dir", "val": ": typing.Optional[str] = None"}, {"name": "logging_strategy", "val": ": typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps'"}, {"name": "logging_first_step", "val": ": bool = False"}, {"name": "logging_steps", "val": ": float = 500"}, {"name": "logging_nan_inf_filter", "val": ": typing.Optional[bool] = False"}, {"name": "save_strategy", "val": ": typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps'"}, {"name": "save_steps", "val": ": float = 500"}, {"name": "save_total_limit", "val": ": typing.Optional[int] = None"}, {"name": "save_safetensors", "val": ": typing.Optional[bool] = True"}, {"name": "save_on_each_node", "val": ": bool = False"}, {"name": "save_only_model", "val": ": bool = False"}, {"name": "restore_callback_states_from_checkpoint", "val": ": bool = False"}, {"name": "no_cuda", "val": ": bool = False"}, {"name": "use_cpu", "val": ": bool = False"}, {"name": "use_mps_device", "val": ": bool = False"}, {"name": "seed", "val": ": int = 42"}, {"name": "data_seed", "val": ": typing.Optional[int] = None"}, {"name": "jit_mode_eval", "val": ": bool = False"}, {"name": "use_ipex", "val": ": bool = False"}, {"name": "bf16", "val": ": bool = False"}, {"name": "fp16", "val": ": bool = False"}, {"name": "fp16_opt_level", "val": ": str = 'O1'"}, {"name": "half_precision_backend", "val": ": str = 'hpu_amp'"}, {"name": "bf16_full_eval", "val": ": bool = False"}, {"name": "fp16_full_eval", "val": ": bool = False"}, {"name": "tf32", "val": ": typing.Optional[bool] = None"}, {"name": "local_rank", "val": ": int = -1"}, {"name": "ddp_backend", "val": ": typing.Optional[str] = None"}, {"name": "tpu_num_cores", "val": ": typing.Optional[int] = None"}, {"name": "tpu_metrics_debug", "val": ": bool = False"}, {"name": "debug", "val": ": 
typing.Union[str, list[transformers.debug_utils.DebugOption]] = ''"}, {"name": "dataloader_drop_last", "val": ": bool = False"}, {"name": "eval_steps", "val": ": typing.Optional[float] = None"}, {"name": "dataloader_num_workers", "val": ": int = 0"}, {"name": "dataloader_prefetch_factor", "val": ": typing.Optional[int] = None"}, {"name": "past_index", "val": ": int = -1"}, {"name": "run_name", "val": ": typing.Optional[str] = None"}, {"name": "disable_tqdm", "val": ": typing.Optional[bool] = None"}, {"name": "remove_unused_columns", "val": ": typing.Optional[bool] = True"}, {"name": "label_names", "val": ": typing.Optional[list[str]] = None"}, {"name": "load_best_model_at_end", "val": ": typing.Optional[bool] = False"}, {"name": "metric_for_best_model", "val": ": typing.Optional[str] = None"}, {"name": "greater_is_better", "val": ": typing.Optional[bool] = None"}, {"name": "ignore_data_skip", "val": ": bool = False"}, {"name": "fsdp", "val": ": typing.Union[list[transformers.trainer_utils.FSDPOption], str, NoneType] = ''"}, {"name": "fsdp_min_num_params", "val": ": int = 0"}, {"name": "fsdp_config", "val": ": typing.Union[dict[str, typing.Any], str, NoneType] = None"}, {"name": "fsdp_transformer_layer_cls_to_wrap", "val": ": typing.Optional[str] = None"}, {"name": "accelerator_config", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "deepspeed", "val": ": typing.Union[dict, str, NoneType] = None"}, {"name": "label_smoothing_factor", "val": ": float = 0.0"}, {"name": "optim", "val": ": typing.Union[transformers.training_args.OptimizerNames, str, NoneType] = 'adamw_torch'"}, {"name": "optim_args", "val": ": typing.Optional[str] = None"}, {"name": "adafactor", "val": ": bool = False"}, {"name": "group_by_length", "val": ": bool = False"}, {"name": "length_column_name", "val": ": typing.Optional[str] = 'length'"}, {"name": "report_to", "val": ": typing.Union[NoneType, str, list[str]] = None"}, {"name": "ddp_find_unused_parameters", "val": ": 
typing.Optional[bool] = False"}, {"name": "ddp_bucket_cap_mb", "val": ": typing.Optional[int] = 230"}, {"name": "ddp_broadcast_buffers", "val": ": typing.Optional[bool] = None"}, {"name": "dataloader_pin_memory", "val": ": bool = True"}, {"name": "dataloader_persistent_workers", "val": ": bool = False"}, {"name": "skip_memory_metrics", "val": ": bool = True"}, {"name": "use_legacy_prediction_loop", "val": ": bool = False"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "resume_from_checkpoint", "val": ": typing.Optional[str] = None"}, {"name": "hub_model_id", "val": ": typing.Optional[str] = None"}, {"name": "hub_strategy", "val": ": typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save'"}, {"name": "hub_token", "val": ": typing.Optional[str] = None"}, {"name": "hub_private_repo", "val": ": typing.Optional[bool] = None"}, {"name": "hub_always_push", "val": ": bool = False"}, {"name": "hub_revision", "val": ": typing.Optional[str] = None"}, {"name": "gradient_checkpointing", "val": ": bool = False"}, {"name": "gradient_checkpointing_kwargs", "val": ": typing.Union[dict[str, typing.Any], str, NoneType] = None"}, {"name": "include_inputs_for_metrics", "val": ": bool = False"}, {"name": "include_for_metrics", "val": ": list = <factory>"}, {"name": "eval_do_concat_batches", "val": ": bool = True"}, {"name": "fp16_backend", "val": ": str = 'auto'"}, {"name": "push_to_hub_model_id", "val": ": typing.Optional[str] = None"}, {"name": "push_to_hub_organization", "val": ": typing.Optional[str] = None"}, {"name": "push_to_hub_token", "val": ": typing.Optional[str] = None"}, {"name": "mp_parameters", "val": ": str = ''"}, {"name": "auto_find_batch_size", "val": ": bool = False"}, {"name": "full_determinism", "val": ": bool = False"}, {"name": "torchdynamo", "val": ": typing.Optional[str] = None"}, {"name": "ray_scope", "val": ": typing.Optional[str] = 'last'"}, {"name": "ddp_timeout", "val": ": int = 1800"}, {"name": "torch_compile", "val": ": 
bool = False"}, {"name": "torch_compile_backend", "val": ": typing.Optional[str] = None"}, {"name": "torch_compile_mode", "val": ": typing.Optional[str] = None"}, {"name": "include_tokens_per_second", "val": ": typing.Optional[bool] = False"}, {"name": "include_num_input_tokens_seen", "val": ": typing.Optional[bool] = False"}, {"name": "neftune_noise_alpha", "val": ": typing.Optional[float] = None"}, {"name": "optim_target_modules", "val": ": typing.Union[NoneType, str, list[str]] = None"}, {"name": "batch_eval_metrics", "val": ": bool = False"}, {"name": "eval_on_start", "val": ": bool = False"}, {"name": "use_liger_kernel", "val": ": typing.Optional[bool] = False"}, {"name": "liger_kernel_config", "val": ": typing.Optional[dict[str, bool]] = None"}, {"name": "eval_use_gather_object", "val": ": typing.Optional[bool] = False"}, {"name": "average_tokens_across_devices", "val": ": typing.Optional[bool] = True"}, {"name": "use_habana", "val": ": typing.Optional[bool] = False"}, {"name": "gaudi_config_name", "val": ": typing.Optional[str] = None"}, {"name": "use_lazy_mode", "val": ": typing.Optional[bool] = True"}, {"name": "use_hpu_graphs", "val": ": typing.Optional[bool] = False"}, {"name": "use_hpu_graphs_for_inference", "val": ": typing.Optional[bool] = False"}, {"name": "use_hpu_graphs_for_training", "val": ": typing.Optional[bool] = False"}, {"name": "use_compiled_autograd", "val": ": typing.Optional[bool] = False"}, {"name": "compile_from_sec_iteration", "val": ": typing.Optional[bool] = False"}, {"name": "compile_dynamic", "val": ": typing.Optional[bool] = None"}, {"name": "use_zero3_leaf_promotion", "val": ": typing.Optional[bool] = False"}, {"name": "cache_size_limit", "val": ": typing.Optional[int] = None"}, {"name": "use_regional_compilation", "val": ": typing.Optional[bool] = False"}, {"name": "inline_inbuilt_nn_modules", "val": ": typing.Optional[bool] = None"}, {"name": "allow_unspec_int_on_nn_module", "val": ": typing.Optional[bool] = None"}, {"name": 
"disable_tensor_cache_hpu_graphs", "val": ": typing.Optional[bool] = False"}, {"name": "max_hpu_graphs", "val": ": typing.Optional[int] = None"}, {"name": "distribution_strategy", "val": ": typing.Optional[str] = 'ddp'"}, {"name": "context_parallel_size", "val": ": typing.Optional[int] = 1"}, {"name": "minimize_memory", "val": ": typing.Optional[bool] = False"}, {"name": "throughput_warmup_steps", "val": ": typing.Optional[int] = 0"}, {"name": "adjust_throughput", "val": ": bool = False"}, {"name": "pipelining_fwd_bwd", "val": ": typing.Optional[bool] = False"}, {"name": "ignore_eos", "val": ": typing.Optional[bool] = True"}, {"name": "non_blocking_data_copy", "val": ": typing.Optional[bool] = False"}, {"name": "profiling_warmup_steps", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_steps", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_warmup_steps_eval", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_steps_eval", "val": ": typing.Optional[int] = 0"}, {"name": "profiling_record_shapes", "val": ": typing.Optional[bool] = True"}, {"name": "profiling_with_stack", "val": ": typing.Optional[bool] = False"}, {"name": "attn_implementation", "val": ": typing.Optional[str] = 'eager'"}, {"name": "flash_attention_recompute", "val": ": bool = False"}, {"name": "flash_attention_fast_softmax", "val": ": bool = False"}, {"name": "flash_attention_causal_mask", "val": ": bool = False"}, {"name": "flash_attention_fp8", "val": ": bool = False"}, {"name": "sdp_on_bf16", "val": ": bool = False"}, {"name": "fp8", "val": ": typing.Optional[bool] = False"}, {"name": "sortish_sampler", "val": ": bool = False"}, {"name": "predict_with_generate", "val": ": bool = False"}, {"name": "generation_max_length", "val": ": typing.Optional[int] = None"}, {"name": "generation_num_beams", "val": ": typing.Optional[int] = None"}, {"name": "generation_config", "val": ": typing.Union[str, pathlib.Path, 
optimum.habana.transformers.generation.configuration_utils.GaudiGenerationConfig, NoneType] = None"}]</parameters><paramsdesc>- **predict_with_generate** (`bool`, *optional*, defaults to `False`) --
  Whether to use generate to calculate generative metrics (ROUGE, BLEU).
- **generation_max_length** (`int`, *optional*) --
  The `max_length` to use on each evaluation loop when `predict_with_generate=True`. Will default to the
  `max_length` value of the model configuration.
- **generation_num_beams** (`int`, *optional*) --
  The `num_beams` to use on each evaluation loop when `predict_with_generate=True`. Will default to the
  `num_beams` value of the model configuration.
- **generation_config** (`str` or `Path` or `transformers.generation.GenerationConfig`, *optional*) --
  Allows to load a `transformers.generation.GenerationConfig` from the `from_pretrained` method. This can be either:

  - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
    huggingface.co.
  - a path to a *directory* containing a configuration file saved using the
    [transformers.GenerationConfig.save_pretrained](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/text_generation#transformers.GenerationConfig.save_pretrained) method, e.g., `./my_model_directory/`.
  - a `transformers.generation.GenerationConfig` object.</paramsdesc><paramgroups>0</paramgroups></docstring>

GaudiSeq2SeqTrainingArguments is built on top of the Transformers [Seq2SeqTrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments)
to enable deployment on Intel Gaudi devices.
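As a sketch, enabling generation-based evaluation on Gaudi could look like the following (the `output_dir` and `gaudi_config_name` values are illustrative; the argument names come from the signature above):

```python
from optimum.habana import GaudiSeq2SeqTrainingArguments

# Illustrative values; gaudi_config_name must point to an actual Gaudi configuration
args = GaudiSeq2SeqTrainingArguments(
    output_dir="./out",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/t5",
    predict_with_generate=True,  # compute generative metrics (ROUGE, BLEU) with generate()
    generation_max_length=128,
    generation_num_beams=4,
)
```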





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>optimum.habana.GaudiSeq2SeqTrainingArguments.to_dict</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/training_args_seq2seq.py#L83</source><parameters>[]</parameters></docstring>

Serializes this instance while replacing `Enum` members with their values and `GaudiGenerationConfig` objects with dictionaries (for JSON
serialization support). It obfuscates token values by removing them.


</div></div>

<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/package_reference/trainer.mdx" />

### GaudiConfig
https://huggingface.co/docs/optimum.habana/v1.19.0/package_reference/gaudi_config.md

# GaudiConfig

To define a configuration for a specific workload, you can use the `GaudiConfig` class.

Here is a description of each configuration parameter:
- `use_fused_adam` controls whether to use the [custom fused implementation of the ADAM optimizer provided by Intel® Gaudi® AI Accelerator](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Custom_Ops_PyTorch.html#custom-optimizers).
- `use_fused_clip_norm` controls whether to use the [custom fused implementation of gradient norm clipping provided by Intel® Gaudi® AI Accelerator](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Custom_Ops_PyTorch.html#other-custom-ops).
- `use_torch_autocast` controls whether to enable PyTorch autocast; it is used to define good pre-defined configurations, but users should favor the `--bf16` training argument.
- `use_dynamic_shapes` controls whether to enable dynamic shapes support when processing the input dataset.
- `autocast_bf16_ops` is the list of operations that should run in bf16 precision under the autocast context; the `LOWER_LIST` environment flag is the preferred way to override the operator autocast list.
- `autocast_fp32_ops` is the list of operations that should run in fp32 precision under the autocast context; the `FP32_LIST` environment flag is the preferred way to override the operator autocast list.
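Since the `GaudiConfig` class accepts its parameters as keyword arguments, a configuration can also be built directly in Python. A sketch, equivalent to a `gaudi_config.json` with the same fields:

```python
from optimum.habana import GaudiConfig

# Same fields as the JSON configuration files stored on the Hub
gaudi_config = GaudiConfig(
    use_fused_adam=True,
    use_fused_clip_norm=True,
    use_torch_autocast=True,
)
```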

Parameter values of this class can be set from an external JSON file.

You can find examples of Gaudi configurations in the [Intel Gaudi model repository on the Hugging Face Hub](https://huggingface.co/habana).
For instance, [for BERT Large we have](https://huggingface.co/Habana/bert-large-uncased-whole-word-masking/blob/main/gaudi_config.json):
```JSON
{
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "use_torch_autocast": true
}
```

More advanced configuration file [for Stable Diffusion 2](https://huggingface.co/Habana/stable-diffusion-2/blob/main/gaudi_config.json):
```JSON
{
  "use_torch_autocast": true,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "autocast_bf16_ops": [
    "_convolution.deprecated",
    "_convolution",
    "conv1d",
    "conv2d",
    "conv3d",
    "conv_tbc",
    "conv_transpose1d",
    "conv_transpose2d.input",
    "conv_transpose3d.input",
    "convolution",
    "prelu",
    "addmm",
    "addmv",
    "addr",
    "matmul",
    "einsum",
    "mm",
    "mv",
    "silu",
    "linear",
    "addbmm",
    "baddbmm",
    "bmm",
    "chain_matmul",
    "linalg_multi_dot",
    "layer_norm",
    "group_norm"
  ],
  "autocast_fp32_ops": [
    "acos",
    "asin",
    "cosh",
    "erfinv",
    "exp",
    "expm1",
    "log",
    "log10",
    "log2",
    "log1p",
    "reciprocal",
    "rsqrt",
    "sinh",
    "tan",
    "pow.Tensor_Scalar",
    "pow.Tensor_Tensor",
    "pow.Scalar",
    "softplus",
    "frobenius_norm",
    "frobenius_norm.dim",
    "nuclear_norm",
    "nuclear_norm.dim",
    "cosine_similarity",
    "poisson_nll_loss",
    "cosine_embedding_loss",
    "nll_loss",
    "nll_loss2d",
    "hinge_embedding_loss",
    "kl_div",
    "l1_loss",
    "smooth_l1_loss",
    "huber_loss",
    "mse_loss",
    "margin_ranking_loss",
    "multilabel_margin_loss",
    "soft_margin_loss",
    "triplet_margin_loss",
    "multi_margin_loss",
    "binary_cross_entropy_with_logits",
    "dist",
    "pdist",
    "cdist",
    "renorm",
    "logsumexp"
  ]
}
```

To instantiate a Gaudi configuration yourself in your script, you can do the following:
```python
from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig.from_pretrained(
    gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    token=model_args.token,
)
```
and pass it to the trainer with the `gaudi_config` argument.


## GaudiConfig[[optimum.habana.GaudiConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.habana.GaudiConfig</name><anchor>optimum.habana.GaudiConfig</anchor><source>https://github.com/huggingface/optimum-habana/blob/v1.19.0/optimum/habana/transformers/gaudi_configuration.py#L51</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/package_reference/gaudi_config.mdx" />

### Accelerating Training
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/accelerate_training.md

# Accelerating Training

Gaudi offers several possibilities to make training faster.
They are all compatible with each other and can be coupled with [distributed training](https://huggingface.co/docs/optimum/habana/usage_guides/distributed).


## Execution Modes

The following execution modes are supported:
- *Lazy mode*, where operations are accumulated in a graph whose execution is triggered in a lazy manner.
  This allows the graph compiler to optimize the device execution for these operations.
- *Eager mode*, where one operation at a time is executed.
- *Eager mode* with *torch.compile*, where a model (or part of a model) is enclosed into a graph.

<Tip  warning={true}>

Not all models are supported in Eager mode or in Eager mode with torch.compile yet (both are still in development).
Lazy mode is the default mode.

</Tip>

In lazy mode, the graph compiler generates optimized binary code that implements the given model topology on Gaudi. It performs operator fusion, data layout management, parallelization, pipelining and memory management, as well as graph-level optimizations.

To execute your training in lazy mode, you must provide the following training arguments:
```python
args = GaudiTrainingArguments(
    # same arguments as in Transformers,
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name=path_to_my_gaudi_config
)
```

<Tip>

In lazy mode, the last batch is filled with extra samples by default so that it has the same dimensions as previous batches.
This avoids extra graph compilations during training.
You can also discard the last batch with `dataloader_drop_last=True`.

</Tip>

<Tip>

In lazy mode, the first two or three training iterations may be slower due to graph compilations.
To exclude them from the throughput computed at the end of training, you can add the following training argument: `throughput_warmup_steps=3`.

</Tip>


## Mixed-Precision Training

Mixed-precision training computes some operations in lighter data types to accelerate training.
Optimum for Intel Gaudi enables mixed-precision training in a similar fashion to 🤗 Transformers:
- argument `--bf16` enables usage of PyTorch autocast
- argument `--half_precision_backend [hpu_amp, cpu_amp]` is used to specify a device on which mixed precision operations should be performed


<Tip warning={true}>

Please refer to the [advanced autocast usage on Gaudi](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/Autocast.html) for more information regarding:
- default autocast operations
- default autocast operations override

</Tip>


## HPU Graphs

The flexibility of PyTorch comes at a price: usually the same Pythonic logic is processed over and over at every training step.
This may lead to situations where it takes longer for the CPU to schedule work on Gaudi than for Gaudi to actually compute it.
To cope with such host-bound workloads, you may want to try enabling the _HPU Graphs_ feature, which records the computational graph once and then replays it, much faster, on subsequent steps.

To do so, specify `--use_hpu_graphs_for_training True`.
This option will wrap the model in [`habana_frameworks.torch.hpu.ModuleCacher`](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/HPU_Graphs_Training.html#training-loop-with-modulecacher), which automatically records _HPU Graphs_ on the model's usage.

For multi-worker distributed training, you also need to specify `--distribution_strategy fast_ddp`.
This option replaces the usage of `torch.nn.parallel.DistributedDataParallel` with much simpler and usually faster `optimum.habana.distributed.all_reduce_gradients`.
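Putting both flags together in a script, a sketch could look like this (the `output_dir` and `gaudi_config_name` values are illustrative):

```python
from optimum.habana import GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",
    use_hpu_graphs_for_training=True,  # wrap the model with ModuleCacher
    distribution_strategy="fast_ddp",  # required for multi-worker distributed runs
)
```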

<Tip warning={true}>

Use with caution: currently using HPU Graphs for training may not support all the possible cases.
However, the potential performance gain could be dramatic!

</Tip>


## Fast DDP

For distributed training on several devices, you can also specify `--distribution_strategy fast_ddp`.
This option replaces the usage of `torch.nn.parallel.DistributedDataParallel` with much simpler and usually faster `optimum.habana.distributed.all_reduce_gradients`.


## Pipelining Forward and Backward Passes

There are two stages when running models on Intel Gaudi HPUs: Python code interpretation on the CPU, and HPU recipe computation.
The HPU computation stage can be triggered manually or when a copy to the CPU is requested, and generally HPU computation is triggered after `loss.backward()` to make the CPU code interpretation and HPU recipe computation overlap as shown in the following illustration:

```
CPU:...forward + backward   ...optimizer  ...forward + backward   ...optimizer  ...
HPU:........................forward + backward...optimizer......forward + backward...optimizer
```

However, when CPU code interpretation takes longer than HPU computation, it becomes the bottleneck and HPU computation cannot be triggered until CPU code interpretation is done.
So one potential optimization for such cases is to trigger the HPU *forward* computation right after the CPU *forward* interpretation and before the CPU *backward* interpretation.
You can see an example below where the CPU *backward* interpretation overlaps with the HPU *forward* computation:

```
CPU:...forward   ...backward   ...optimizer  ...forward   ...backward   ...optimizer   ...
HPU:.............forward.......backward......optimizer......forward.....backward.......optimizer
```

To enable this optimization, you can set the following training argument `--pipelining_fwd_bwd True`.

**We recommend using it on Gaudi2** as the host will often be the bottleneck.
You should be able to see a speedup on first-generation Gaudi too, but it will be less significant than on Gaudi2 because your run is more likely to be HPU-bound.

Furthermore, *when training models that require large device memory*, we suggest disabling this optimization because *it will increase the HPU memory usage*.


## Use More Workers for Data Loading

If the workload of the data loader is heavy, you can increase the number of workers to make your run faster.
You can enable this with the training argument [`--dataloader_num_workers N`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.dataloader_num_workers) with `N` being the number of workers to use.

**We recommend using it with datasets containing images.**
Besides, using `--dataloader_num_workers 1` should help in most cases, as it moves data loading out of the main process.


## Non-Blocking Data Copy

This optimization is well-suited for models with a high cost of copying data from the host to the device (e.g. vision models like ViT or Swin).
You can enable it with the training argument `--non_blocking_data_copy True`.

**We recommend using it on Gaudi2** where the host can continue to execute other tasks (e.g. graph building) to get a better pipelining between the host and the device.
On first-generation Gaudi, the device execution time is longer, so one should not expect any speedup.
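The host-side optimizations above (pipelined forward/backward passes, extra data-loading workers, and non-blocking copies) are all plain training arguments and can be combined. A sketch (the `output_dir` and `gaudi_config_name` values are illustrative):

```python
from optimum.habana import GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/vit",  # illustrative; pick a configuration matching your model
    pipelining_fwd_bwd=True,         # overlap CPU backward interpretation with HPU forward
    dataloader_num_workers=2,        # move data loading out of the main process
    non_blocking_data_copy=True,     # asynchronous host-to-device copies
)
```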


## Custom Operators

Intel Gaudi provides a few custom operators that achieve better performance than their PyTorch counterparts on Gaudi.
You can also define your own custom operator for Gaudi as described [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_CustomOp_API/page_index.html).


### Fused ADAM

Intel Gaudi offers a [custom fused ADAM implementation](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Custom_Ops_PyTorch.html#custom-optimizers).
It can be used by specifying `"use_fused_adam": true` in the Gaudi configuration file.

<Tip warning={true}>

The default value of *epsilon* is `1e-6` for the Intel Gaudi fused ADAM optimizer, while it is `1e-8` for `torch.optim.AdamW`.

</Tip>


### Fused Gradient Norm Clipping

Intel Gaudi provides a [custom gradient norm clipping implementation](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Custom_Ops_PyTorch.html#other-custom-ops).
It can be used by specifying `"use_fused_clip_norm": true` in the Gaudi configuration file.

### Gaudi Optimized Flash Attention

The flash attention algorithm, with additional Intel® Gaudi® AI Accelerator optimizations, is supported in both Lazy and Eager modes.
See [Using Fused Scaled Dot Product Attention (FusedSDPA)](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Optimization_in_PyTorch_Models.html#using-fused-scaled-dot-product-attention-fusedsdpa). 

## Tracking Memory Usage

Live memory statistics are displayed every `logging_steps` (default is 500) steps:
- `memory_allocated (GB)` refers to the *current* memory consumption in GB,
- `max_memory_allocated (GB)` refers to the *maximum* memory consumption reached during the run in GB,
- `total_memory_available (GB)` refers to the *total* memory available on the device in GB.

These metrics can help you to adjust the batch size of your runs.
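The same statistics can also be queried programmatically. The sketch below assumes the memory-stats API exposed by `habana_frameworks.torch.hpu` (see the documentation link at the end of this section); the exact function names are an assumption based on that API:

```python
import habana_frameworks.torch as ht

GB = 1024**3
# Assumed habana_frameworks memory-stats API, mirroring the metrics logged by the trainer
print(f"memory_allocated (GB): {ht.hpu.memory_allocated() / GB:.2f}")
print(f"max_memory_allocated (GB): {ht.hpu.max_memory_allocated() / GB:.2f}")
```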

<Tip warning={true}>

In distributed mode, memory stats are communicated only by the main process.

</Tip>

You can take a look at [Intel Gaudi AI Accelerator's official documentation](https://docs.habana.ai/en/latest/PyTorch/PyTorch_User_Guide/Python_Packages.html#memory-stats-apis) for more information about the memory stats API.


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/accelerate_training.mdx" />

### Overview
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/overview.md

# Overview

Welcome to the Optimum for Intel® Gaudi® AI Accelerator how-to guides!

These guides tackle more advanced topics and will show you how to easily get the best from HPUs.
Here's what you'll find:

- [Script adaptation](./script_adaptation): Learn how to adapt a Transformers/Diffusers script for Intel Gaudi
- [Pretraining models](./pretraining): A guide to pretraining a model using Transformers
- [Accelerating training](./accelerate_training): Discover techniques to speed up training
- [Accelerating inference](./accelerate_inference) Learn how to optimize inference for faster execution
- [Using DeepSpeed](./deepspeed): Scale your training to handle larger models
- [Multi-node training](./multi_node_training): Speed up runs with multi-node setups
- [Quantization](./quantization): Explore FP8 and UINT4 quantization for optimized inference


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/overview.mdx" />

### Comparing HPU-Optimized `safe_softmax` with Native PyTorch `safe_softmax`
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/safe_softmax.md

# Comparing HPU-Optimized `safe_softmax` with Native PyTorch `safe_softmax`

This article demonstrates how to benchmark and compare the performance of the Habana Processing Unit (HPU)-optimized `safe_softmax` operation against the native PyTorch implementation. The provided Python script guides you through the process step-by-step, with detailed explanations for each part. Additionally, we will provide some context about `safe_softmax`, its purpose, and its use cases.

---

## Important Note: No Special Setup Required

The `safe_softmax` operation works out-of-the-box in PyTorch. When running your code on Habana hardware, the HPU-optimized implementation is automatically utilized without any additional configuration. This seamless integration allows you to benefit from performance improvements without modifying your existing code.

---

## What is `safe_softmax`?

The `softmax` function is a common operation in machine learning, particularly in classification tasks. It converts raw logits into probabilities by applying the exponential function and normalizing the results. However, the standard `softmax` can encounter numerical instability when dealing with very large or very small values in the input tensor, leading to overflow or underflow issues.

To address this, `safe_softmax` is implemented. It stabilizes the computation by subtracting the maximum value in each row (or along the specified dimension) from the logits before applying the exponential function. This ensures that the largest value in the exponent is zero, preventing overflow.
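As a minimal, pure-Python illustration of the max-subtraction trick (a sketch of the idea, not the HPU kernel): shifting by the row maximum keeps every exponent at or below zero, so even very large logits do not overflow.

```python
import math

def safe_softmax_row(row):
    """Numerically stable softmax over one row of logits."""
    m = max(row)
    # After the shift, exp() sees values <= 0, so its result is <= 1: no overflow
    exps = [math.exp(x - m) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

# A naive exp(1002.0) would overflow a float; the shifted version is fine
probs = safe_softmax_row([1000.0, 1001.0, 1002.0])
print([round(p, 4) for p in probs])
```

Note that `-inf` entries are also handled gracefully for rows containing at least one finite value, since `exp(-inf)` evaluates to `0.0`.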

### Why is `safe_softmax` important?

- **Numerical Stability**: Prevents overflow/underflow issues during computation.
- **Widely Used**: Commonly used in neural networks, especially in the final layer for classification tasks.
- **Efficiency**: Optimized implementations can significantly improve performance on specialized hardware like GPUs or HPUs.

---

## Step-by-Step Explanation of the Code

### 1. **Importing Required Libraries**

```python
import torch
import timeit
import habana_frameworks.torch as ht
from torch._decomp.decompositions import safe_softmax as native_safe_softmax
```

- **`torch`**: The core PyTorch library for tensor operations.
- **`timeit`**: A Python module for measuring execution time.
- **`habana_frameworks.torch`**: Provides support for Habana hardware (HPUs).
- **`safe_softmax`**: The native PyTorch implementation of `safe_softmax` is imported for comparison.

---

### 2. **Defining the HPU-Optimized `safe_softmax`**

```python
hpu_safe_softmax = torch.ops.aten._safe_softmax.default
```

- The HPU-optimized version of `safe_softmax` is accessed via the `torch.ops.aten` namespace. This implementation is specifically designed to leverage the Habana hardware for faster execution.

---

### 3. **Preparing the Input Tensor**

```python
input_tensor = torch.tensor([[1.0, 2.0, float("-inf")], [3.0, 4.0, 5.0]]).to("hpu")
```

- A 2D tensor is created with some typical values, including `-inf` to simulate edge cases.
- The tensor is moved to the HPU device using `.to("hpu")`.

---

### 4. **Warmup for Fair Benchmarking**

```python
hpu_safe_softmax(input_tensor, dim=1); ht.hpu.synchronize()
native_safe_softmax(input_tensor, dim=1); ht.hpu.synchronize()
```

- Both the HPU-optimized and native implementations are executed once before benchmarking. This ensures that any initialization overhead is excluded from the timing measurements.
- `ht.hpu.synchronize()` ensures that all HPU operations are completed before proceeding.

---

### 5. **Benchmarking the Implementations**

```python
num_iterations = 10000
hpu_time = timeit.timeit(
    "hpu_safe_softmax(input_tensor, dim=1); ht.hpu.synchronize()",
    globals=globals(),
    number=num_iterations
)
native_time = timeit.timeit(
    "native_safe_softmax(input_tensor, dim=1); ht.hpu.synchronize()",
    globals=globals(),
    number=num_iterations
)
```

- The `timeit` module is used to measure the execution time of each implementation over 10,000 iterations.
- The `globals=globals()` argument allows the `timeit` module to access the defined variables and functions in the script.

---

### 6. **Printing the Results**

```python
print(f"Performance comparison over {num_iterations} iterations:")
print(f"Native safe_softmax: {native_time:.6f} seconds")
print(f"HPU safe_softmax: {hpu_time:.6f} seconds")
```

- The execution times for both implementations are printed, allowing for a direct comparison of their performance.

---

## Example Output

After running the script, you might see output similar to the following (lower is better):

```
Performance comparison over 10000 iterations:
Native safe_softmax: 1.004057 seconds
HPU safe_softmax: 0.104004 seconds
```

<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/safe_softmax.mdx" />

### Adapt a Transformers/Diffusers script to Intel Gaudi
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/script_adaptation.md

# Adapt a Transformers/Diffusers script to Intel Gaudi

🤗 Optimum for Intel Gaudi features HPU-optimized support for many of the latest 🤗 Transformers and Diffusers models.
Converting a script to use a model optimized for Gaudi devices only requires a simple adaptation.

## Transformers

Here is how to do a transformers script adaptation for Intel Gaudi:
```diff
- from transformers import Trainer, TrainingArguments
+ from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# Define the training arguments
- training_args = TrainingArguments(
+ training_args = GaudiTrainingArguments(
+   use_habana=True,
+   use_lazy_mode=True,
+   gaudi_config_name=gaudi_config_name,
  ...
)

# Initialize our Trainer
- trainer = Trainer(
+ trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset
    ... # other arguments
)
```

where `gaudi_config_name` is the name of a model from the [Hub](https://huggingface.co/Habana) or a path to a local Gaudi configuration file.
Gaudi configurations are stored as JSON files in model repositories, but you can also write your own.
More information can be found [here](../package_reference/gaudi_config).

## Diffusers

🤗 Optimum for Intel Gaudi also features HPU-optimized support for the 🤗 Diffusers library.
Thus, you can easily deploy Stable Diffusion on Gaudi for performing text-to-image generation.

Here is how to use it and the differences with the 🤗 Diffusers library:
```diff
- from diffusers import DDIMScheduler, StableDiffusionPipeline
+ from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline


model_name = "runwayml/stable-diffusion-v1-5"

- scheduler = DDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
+ scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

- pipeline = StableDiffusionPipeline.from_pretrained(
+ pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
+   use_habana=True,
+   use_hpu_graphs=True,
+   gaudi_config="Habana/stable-diffusion",
)

outputs = pipeline(
    ["An image of a squirrel in Picasso style"],
    num_images_per_prompt=16,
+   batch_size=4,
)
```


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/script_adaptation.mdx" />

### Accelerating Inference
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/accelerate_inference.md

# Accelerating Inference

Intel Gaudi offers several possibilities to make inference faster.


## Lazy Mode

The following execution modes are supported:
- *Lazy mode*, where operations are accumulated in a graph whose execution is triggered in a lazy manner.
  This allows the graph compiler to optimize the device execution for these operations.
- *Eager mode*, where one operation at a time is executed.
- *Eager mode* with *torch.compile*, where a model (or part of a model) is enclosed into a graph.

<Tip  warning={true}>

Not all models are supported in Eager mode or in Eager mode with torch.compile yet (both are still in development).
Lazy mode is the default mode.

</Tip>

In lazy mode, the graph compiler generates optimized binary code that implements the given model topology on Gaudi.
It performs operator fusion, data layout management, parallelization, pipelining and memory management, as well as graph-level optimizations.

To execute inference in lazy mode, you must provide the following arguments:
```python
args = GaudiTrainingArguments(
    # same arguments as in Transformers,
    use_habana=True,
    use_lazy_mode=True,
)
```

<Tip>

In lazy mode, the last batch may trigger an extra compilation because it could be smaller than previous batches.
To avoid this, you can discard the last batch with `dataloader_drop_last=True`.

</Tip>


## HPU Graphs

Gaudi provides a way to run fast inference with HPU Graphs.
It consists of capturing a series of operations (i.e. graphs) in an HPU stream and then replaying them in an optimized way (more information [here](https://docs.habana.ai/en/latest/PyTorch/Inference_on_Gaudi/Inference_using_HPU_Graphs/Inference_using_HPU_Graphs.html)).
Thus, you can apply this to the `forward` method of your model to run it efficiently at inference.
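For a model used outside of these classes, the wrapping can be sketched manually. The snippet below assumes the `wrap_in_hpu_graph` helper from `habana_frameworks.torch.hpu` and uses a small stand-in model:

```python
import torch
import habana_frameworks.torch as ht

# A small stand-in model; any nn.Module works the same way
model = torch.nn.Linear(16, 4).eval().to("hpu")

# Assumes the wrap_in_hpu_graph helper: it records the forward graph on the first
# call, then replays it on subsequent calls with matching input shapes
model = ht.hpu.wrap_in_hpu_graph(model)

with torch.no_grad():
    out = model(torch.randn(8, 16).to("hpu"))
```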

HPU Graphs are integrated into the `GaudiTrainer` and the `GaudiStableDiffusionPipeline` so that one can use them very easily:
- `GaudiTrainer` needs the training argument `use_hpu_graphs_for_inference` to be set to `True` as follows:
```python
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# define the training arguments
training_args = GaudiTrainingArguments(
    use_habana=True,
    use_lazy_mode=True,
    use_hpu_graphs_for_inference=True,
    gaudi_config_name=gaudi_config_name,
    ...
)

# Initialize our Trainer
trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset
    ... # other arguments
)
```
- `GaudiStableDiffusionPipeline` needs its argument `use_hpu_graphs` to be set to `True` as follows:
```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "CompVis/stable-diffusion-v1-4"

scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion",
)

outputs = pipeline(
    ["An image of a squirrel in Picasso style"],
    num_images_per_prompt=16,
    batch_size=4,
)
```

<Tip warning={true}>

With HPU Graphs and in lazy mode, the *first couple of iterations* may be slower due to graph compilations.

</Tip>


## Custom Operators

Intel Gaudi provides a few custom operators that achieve better performance than their PyTorch counterparts.
You can also define your own custom operator for Gaudi as described [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_CustomOp_API/page_index.html).


### Gaudi Optimized Flash Attention

Flash attention algorithm with additional Intel Gaudi AI Accelerator optimizations is supported for both Lazy and Eager mode.
See [Using Fused Scaled Dot Product Attention (FusedSDPA)](https://docs.habana.ai/en/latest/PyTorch/Model_Optimization_PyTorch/Optimization_in_PyTorch_Models.html#using-fused-scaled-dot-product-attention-fusedsdpa). 


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/accelerate_inference.mdx" />

### Quantization
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/quantization.md

# Quantization

Intel® Gaudi® offers several possibilities to make inference faster. For examples of FP8 and UINT4 inference, see the
[text-generation](/examples/text-generation) example.

This guide provides the steps required to enable FP8 and UINT4 precision on your Intel® Gaudi® AI
accelerator using the Intel® Neural Compressor (INC) package.

## Run Inference Using FP8

When running inference on large language models (LLMs), high memory usage is often the bottleneck. Therefore,
using the FP8 data type for inference on large language models halves the required memory bandwidth. In addition,
FP8 compute is twice as fast as BF16 compute, so even compute-bound workloads, such as offline inference with
large batch sizes, benefit.

Refer to the [Run Inference Using FP8](https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Inference_Using_FP8.html)
section of the [Intel® Gaudi® AI Accelerator documentation](https://docs.habana.ai/en/latest/index.html).

## Run Inference Using UINT4

When running inference on large language models (LLMs), high memory usage is often the bottleneck. Therefore,
using the UINT4 data type for inference on large language models halves the required memory bandwidth compared to
running inference in FP8.

Refer to the [Run Inference Using UINT4](https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Inference_Using_UINT4.html)
section of the [Intel® Gaudi® AI Accelerator documentation](https://docs.habana.ai/en/latest/index.html).


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/quantization.mdx" />

### Pretraining Transformers with Optimum for Intel Gaudi
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/pretraining.md

# Pretraining Transformers with Optimum for Intel Gaudi

Pretraining a model from Transformers, like BERT, is as easy as fine-tuning it.
The model should be instantiated from a configuration with `.from_config` and not from a pretrained checkpoint with `.from_pretrained`.
Here is how it should look with GPT2 for instance:
```python
from transformers import AutoConfig, AutoModelForXXX

config = AutoConfig.from_pretrained("gpt2")
model = AutoModelForXXX.from_config(config)
```
where `XXX` is the task to perform, such as `ImageClassification`.

The following is a working example where BERT is pretrained for masked language modeling:
```python
from datasets import load_dataset
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling

# Load the training set (this one has already been preprocessed)
training_set = load_dataset("philschmid/processed_bert_dataset", split="train")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("philschmid/bert-base-uncased-2022-habana")

# Instantiate an untrained model
config = AutoConfig.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_config(config)

model.resize_token_embeddings(len(tokenizer))

# The data collator will take care of randomly masking the tokens
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)

training_args = GaudiTrainingArguments(
    output_dir="/tmp/bert-base-uncased-mlm",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",
)

# Initialize our Trainer
trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=training_set,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

trainer.train()
```

You can see another example of pretraining in [this blog post](https://huggingface.co/blog/pretraining-bert).


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/pretraining.mdx" />

### Multi-node Training
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/multi_node_training.md

# Multi-node Training

Using several Gaudi servers to perform multi-node training can be done easily. This guide shows how to:
- set up several Gaudi instances
- set up your computing environment
- launch a multi-node run


## Setting up several Gaudi instances

Two types of configurations are possible:
- scale-out using Gaudi NICs or Host NICs (on-premises)
- scale-out using Intel® Tiber™ AI Cloud instances


### On premises

To set up your servers on premises, check out the [installation](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html) and [distributed training](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Scaling_Guide/index.html) pages of Intel® Gaudi® AI Accelerator's documentation.


### Intel Tiber AI Cloud instances

Follow the steps on the [creating an account and getting an instance](https://docs.habana.ai/en/latest/Intel_DevCloud_Quick_Start/Intel_DevCloud_Quick_Start.html#creating-an-account-and-getting-an-instance) page of Intel® Gaudi® AI Accelerator's documentation.


## Launching a Multi-node Run

Once your Intel Gaudi instances are ready, follow the steps on the [setting up a multi-server environment](https://docs.habana.ai/en/latest/Intel_DevCloud_Quick_Start/Intel_DevCloud_Quick_Start.html#setting-up-a-multi-server-environment) page of Intel® Gaudi® AI Accelerator's documentation.


Finally, there are two possible ways to run your training script on several nodes:

1. With the [`gaudi_spawn.py`](https://github.com/huggingface/optimum-habana/blob/main/examples/gaudi_spawn.py) script, you can run the following command:
```bash
python gaudi_spawn.py \
    --hostfile path_to_my_hostfile --use_deepspeed \
    path_to_my_script.py --args1 --args2 ... --argsN \
    --deepspeed path_to_my_deepspeed_config
```
where `--argX` is an argument of the script to run.

2. With the `DistributedRunner`, you can add this code snippet to a script:
```python
from optimum.habana.distributed import DistributedRunner

distributed_runner = DistributedRunner(
    command_list=["path_to_my_script.py --args1 --args2 ... --argsN"],
    hostfile=path_to_my_hostfile,
    use_deepspeed=True,
)

distributed_runner.run()
```
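Both launch methods read the hostfile listing your nodes. Its contents follow DeepSpeed's usual `hostname slots=N` convention, one line per node; a minimal sketch generating one from Python (the hostnames are hypothetical placeholders for your own machines):

```python
# Write a hostfile in DeepSpeed's `hostname slots=N` format.
# `slots` is the number of HPUs to use on that node (hypothetical hostnames).
nodes = {"node-1": 8, "node-2": 8}

with open("hostfile", "w") as f:
    for hostname, slots in nodes.items():
        f.write(f"{hostname} slots={slots}\n")
```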


## Environment Variables

If you need to set environment variables for all nodes, you can specify them in a [`.deepspeed_env`](https://www.deepspeed.ai/getting-started/#multi-node-environment-variables) file which should be located in the local path you are executing from or in your home directory. The format is the following:
```
env_variable_1_name=value
env_variable_2_name=value
...
```
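Such a file can also be generated programmatically; a minimal sketch using placeholder variable names, shown only to illustrate the `name=value` format:

```python
# Write a `.deepspeed_env` file (placeholder names and values; replace with
# the environment variables your nodes actually need).
env_vars = {
    "ENV_VARIABLE_1_NAME": "value",
    "ENV_VARIABLE_2_NAME": "value",
}

with open(".deepspeed_env", "w") as f:
    for name, value in env_vars.items():
        f.write(f"{name}={value}\n")
```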


## Recommendations

- It is strongly recommended to use gradient checkpointing for multi-node runs to get the highest speedups. You can enable it with `--gradient_checkpointing` in [these examples](/examples) or with `gradient_checkpointing=True` in your `GaudiTrainingArguments`.
- Larger batch sizes should lead to higher speedups.
- Multi-node inference is not recommended and can provide inconsistent results.
- On Intel Tiber AI Cloud instances, run your Docker containers with the `--privileged` flag so that EFA devices are visible.


## Example

In this example, we fine-tune a pre-trained GPT2-XL model on the [WikiText dataset](https://huggingface.co/datasets/wikitext).
We are going to use the [causal language modeling example which is given in the Github repository](/examples/language-modeling#gpt-2gpt-and-causal-language-modeling).

The first step consists in training the model on several nodes with this command:
```bash
PT_HPU_LAZY_MODE=1 python ../gaudi_spawn.py \
    --hostfile path_to_hostfile --use_deepspeed run_clm.py \
    --model_name_or_path gpt2-xl \
    --gaudi_config_name Habana/gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --output_dir /tmp/gpt2_xl_multi_node \
    --learning_rate 4e-04 \
    --per_device_train_batch_size 16 \
    --gradient_checkpointing \
    --num_train_epochs 1 \
    --use_habana \
    --use_lazy_mode \
    --throughput_warmup_steps 3 \
    --deepspeed path_to_deepspeed_config
```

Evaluation is not performed in the same command because we do not recommend performing multi-node inference at the moment.

Once the model is trained, we can evaluate it with the following command.
The argument `--model_name_or_path` should be equal to the argument `--output_dir` of the previous command.
```bash
PT_HPU_LAZY_MODE=1 python run_clm.py \
    --model_name_or_path /tmp/gpt2_xl_multi_node \
    --gaudi_config_name Habana/gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_eval \
    --output_dir /tmp/gpt2_xl_multi_node \
    --per_device_eval_batch_size 8 \
    --use_habana \
    --use_lazy_mode
```


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/multi_node_training.mdx" />

### DeepSpeed for HPUs
https://huggingface.co/docs/optimum.habana/v1.19.0/usage_guides/deepspeed.md

# DeepSpeed for HPUs

[DeepSpeed](https://www.deepspeed.ai/) enables you to fit and train larger models on HPUs thanks to various optimizations described in the [ZeRO paper](https://arxiv.org/abs/1910.02054).
In particular, you can use the following ZeRO configurations, which have been validated to be fully functioning with Gaudi:
- **ZeRO-1**: partitions the optimizer states across processes.
- **ZeRO-2**: partitions the optimizer states + gradients across processes.
- **ZeRO-3**: ZeRO-2 + full model state is partitioned across the processes.

These configurations are fully compatible with Intel Gaudi Mixed Precision and can thus be used to train your model in *bf16* precision.
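As a rough guide to what each stage saves, here is a back-of-envelope sketch of per-device memory for model states, following the mixed-precision Adam accounting from the ZeRO paper (2 bytes for half-precision weights, 2 for gradients, 12 for optimizer states per parameter). This is illustrative only; real memory usage also includes activations, buffers, and fragmentation:

```python
# Per-device bytes for model states under each ZeRO stage, using the ZeRO
# paper's accounting: 2 B (weights) + 2 B (gradients) + 12 B (optimizer
# states) per parameter. Illustrative only.
def model_state_bytes_per_device(num_params: float, num_devices: int, stage: int) -> float:
    weights, grads, optim = 2 * num_params, 2 * num_params, 12 * num_params
    if stage == 0:  # no partitioning (plain data parallelism)
        return weights + grads + optim
    if stage == 1:  # ZeRO-1: optimizer states partitioned
        return weights + grads + optim / num_devices
    if stage == 2:  # ZeRO-2: + gradients partitioned
        return weights + grads / num_devices + optim / num_devices
    if stage == 3:  # ZeRO-3: + weights partitioned
        return (weights + grads + optim) / num_devices
    raise ValueError(f"unknown ZeRO stage: {stage}")

# Example: 7B parameters on 8 devices
for stage in range(4):
    gib = model_state_bytes_per_device(7e9, 8, stage) / 2**30
    print(f"stage {stage}: {gib:.1f} GiB per device")
```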

You can find more information about DeepSpeed Gaudi integration [here](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/DeepSpeed_User_Guide/DeepSpeed_User_Guide.html#deepspeed-user-guide).


## Setup

To use DeepSpeed on Gaudi, you need to install Optimum for Intel Gaudi and [DeepSpeed fork for Intel Gaudi](https://github.com/HabanaAI/DeepSpeed) with:
```bash
pip install optimum[habana]
pip install git+https://github.com/HabanaAI/DeepSpeed.git@1.22.0
```


## Using DeepSpeed with Optimum for Intel Gaudi

The `GaudiTrainer` allows using DeepSpeed as easily as the [Transformers Trainer](https://huggingface.co/docs/transformers/main_classes/trainer).
This can be done in 3 steps:
1. A DeepSpeed configuration has to be defined.
2. The `deepspeed` training argument is used to specify the path to the DeepSpeed configuration.
3. The `deepspeed` launcher must be used to run your script.

These steps are detailed below.
A comprehensive guide about how to use DeepSpeed with the Transformers Trainer is also available [here](https://huggingface.co/docs/transformers/main_classes/deepspeed).


### DeepSpeed configuration

The DeepSpeed configuration to use is passed through a JSON file and enables you to choose the optimizations to apply.
Here is an example for applying ZeRO-2 optimizations and *bf16* precision:
```json
{
    "steps_per_print": 64,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {
        "enabled": true
    },
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": false,
        "reduce_scatter": false,
        "contiguous_gradients": false
    }
}
```
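The same configuration can also be built in Python and written to the JSON file that you later pass to the trainer; a minimal sketch (the `ds_config.json` file name is an arbitrary choice):

```python
import json

# The ZeRO-2 + bf16 configuration above, built as a Python dict and
# serialized to the JSON file passed via the `deepspeed` argument.
ds_config = {
    "steps_per_print": 64,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": True},
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": False,
        "reduce_scatter": False,
        "contiguous_gradients": False,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=4)
```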

<Tip>

The special value `"auto"` automatically derives the correct or most efficient value.
You can also specify the values yourself but, if you do so, you should be careful not to set values that conflict with your training arguments.
It is strongly advised to read [this section](https://huggingface.co/docs/transformers/main_classes/deepspeed#shared-configuration) in the Transformers documentation to completely understand how this works.

</Tip>

Other examples of configurations for HPUs are proposed [here](https://github.com/HabanaAI/Model-References/tree/1.22.0/PyTorch/nlp/DeepSpeedExamples/deepspeed-bert/scripts) by Intel.

The [Transformers documentation](https://huggingface.co/docs/transformers/main_classes/deepspeed#configuration) explains how to write a configuration from scratch very well.
A more complete description of all configuration possibilities is available [here](https://www.deepspeed.ai/docs/config-json/).


### The `deepspeed` training argument

To use DeepSpeed, you must specify `deepspeed=path_to_my_deepspeed_configuration` in your `GaudiTrainingArguments` instance:
```python
training_args = GaudiTrainingArguments(
    # my usual training arguments...
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name=path_to_my_gaudi_config,
    deepspeed=path_to_my_deepspeed_config,
)
```

This argument both indicates that DeepSpeed should be used and points to your DeepSpeed configuration.


### Launching your script

Finally, there are two possible ways to launch your script:

1. Using the [gaudi_spawn.py](https://github.com/huggingface/optimum-habana/blob/main/examples/gaudi_spawn.py) script:

```bash
python gaudi_spawn.py \
    --world_size number_of_hpu_you_have --use_deepspeed \
    path_to_script.py --args1 --args2 ... --argsN \
    --deepspeed path_to_deepspeed_config
```
where `--argX` is an argument of the script to run with DeepSpeed.

2. Using the `DistributedRunner` directly in code:

```python
from optimum.habana.distributed import DistributedRunner
from optimum.utils import logging

world_size=8 # Number of HPUs to use (1 or 8)

# define distributed runner
distributed_runner = DistributedRunner(
    command_list=["scripts/train.py --args1 --args2 ... --argsN --deepspeed path_to_deepspeed_config"],
    world_size=world_size,
    use_deepspeed=True,
)

# start job
ret_code = distributed_runner.run()
```

<Tip warning={true}>

You should set `"use_fused_adam": false` in your Gaudi configuration because it is not compatible with DeepSpeed yet.

</Tip>


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/usage_guides/deepspeed.mdx" />

### TGI on Gaudi
https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/tgi.md

# TGI on Gaudi

Text Generation Inference (TGI) on Intel® Gaudi® AI Accelerator is supported via the [Intel® Gaudi® TGI repository](https://github.com/huggingface/tgi-gaudi).
You can start a TGI service on a Gaudi system simply by [pulling a TGI Gaudi Docker image](https://github.com/huggingface/tgi-gaudi/pkgs/container/tgi-gaudi) and launching a local service instance.

For example, a TGI service for the *Llama 2 7B* model can be started on Gaudi with:
```bash
docker run \
  -p 8080:80 \
  -v $PWD/data:/data \
  --runtime=habana \
  -e HABANA_VISIBLE_DEVICES=all \
  -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
  --cap-add=sys_nice \
  --ipc=host ghcr.io/huggingface/tgi-gaudi:2.0.1 \
  --model-id meta-llama/Llama-2-7b-hf \
  --max-input-tokens 1024 \
  --max-total-tokens 2048
```

You can then send a simple request:
```bash
curl 127.0.0.1:8080/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
  -H 'Content-Type: application/json'
```
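The same request can be sent from Python using only the standard library; a minimal sketch of a hypothetical helper, assuming the service above is listening on `127.0.0.1:8080`:

```python
import json
import urllib.request

def query_tgi(prompt, max_new_tokens=32, url="http://127.0.0.1:8080/generate"):
    """Send a generation request to a running TGI service and return the parsed JSON response."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires the Docker service above to be running):
# print(query_tgi("What is Deep Learning?"))
```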

To run a static benchmark test, please refer to
[TGI's benchmark tool](https://github.com/huggingface/text-generation-inference/tree/main/benchmark).
More examples of running service instances on single- or multi-HPU systems are available
[here](https://github.com/huggingface/tgi-gaudi?tab=readme-ov-file#running-tgi-on-gaudi).


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/tutorials/tgi.mdx" />

### Single-HPU Training
https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/single_hpu.md

# Single-HPU Training

Training on a single device is as simple as in Transformers:
- You need to replace the Transformers' [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) class with the [`GaudiTrainer`](https://huggingface.co/docs/optimum/habana/package_reference/trainer) class,
- You need to replace the Transformers' [`TrainingArguments`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) class with the [GaudiTrainingArguments](/docs/optimum.habana/v1.19.0/en/package_reference/trainer#optimum.habana.GaudiTrainingArguments) class and add the following arguments:
    - `use_habana` to execute your script on an HPU,
    - `use_lazy_mode` to use lazy mode (recommended) or not (i.e. eager mode),
    - `gaudi_config_name` to give the name of (Hub) or the path to (local) your Gaudi configuration file.

To go further, we invite you to read our guides about [accelerating training](../usage_guides/accelerate_training) and [pretraining](../usage_guides/pretraining).


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/tutorials/single_hpu.mdx" />

### Run Inference
https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/inference.md

# Run Inference

This section shows how to run inference-only workloads on Intel Gaudi accelerator.

A good way to get started is to review the
[examples in the Optimum for Intel Gaudi repository](/examples).
While the examples folder includes both training and inference, the inference-specific content
provides valuable guidance for optimizing and running workloads on Intel Gaudi accelerators.

For more advanced information about how to speed up inference, check out [this guide](../usage_guides/accelerate_inference).


## With GaudiTrainer

Below is a template for performing inference with a `GaudiTrainer` instance, where we compute the accuracy over a given dataset:

```python
import evaluate
import numpy as np

metric = evaluate.load("accuracy")

# You can define your own compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with
# `predictions` and `label_ids` fields) and must return a dictionary mapping strings to floats.
def my_compute_metrics(p):
    return metric.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids)

# Trainer initialization
trainer = GaudiTrainer(
    model=my_model,
    gaudi_config=my_gaudi_config,
    args=my_args,
    train_dataset=None,
    eval_dataset=eval_dataset,
    compute_metrics=my_compute_metrics,
    tokenizer=my_tokenizer,
    data_collator=my_data_collator,
)

# Run inference
metrics = trainer.evaluate()
```

The variable `my_args` should contain some inference-specific arguments. You can take a look [here](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments.set_evaluate) to see which arguments can be interesting to set for inference.


## In our Examples

All [our examples](/examples) contain instructions for running inference with a given model on a given dataset.
The approach is the same for every example: run the example script with `--do_eval` and `--per_device_eval_batch_size`, and without `--do_train`.
A simple template is the following:
```bash
PT_HPU_LAZY_MODE=1 python path_to_the_example_script \
  --model_name_or_path my_model_name \
  --gaudi_config_name my_gaudi_config_name \
  --dataset_name my_dataset_name \
  --do_eval \
  --per_device_eval_batch_size my_batch_size \
  --output_dir path_to_my_output_dir \
  --use_habana \
  --use_lazy_mode \
  --use_hpu_graphs_for_inference
```


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/tutorials/inference.mdx" />

### Stable Diffusion
https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/stable_diffusion.md

# Stable Diffusion

Stable Diffusion is a text-to-image latent diffusion model.
Check out this [blog post](https://huggingface.co/blog/stable_diffusion) for more information.


## How to generate images?

To generate images with Stable Diffusion on Gaudi, you need to instantiate two components:
- A pipeline with `GaudiStableDiffusionPipeline`. This pipeline supports *text-to-image generation*.
- A scheduler with `GaudiDDIMScheduler`. This scheduler has been optimized for Gaudi.

When initializing the pipeline, you have to specify `use_habana=True` to deploy it on HPUs.
Furthermore, to get the fastest possible generations you should enable **HPU graphs** with `use_hpu_graphs=True`.
Finally, you will need to specify a [Gaudi configuration](../package_reference/gaudi_config) which can be downloaded from the Hugging Face Hub.

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "CompVis/stable-diffusion-v1-4"

scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion",
)
```

You can then call the pipeline to generate images from one or several prompts:
```python
outputs = pipeline(
    prompt=["High quality photo of an astronaut riding a horse in space", "Face of a yellow cat, high resolution, sitting on a park bench"],
    num_images_per_prompt=10,
    batch_size=4,
    output_type="pil",
)
```

Generated images can be returned as either PIL images or NumPy arrays, depending on the `output_type` option.
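With the default `output_type="pil"`, `outputs.images` is a list of PIL images that can be saved directly. A minimal sketch, using blank stand-in images so it runs without the pipeline (with the pipeline above, you would iterate over `outputs.images` the same way):

```python
from PIL import Image

# Stand-ins for `outputs.images` (blank 512x512 RGB images for illustration).
images = [Image.new("RGB", (512, 512)) for _ in range(3)]

# Save each generated image to disk with an indexed file name.
for i, image in enumerate(images):
    image.save(f"image_{i}.png")
```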

<Tip>

Check out the [example](/examples/stable-diffusion) provided in the official Github repository.

</Tip>


## Stable Diffusion 2

[Stable Diffusion 2](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_2) can be used with the exact same classes.
Here is an example:

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "stabilityai/stable-diffusion-2-1"

scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion-2",
)

outputs = pipeline(
    ["An image of a squirrel in Picasso style"],
    num_images_per_prompt=10,
    batch_size=2,
    height=768,
    width=768,
)
```

<Tip>

There are two different checkpoints for Stable Diffusion 2:

- use [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) for generating 768x768 images
- use [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) for generating 512x512 images

</Tip>


## Super-resolution

The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4.

See [here](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/upscale) for more information.

### How to upscale low resolution images?

To upscale low-resolution images with Stable Diffusion on Gaudi, you need to instantiate two components:
- A pipeline with `GaudiStableDiffusionUpscalePipeline`.
- A scheduler with `GaudiDDIMScheduler`. This scheduler has been optimized for Gaudi.

When initializing the pipeline, you have to specify `use_habana=True` to deploy it on HPUs.
Furthermore, to get the fastest possible generations you should enable **HPU graphs** with `use_hpu_graphs=True`.
Finally, you will need to specify a [Gaudi configuration](../package_reference/gaudi_config) which can be downloaded from the Hugging Face Hub.

```python
import requests
from io import BytesIO
from optimum.habana.diffusers import (
    GaudiDDIMScheduler,
    GaudiStableDiffusionUpscalePipeline,
)
from optimum.habana.utils import set_seed
from PIL import Image

set_seed(42)

model_name_upscale = "stabilityai/stable-diffusion-x4-upscaler"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name_upscale, subfolder="scheduler")
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
response = requests.get(url)
low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
low_res_img = low_res_img.resize((128, 128))
low_res_img.save("low_res_cat.png")
prompt = "a white cat"

pipeline = GaudiStableDiffusionUpscalePipeline.from_pretrained(
    model_name_upscale,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion",
)
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
upscaled_image.save("upsampled_cat.png")
```


## Tips

To accelerate your Stable Diffusion pipeline, you can run it in full *bfloat16* precision.
This will also save memory.
You just need to pass `torch_dtype=torch.bfloat16` to `from_pretrained` when instantiating your pipeline.
Here is how to do it:

```python
import torch

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion",
    torch_dtype=torch.bfloat16
)
```


## Textual Inversion Fine-Tuning

[Textual Inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text-to-image models like Stable Diffusion on your own images using just 3-5 examples.

You can find [here](https://github.com/huggingface/optimum-habana/blob/main/examples/stable-diffusion/textual_inversion.py) an example script that implements this training method.


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/tutorials/stable_diffusion.mdx" />

### Overview
https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/overview.md

# Overview

Welcome to the 🤗 Optimum for Intel Gaudi tutorials!
They will help you to get started quickly on the following topics:
- How to [train a model on a single device](./single_hpu)
- How to [train a model on several devices](./distributed)
- How to [run inference with your model](./inference)
- How to [generate images from text with Stable Diffusion](./stable_diffusion)
- How to [run TGI service on Gaudi](./tgi)


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/tutorials/overview.mdx" />

### Distributed training with Optimum for Intel Gaudi
https://huggingface.co/docs/optimum.habana/v1.19.0/tutorials/distributed.md

# Distributed training with Optimum for Intel Gaudi

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude.

All the [PyTorch examples](/examples) and the `GaudiTrainer` script work out of the box with distributed training.
There are two ways of launching them:

1. Using the [gaudi_spawn.py](https://github.com/huggingface/optimum-habana/blob/main/examples/gaudi_spawn.py) script:

   - Use MPI for distributed training:

     ```bash
     python gaudi_spawn.py \
         --world_size number_of_hpu_you_have --use_mpi \
         path_to_script.py --args1 --args2 ... --argsN
     ```

     where `--argX` is an argument of the script to run in a distributed way.
     Examples are given for question answering [here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/README.md#multi-card-training) and text classification [here](/examples/text-classification#multi-card-training).

   - Use DeepSpeed for distributed training:

     ```bash
     python gaudi_spawn.py \
         --world_size number_of_hpu_you_have --use_deepspeed \
         path_to_script.py --args1 --args2 ... --argsN
     ```

     where `--argX` is an argument of the script to run in a distributed way.
     Examples are given for question answering [here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/README.md#using-deepspeed) and text classification [here](/examples/text-classification#using-deepspeed).

2. Using the `DistributedRunner` directly in code:

   ```python
   from optimum.habana.distributed import DistributedRunner
   from optimum.utils import logging

   world_size=8 # Number of HPUs to use (1 or 8)

   # define distributed runner
   distributed_runner = DistributedRunner(
       command_list=["scripts/train.py --args1 --args2 ... --argsN"],
       world_size=world_size,
       use_mpi=True,
   )

   # start job
   ret_code = distributed_runner.run()
   ```

<Tip>

You can set the training argument `--distribution_strategy fast_ddp` for simpler and usually faster distributed training management. More information [here](../usage_guides/accelerate_training#fast-ddp).

</Tip>

To go further, we invite you to read our guides about:
- [Accelerating training](../usage_guides/accelerate_training)
- [Pretraining](../usage_guides/pretraining)
- [DeepSpeed](../usage_guides/deepspeed) to train bigger models
- [Multi-node training](../usage_guides/multi_node_training) to speed up your distributed runs even more


<EditOnGithub source="https://github.com/huggingface/optimum-habana/blob/main/docs/source/tutorials/distributed.mdx" />
