# Optimum

## Docs

- [🤗 Optimum Nvidia](https://huggingface.co/docs/optimum/main/nvidia_overview.md)
- [Installation](https://huggingface.co/docs/optimum/main/installation.md)
- [Quick tour](https://huggingface.co/docs/optimum/main/quicktour.md)
- [🤗 Optimum notebooks](https://huggingface.co/docs/optimum/main/notebooks.md)
- [🤗 Optimum](https://huggingface.co/docs/optimum/main/index.md)
- [🤗 Optimum Furiosa](https://huggingface.co/docs/optimum/main/furiosa_overview.md)
- [Quantization](https://huggingface.co/docs/optimum/main/llm_quantization/usage_guides/quantization.md)
- [Normalized Configurations](https://huggingface.co/docs/optimum/main/utils/normalized_config.md)
- [Dummy Input Generators](https://huggingface.co/docs/optimum/main/utils/dummy_input_generators.md)
- [Overview](https://huggingface.co/docs/optimum/main/torch_fx/overview.md)
- [Optimization](https://huggingface.co/docs/optimum/main/torch_fx/usage_guides/optimization.md)
- [Optimization](https://huggingface.co/docs/optimum/main/torch_fx/package_reference/optimization.md)
- [Symbolic tracer](https://huggingface.co/docs/optimum/main/torch_fx/concept_guides/symbolic_tracer.md)
- [Overview](https://huggingface.co/docs/optimum/main/exporters/overview.md)
- [The Tasks Manager](https://huggingface.co/docs/optimum/main/exporters/task_manager.md)
- [Quantization](https://huggingface.co/docs/optimum/main/concept_guides/quantization.md)

### 🤗 Optimum Nvidia
https://huggingface.co/docs/optimum/main/nvidia_overview.md

# 🤗 Optimum Nvidia

Find more information about 🤗 Optimum Nvidia [here](https://github.com/huggingface/optimum-nvidia).

### Installation
https://huggingface.co/docs/optimum/main/installation.md

# Installation

🤗 Optimum can be installed using `pip` as follows:

```bash
python -m pip install optimum
```

If you'd like to use the accelerator-specific features of 🤗 Optimum, you can install the required dependencies according to the table below:

| Accelerator                                                                                                            | Installation                                                      |
|:-----------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------|
| [ONNX Runtime](https://huggingface.co/docs/optimum/onnxruntime/overview)                                               | `pip install --upgrade --upgrade-strategy eager optimum[onnxruntime]`       |
| [Intel Neural Compressor](https://huggingface.co/docs/optimum/intel/index)                                             | `pip install --upgrade --upgrade-strategy eager optimum[neural-compressor]` |
| [OpenVINO](https://huggingface.co/docs/optimum/intel/index)                                                            | `pip install --upgrade --upgrade-strategy eager optimum[openvino]`          |
| [IPEX](https://huggingface.co/docs/optimum/intel/index)                                                                | `pip install --upgrade --upgrade-strategy eager optimum[ipex]`              |
| [NVIDIA TensorRT-LLM](https://huggingface.co/docs/optimum/main/en/nvidia_overview)                                     | `docker run -it --gpus all --ipc host huggingface/optimum-nvidia`           |
| [AMD Instinct GPUs and Ryzen AI NPU](https://huggingface.co/docs/optimum/amd/index)                                    | `pip install --upgrade --upgrade-strategy eager optimum[amd]`               |
| [AWS Trainium & Inferentia](https://huggingface.co/docs/optimum-neuron/index)                                          | `pip install --upgrade --upgrade-strategy eager optimum[neuronx]`           |
| [Habana Gaudi Processor (HPU)](https://huggingface.co/docs/optimum/habana/index)                                       | `pip install --upgrade --upgrade-strategy eager optimum[habana]`            |
| [FuriosaAI](https://huggingface.co/docs/optimum/furiosa/index)                                                         | `pip install --upgrade --upgrade-strategy eager optimum[furiosa]`           |

The `--upgrade --upgrade-strategy eager` option is needed to ensure the different packages are upgraded to the latest possible version.

If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you can install the base library from source as follows:

```bash
python -m pip install git+https://github.com/huggingface/optimum.git
```

For the accelerator-specific features, you can install them by appending `optimum[accelerator_type]` to the `pip` command, e.g.

```bash
python -m pip install "optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git"
```

### Quick tour
https://huggingface.co/docs/optimum/main/quicktour.md

# Quick tour

This quick tour is intended for developers who are ready to dive into the code and see examples of how to integrate 🤗 Optimum into their model training and inference workflows.

## Accelerated inference

#### OpenVINO

To load a model and run inference with OpenVINO Runtime, you can just replace your `AutoModelForXxx` class with the corresponding `OVModelForXxx` class.
If you want to load a PyTorch checkpoint, set `export=True` to convert your model to the OpenVINO IR (Intermediate Representation).

```diff
- from transformers import AutoModelForSequenceClassification
+ from optimum.intel.openvino import OVModelForSequenceClassification
  from transformers import AutoTokenizer, pipeline

  # Download a tokenizer and model from the Hub and convert to OpenVINO format
  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)

  # Run inference!
  classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
  results = classifier("He's a dreadful magician.")
```

You can find more examples in the [documentation](https://huggingface.co/docs/optimum/intel/inference) and in the [examples](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino).


#### ONNX Runtime

To accelerate inference with ONNX Runtime, 🤗 Optimum uses _configuration objects_ to define parameters for graph optimization and quantization. These objects are then used to instantiate dedicated _optimizers_ and _quantizers_.

Before applying quantization or optimization, first we need to load our model. To load a model and run inference with ONNX Runtime, you can just replace the canonical Transformers [`AutoModelForXxx`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModel) class with the corresponding [`ORTModelForXxx`](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort#optimum.onnxruntime.ORTModel) class. If you want to load from a PyTorch checkpoint, set `export=True` to export your model to the ONNX format.

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_directory = "tmp/onnx/"

>>> # Load a model from transformers and export it to ONNX
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)

>>> # Save the ONNX model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
```

Let's now see how we can apply dynamic quantization with ONNX Runtime:

```python
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig
>>> from optimum.onnxruntime import ORTQuantizer

>>> # Define the quantization methodology
>>> qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)

>>> # Apply dynamic quantization on the model
>>> quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)
```

In this example, we've quantized a model from the Hugging Face Hub; in the same manner, we can quantize a model hosted locally by providing the path to the directory containing the model weights. Applying the `quantize()` method produces a `model_quantized.onnx` file that can be used to run inference. Here's an example of how to load an ONNX Runtime model and generate predictions with it:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import pipeline, AutoTokenizer

>>> model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_quantized.onnx")
>>> tokenizer = AutoTokenizer.from_pretrained(save_directory)
>>> classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
>>> results = classifier("I love burritos!")
```

You can find more examples in the [documentation](https://huggingface.co/docs/optimum/onnxruntime/quickstart) and in the [examples](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime).


## Accelerated training

#### Habana

To train transformers on Habana's Gaudi processors, 🤗 Optimum provides a `GaudiTrainer` that is very similar to the 🤗 Transformers [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer). Here is a simple example:

```diff
- from transformers import Trainer, TrainingArguments
+ from optimum.habana import GaudiTrainer, GaudiTrainingArguments

  # Download a pretrained model from the Hub
  model = AutoModelForXxx.from_pretrained("bert-base-uncased")

  # Define the training arguments
- training_args = TrainingArguments(
+ training_args = GaudiTrainingArguments(
      output_dir="path/to/save/folder/",
+     use_habana=True,
+     use_lazy_mode=True,
+     gaudi_config_name="Habana/bert-base-uncased",
      ...
  )

  # Initialize the trainer
- trainer = Trainer(
+ trainer = GaudiTrainer(
      model=model,
      args=training_args,
      train_dataset=train_dataset,
      ...
  )

  # Use Habana Gaudi processor for training!
  trainer.train()
```

You can find more examples in the [documentation](https://huggingface.co/docs/optimum/habana/quickstart) and in the [examples](https://github.com/huggingface/optimum-habana/tree/main/examples).

## Out of the box ONNX export

The Optimum library handles the ONNX export of Transformers and Diffusers models out of the box!

Exporting a model to ONNX is as simple as

```bash
optimum-cli export onnx --model gpt2 gpt2_onnx/
```

Check out the help for more options:

```bash
optimum-cli export onnx --help
```

Check out the [documentation](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model) for more.

## `torch.fx` integration

Optimum integrates with `torch.fx`, providing one-liner access to several graph transformations. We aim to support better management of [quantization](https://huggingface.co/docs/optimum/concept_guides/quantization) through `torch.fx`, both for quantization-aware training (QAT) and post-training quantization (PTQ).

Check out the [documentation](https://huggingface.co/docs/optimum/torch_fx/usage_guides/optimization) and [reference](https://huggingface.co/docs/optimum/torch_fx/package_reference/optimization) for more!
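To give a flavor of what the `torch.fx` workflow looks like, here is a minimal sketch using plain `torch.fx` (not an Optimum-specific transformation): a small module is symbolically traced into a `GraphModule`, whose graph can then be inspected and rewritten before recompilation.

```python
import torch
from torch import nn


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = TinyModel()

# Symbolically trace the module into a GraphModule.
traced = torch.fx.symbolic_trace(model)

# The traced module is functionally equivalent to the original one.
x = torch.randn(2, 4)
assert torch.allclose(model(x), traced(x))

# Each node in the graph is a candidate for a transformation pass.
print([node.op for node in traced.graph.nodes])
# → ['placeholder', 'call_module', 'call_function', 'output']
```

Optimum's `torch.fx` transformations operate on graphs like this one, rewriting nodes and recompiling the module in a single call.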

### 🤗 Optimum notebooks
https://huggingface.co/docs/optimum/main/notebooks.md

# 🤗 Optimum notebooks

Here is a list of the notebooks associated with each accelerator in 🤗 Optimum.

## Optimum Habana

| Notebook                                                                                                                                                                               | Description                                                                                                                                                                       |  Colab                                                                                                                                                                                                          |        Studio Lab                                                                                                                                                                                                   |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [How to use DeepSpeed to train models with billions of parameters on Habana Gaudi](https://github.com/huggingface/optimum-habana/blob/main/notebooks/AI_HW_Summit_2022.ipynb) | Show how to use DeepSpeed to pre-train/fine-tune the 1.6B-parameter GPT2-XL for causal language modeling on Habana Gaudi. |  [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-habana/blob/main/notebooks/AI_HW_Summit_2022.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-habana/blob/main/notebooks/AI_HW_Summit_2022.ipynb) |

## Optimum Intel

### OpenVINO

| Notebook                                                                                                                                                                               | Description                                                                                                                                                                       |                                     Colab                                                                                                                                                                                                          |        Studio Lab                                                                                                                                                                                                   |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [How to run inference with OpenVINO](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/optimum_openvino_inference.ipynb) | Explains how to export your model to OpenVINO and run inference with OpenVINO Runtime on various tasks| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-intel/blob/main/notebooks/openvino/optimum_openvino_inference.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-intel/blob/main/notebooks/openvino/optimum_openvino_inference.ipynb)|
| [How to quantize a question answering model with NNCF](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/question_answering_quantization.ipynb) | Show how to apply post-training quantization on a question answering model using [NNCF](https://github.com/openvinotoolkit/nncf) and to accelerate inference with OpenVINO| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-intel/blob/main/notebooks/openvino/question_answering_quantization.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-intel/blob/main/notebooks/openvino/question_answering_quantization.ipynb)|


### Neural Compressor

| Notebook                                                                                                                                                                               | Description                                                                                                                                                                       |                                     Colab                                                                                                                                                                                                          |        Studio Lab                                                                                                                                                                                                   |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [How to quantize a model with Intel Neural Compressor for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb) | Show how to apply quantization while training your model using Intel [Neural Compressor](https://github.com/intel/neural-compressor) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb) |


## Optimum ONNX Runtime

| Notebook                                                                                                                                                                    | Description                                                                                                                                    |                                                                        Colab                                                                                                                                                                                                          |        Studio Lab                                                                                                                                                                                                   |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [How to quantize a model with ONNX Runtime for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb) | Show how to apply static and dynamic quantization on a model using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb) |
| [How to fine-tune a model for text classification with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)             | Show how to fine-tune a DistilBERT model on GLUE tasks using [ONNX Runtime](https://github.com/microsoft/onnxruntime).                                     | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)          | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb) |
| [How to fine-tune a model for summarization with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)                         | Show how to fine-tune a T5 model on the BBC news corpus.                                                                                       | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)                |                [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb) |
| [How to fine-tune DeBERTa for question-answering with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb)                         | Show how to fine-tune a DeBERTa model on the SQuAD dataset.                                                                                       | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb)                |                [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb) |

### 🤗 Optimum
https://huggingface.co/docs/optimum/main/index.md

# 🤗 Optimum

🤗 Optimum is an extension of [Transformers](https://huggingface.co/docs/transformers) that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.

The AI ecosystem evolves quickly, and more and more specialized hardware platforms, each with its own optimizations, emerge every day.
As such, Optimum enables developers to efficiently use any of these platforms with the same ease inherent to Transformers.

🤗 Optimum is distributed as a collection of packages - check out the links below for an in-depth look at each one.


## Hardware partners

The packages below enable you to get the best of the 🤗 Hugging Face ecosystem on various types of devices.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-4 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://github.com/huggingface/optimum-nvidia"
      ><div class="w-full text-center bg-gradient-to-br from-green-600 to-green-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">NVIDIA</div>
      <p class="text-gray-700">Accelerate inference with NVIDIA TensorRT-LLM on the <span class="underline" onclick="event.preventDefault(); window.open('https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/', '_blank');">NVIDIA platform</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./amd/index"
      ><div class="w-full text-center bg-gradient-to-br from-red-600 to-red-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">AMD</div>
      <p class="text-gray-700">Enable performance optimizations for <span class="underline" onclick="event.preventDefault(); window.open('https://www.amd.com/en/graphics/instinct-server-accelerators', '_blank');">AMD Instinct GPUs</span> and <span class="underline" onclick="event.preventDefault(); window.open('https://ryzenai.docs.amd.com/en/latest/index.html', '_blank');">AMD Ryzen AI NPUs</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./intel/index"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Intel</div>
      <p class="text-gray-700">Optimize your model to speed up inference with <span class="underline" onclick="event.preventDefault(); window.open('https://docs.openvino.ai/latest/index.html', '_blank');">OpenVINO</span>, <span class="underline" onclick="event.preventDefault(); window.open('https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html', '_blank');">Neural Compressor</span> and <span class="underline" onclick="event.preventDefault(); window.open('https://intel.github.io/intel-extension-for-pytorch/index.html', '_blank');">IPEX</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://huggingface.co/docs/optimum-neuron/index"
      ><div class="w-full text-center bg-gradient-to-br from-orange-400 to-orange-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">AWS Trainium/Inferentia</div>
      <p class="text-gray-700">Accelerate your training and inference workflows with <span class="underline" onclick="event.preventDefault(); window.open('https://aws.amazon.com/machine-learning/trainium/', '_blank');">AWS Trainium</span> and <span class="underline" onclick="event.preventDefault(); window.open('https://aws.amazon.com/machine-learning/inferentia/', '_blank');">AWS Inferentia</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://huggingface.co/docs/optimum-tpu/index"
      ><div class="w-full text-center bg-gradient-to-br from-blue-500 to-blue-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Google TPUs</div>
      <p class="text-gray-700">Accelerate your training and inference workflows with <span class="underline" onclick="event.preventDefault(); window.open('https://cloud.google.com/tpu', '_blank');">Google TPUs</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./habana/index"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Habana</div>
      <p class="text-gray-700">Maximize training throughput and efficiency with <span class="underline" onclick="event.preventDefault(); window.open('https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html', '_blank');">Habana's Gaudi processor</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./furiosa/index"
      ><div class="w-full text-center bg-gradient-to-br from-green-400 to-green-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">FuriosaAI</div>
      <p class="text-gray-700">Fast and efficient inference on <span class="underline" onclick="event.preventDefault(); window.open('https://www.furiosa.ai/', '_blank');">FuriosaAI WARBOY</span></p>
    </a>
  </div>
</div>

## Open-source integrations

🤗 Optimum also supports a variety of open-source frameworks to make model optimization very easy.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://huggingface.co/docs/optimum-onnx/index"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">ONNX Runtime</div>
      <p class="text-gray-700">Apply quantization and graph optimization to accelerate Transformers models training and inference with <span class="underline" onclick="event.preventDefault(); window.open('https://onnxruntime.ai/', '_blank');">ONNX Runtime</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://github.com/huggingface/optimum-executorch"
      ><div class="w-full text-center bg-gradient-to-br from-red-500 to-red-600 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">ExecuTorch</div>
      <p class="text-gray-700">PyTorch’s native solution to inference on the Edge via <span class="underline" onclick="event.preventDefault(); window.open('https://pytorch.org/executorch/stable/', '_blank');">ExecuTorch</span></p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./exporters/overview"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Exporters</div>
      <p class="text-gray-700">Export your PyTorch model to different formats such as ONNX</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./torch_fx/overview"
      ><div class="w-full text-center bg-gradient-to-br from-green-400 to-green-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Torch FX</div>
      <p class="text-gray-700">Create and compose custom graph transformations to optimize PyTorch Transformers models with <span class="underline" onclick="event.preventDefault(); window.open('https://pytorch.org/docs/stable/fx.html#', '_blank');">Torch FX</span></p>
    </a>
  </div>
</div>

### 🤗 Optimum Furiosa
https://huggingface.co/docs/optimum/main/furiosa_overview.md

# 🤗 Optimum Furiosa

Find more information about 🤗 Optimum Furiosa [here](https://github.com/huggingface/optimum-furiosa).

### Quantization
https://huggingface.co/docs/optimum/main/llm_quantization/usage_guides/quantization.md

# Quantization

## AutoGPTQ Integration

🤗 Optimum collaborated with the [AutoGPTQ library](https://github.com/PanQiWei/AutoGPTQ) to provide a simple API that applies GPTQ quantization to language models. With GPTQ quantization, you can quantize your favorite language model to 8, 4, 3 or even 2 bits, without a significant drop in performance and with faster inference speed. This is supported on most GPU hardware.
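To build intuition for what "quantizing to 4 bits" means, here is a simplified, self-contained sketch of symmetric round-to-nearest weight quantization. This is NOT the GPTQ algorithm (GPTQ additionally minimizes layer-wise reconstruction error); it only illustrates the integer-grid-plus-scale representation that low-bit quantization relies on.

```python
def quantize_4bit(weights):
    """Symmetric round-to-nearest quantization onto the 4-bit [-8, 7] grid."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 7 if max_abs else 1.0
    # Each weight is stored as a small integer plus one shared float scale.
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.31, -0.12, 0.07, -0.44]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

The storage cost drops from 16 or 32 bits per weight to 4, at the price of the rounding error bounded above; GPTQ's contribution is choosing the quantized values so that this error barely affects model outputs.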

If you want to quantize 🤗 Transformers models with GPTQ, follow this [documentation](https://huggingface.co/docs/transformers/main_classes/quantization).

To learn more about the quantization technique used in GPTQ, please refer to:
- the [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) paper
- the [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) library used as the backend

Note that the AutoGPTQ library provides more advanced options (Triton backend, fused attention, fused MLP) that are not integrated with Optimum. For now, we leverage only the CUDA kernel for GPTQ.

### Requirements

You need to have the following requirements installed to run the code below:

- AutoGPTQ library:
`pip install auto-gptq`

- Optimum library:
`pip install --upgrade optimum`

- Install latest `transformers` library from source:
`pip install --upgrade git+https://github.com/huggingface/transformers.git`

- Install latest `accelerate` library:
`pip install --upgrade accelerate`

### Load and quantize a model

The `GPTQQuantizer` class is used to quantize your model. In order to quantize your model, you need to provide a few arguments:
- the number of bits: `bits`
- the dataset used to calibrate the quantization: `dataset`
- the model sequence length used to process the dataset: `model_seqlen`
- the block name to quantize: `block_name_to_quantize`

With the 🤗 Transformers integration, you don't need to pass `block_name_to_quantize` and `model_seqlen`, as they can be retrieved automatically. However, for a custom model, you need to specify them. Also, make sure that your model is converted to `torch.float16` before quantization.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer, load_quantized_model
import torch
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

quantizer = GPTQQuantizer(bits=4, dataset="c4", block_name_to_quantize="model.decoder.layers", model_seqlen=2048)
quantized_model = quantizer.quantize_model(model, tokenizer)
```

<Tip warning={true}>
GPTQ quantization only works for text models for now. Furthermore, the quantization process can take a long time depending on your hardware (quantizing a 175B model takes about 4 GPU-hours on an NVIDIA A100). Check the Hugging Face Hub first to see whether a GPTQ-quantized version of the model you would like to quantize already exists.
</Tip>

### Save the model

To save your model, use the `save()` method of the `GPTQQuantizer` class. It will create a folder with your model state dict along with the quantization config.
```python
save_folder = "/path/to/save_folder/"
quantizer.save(model, save_folder)
```

### Load quantized weights

You can load your quantized weights with the `load_quantized_model()` function.
Through the Accelerate library, it is possible to load a model faster and with lower memory usage. The model first needs to be initialized with empty weights, with the actual weights loaded as a next step.
```python
from accelerate import init_empty_weights
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
empty_model.tie_weights()
quantized_model = load_quantized_model(empty_model, save_folder=save_folder, device_map="auto")
```

### Exllama kernels for faster inference

With the release of the exllamav2 kernels, you can get faster inference speeds than with the exllama kernels for 4-bit models. They are activated by default (`disable_exllamav2=False` in `load_quantized_model()`). In order to use these kernels, you need to have the entire model on GPUs.

```py
from optimum.gptq import GPTQQuantizer, load_quantized_model
from transformers import AutoModelForCausalLM
import torch

from accelerate import init_empty_weights

with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
empty_model.tie_weights()
quantized_model = load_quantized_model(empty_model, save_folder=save_folder, device_map="auto")
```

If you wish to use the exllama kernels instead, change the version by setting `exllama_config`:

```py
from optimum.gptq import GPTQQuantizer, load_quantized_model
from transformers import AutoModelForCausalLM
import torch

from accelerate import init_empty_weights

with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
empty_model.tie_weights()
quantized_model = load_quantized_model(empty_model, save_folder=save_folder, device_map="auto", exllama_config={"version": 1})
```

Note that only 4-bit models are supported with the exllama/exllamav2 kernels for now. Furthermore, it is recommended to disable the exllama/exllamav2 kernels when you are fine-tuning your model with PEFT.

You can find a benchmark of these kernels [here](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark).

#### Fine-tune a quantized model

With the official support of adapters in the Hugging Face ecosystem, you can fine-tune models that have been quantized with GPTQ.
Please have a look at [`peft`](https://github.com/huggingface/peft) library for more details.

### Normalized Configurations
https://huggingface.co/docs/optimum/main/utils/normalized_config.md

# Normalized Configurations

Model configuration classes in 🤗 Transformers are not standardized. Although Transformers implements an `attribute_map` attribute that mitigates the issue to some extent, it does not make it easy to reason about common configuration attributes in code.
[NormalizedConfig](/docs/optimum/main/en/utils/normalized_config#optimum.utils.NormalizedConfig) classes address this by allowing access to the attributes of the configuration
they wrap in a standardized way.


## Base class[[optimum.utils.NormalizedConfig]]

<Tip>

While it is possible to create `NormalizedConfig` subclasses for common use cases, you can also overwrite
the `original attribute name -> normalized attribute name` mapping directly using the
`with_args()` class method.

</Tip>
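To make the idea concrete, here is a schematic, stdlib-only mimic of what a normalized config does: route standardized attribute names (e.g. `num_layers`) to whatever name the underlying model config actually uses. The class names and mapping below are illustrative, not the real `optimum.utils.NormalizedConfig` implementation.

```python
class ToyNormalizedConfig:
    # standardized name -> attribute name on the wrapped config
    ATTRIBUTE_MAP = {"num_layers": "num_hidden_layers",
                     "num_attention_heads": "num_attention_heads"}

    def __init__(self, config, **overrides):
        self._config = config
        # Overrides play the role of with_args(): per-model renames.
        self._map = {**self.ATTRIBUTE_MAP, **overrides}

    def __getattr__(self, name):
        # Resolve the standardized name, then read it off the wrapped config.
        return getattr(self._config, self._map.get(name, name))

class GPT2LikeConfig:
    # GPT-2 style naming: n_layer / n_head instead of the BERT-style names.
    n_layer = 12
    n_head = 12

normalized = ToyNormalizedConfig(GPT2LikeConfig(),
                                 num_layers="n_layer",
                                 num_attention_heads="n_head")
assert normalized.num_layers == 12 and normalized.num_attention_heads == 12
```

Code downstream can now read `num_layers` regardless of whether the model family calls it `n_layer`, `num_hidden_layers`, or something else.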

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.NormalizedConfig</name><anchor>optimum.utils.NormalizedConfig</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/normalized_config.py#L25</source><parameters>[{"name": "config", "val": ": typing.Union[ForwardRef('PretrainedConfig'), typing.Dict]"}, {"name": "allow_new", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`PretrainedConfig`) --
  The config to normalize.</paramsdesc><paramgroups>0</paramgroups></docstring>

Handles the normalization of `PretrainedConfig` attribute names, allowing to access attributes in a general way.




</div>

## Existing normalized configurations[[optimum.utils.NormalizedTextConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.NormalizedTextConfig</name><anchor>optimum.utils.NormalizedTextConfig</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/normalized_config.py#L87</source><parameters>[{"name": "config", "val": ": typing.Union[ForwardRef('PretrainedConfig'), typing.Dict]"}, {"name": "allow_new", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.NormalizedSeq2SeqConfig</name><anchor>optimum.utils.NormalizedSeq2SeqConfig</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/normalized_config.py#L99</source><parameters>[{"name": "config", "val": ": typing.Union[ForwardRef('PretrainedConfig'), typing.Dict]"}, {"name": "allow_new", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.NormalizedVisionConfig</name><anchor>optimum.utils.NormalizedVisionConfig</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/normalized_config.py#L106</source><parameters>[{"name": "config", "val": ": typing.Union[ForwardRef('PretrainedConfig'), typing.Dict]"}, {"name": "allow_new", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.NormalizedTextAndVisionConfig</name><anchor>optimum.utils.NormalizedTextAndVisionConfig</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/normalized_config.py#L125</source><parameters>[{"name": "config", "val": ": typing.Union[ForwardRef('PretrainedConfig'), typing.Dict]"}, {"name": "allow_new", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

### Dummy Input Generators
https://huggingface.co/docs/optimum/main/utils/dummy_input_generators.md

# Dummy Input Generators

It is very common to have to generate dummy inputs to perform a task (tracing, exporting a model to some backend,
testing model outputs, etc.). The goal of [DummyInputGenerator](/docs/optimum/main/en/utils/dummy_input_generators#optimum.utils.DummyInputGenerator) classes is to make this
generation easy and reusable.
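Conceptually, a dummy input generator just produces random tensors with the right names, shapes, and dtypes for a model. Below is a stdlib-only sketch of that idea for text inputs, using nested lists where a real `DummyTextInputGenerator` would return framework tensors; the function name and defaults are illustrative.

```python
import random

def dummy_text_inputs(batch_size=2, sequence_length=16, vocab_size=100):
    # Random token ids of shape (batch_size, sequence_length).
    input_ids = [[random.randrange(vocab_size) for _ in range(sequence_length)]
                 for _ in range(batch_size)]
    # Right-padded mask: ones followed by zeros, as a tokenizer would emit.
    pad = random.randrange(sequence_length)
    attention_mask = [[1] * (sequence_length - pad) + [0] * pad
                      for _ in range(batch_size)]
    return {"input_ids": input_ids, "attention_mask": attention_mask}

inputs = dummy_text_inputs()
assert len(inputs["input_ids"]) == 2 and len(inputs["input_ids"][0]) == 16
```

The classes below generalize this pattern: each generator declares which input names it supports and knows how to materialize them in the requested framework (`pt`, `tf`, or `np`).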


## Base class[[optimum.utils.DummyInputGenerator]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyInputGenerator</name><anchor>optimum.utils.DummyInputGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L93</source><parameters>[]</parameters></docstring>

Generates dummy inputs for the supported input names, in the requested framework.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>concat_inputs</name><anchor>optimum.utils.DummyInputGenerator.concat_inputs</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L292</source><parameters>[{"name": "inputs", "val": ""}, {"name": "dim", "val": ": int"}]</parameters><paramsdesc>- **inputs** --
  The list of tensors in a given framework to concatenate.
- **dim** (`int`) --
  The dimension along which to concatenate.</paramsdesc><paramgroups>0</paramgroups><retdesc>The tensor of the concatenation.</retdesc></docstring>

Concatenates inputs together.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>constant_tensor</name><anchor>optimum.utils.DummyInputGenerator.constant_tensor</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L245</source><parameters>[{"name": "shape", "val": ": typing.List[int]"}, {"name": "value", "val": ": typing.Union[int, float] = 1"}, {"name": "dtype", "val": ": typing.Optional[typing.Any] = None"}, {"name": "framework", "val": ": str = 'pt'"}]</parameters><paramsdesc>- **shape** (`List[int]`) --
  The shape of the constant tensor.
- **value** (`Union[int, float]`, defaults to 1) --
  The value to fill the constant tensor with.
- **dtype** (`Optional[Any]`, defaults to `None`) --
  The dtype of the constant tensor.
- **framework** (`str`, defaults to `"pt"`) --
  The requested framework.</paramsdesc><paramgroups>0</paramgroups><retdesc>A constant tensor in the requested framework.</retdesc></docstring>

Generates a constant tensor.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate</name><anchor>optimum.utils.DummyInputGenerator.generate</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L114</source><parameters>[{"name": "input_name", "val": ": str"}, {"name": "framework", "val": ": str = 'pt'"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}]</parameters><paramsdesc>- **input_name** (`str`) --
  The name of the input to generate.
- **framework** (`str`, defaults to `"pt"`) --
  The requested framework.
- **int_dtype** (`str`, defaults to `"int64"`) --
  The dtypes of generated integer tensors.
- **float_dtype** (`str`, defaults to `"fp32"`) --
  The dtypes of generated float tensors.</paramsdesc><paramgroups>0</paramgroups><retdesc>A tensor in the requested framework of the input.</retdesc></docstring>

Generates the dummy input matching `input_name` for the requested framework.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pad_input_on_dim</name><anchor>optimum.utils.DummyInputGenerator.pad_input_on_dim</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L317</source><parameters>[{"name": "input_", "val": ""}, {"name": "dim", "val": ": int"}, {"name": "desired_length", "val": ": typing.Optional[int] = None"}, {"name": "padding_length", "val": ": typing.Optional[int] = None"}, {"name": "value", "val": ": typing.Union[int, float] = 1"}, {"name": "dtype", "val": ": typing.Optional[typing.Any] = None"}]</parameters><paramsdesc>- **input_** --
  The tensor to pad.
- **dim** (`int`) --
  The dimension along which to pad.
- **desired_length** (`Optional[int]`, defaults to `None`) --
  The desired length along the dimension after padding.
- **padding_length** (`Optional[int]`, defaults to `None`) --
  The length to pad along the dimension.
- **value** (`Union[int, float]`, defaults to 1) --
  The value to use for padding.
- **dtype** (`Optional[Any]`, defaults to `None`) --
  The dtype of the padding.</paramsdesc><paramgroups>0</paramgroups><retdesc>The padded tensor.</retdesc></docstring>

Pads an input either to the desired length, or by a padding length.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>random_float_tensor</name><anchor>optimum.utils.DummyInputGenerator.random_float_tensor</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L213</source><parameters>[{"name": "shape", "val": ": typing.List[int]"}, {"name": "min_value", "val": ": float = 0"}, {"name": "max_value", "val": ": float = 1"}, {"name": "framework", "val": ": str = 'pt'"}, {"name": "dtype", "val": ": str = 'fp32'"}]</parameters><paramsdesc>- **shape** (`List[int]`) --
  The shape of the random tensor.
- **min_value** (`float`, defaults to 0) --
  The minimum value allowed.
- **max_value** (`float`, defaults to 1) --
  The maximum value allowed.
- **framework** (`str`, defaults to `"pt"`) --
  The requested framework.
- **dtype** (`str`, defaults to `"fp32"`) --
  The dtype of the generated float tensor. Could be "fp32", "fp16", "bf16".</paramsdesc><paramgroups>0</paramgroups><retdesc>A random tensor in the requested framework.</retdesc></docstring>

Generates a tensor of random floats in the [min_value, max_value) range.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>random_int_tensor</name><anchor>optimum.utils.DummyInputGenerator.random_int_tensor</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L134</source><parameters>[{"name": "shape", "val": ": typing.List[int]"}, {"name": "max_value", "val": ": int"}, {"name": "min_value", "val": ": int = 0"}, {"name": "framework", "val": ": str = 'pt'"}, {"name": "dtype", "val": ": str = 'int64'"}]</parameters><paramsdesc>- **shape** (`List[int]`) --
  The shape of the random tensor.
- **max_value** (`int`) --
  The maximum value allowed.
- **min_value** (`int`, defaults to 0) --
  The minimum value allowed.
- **framework** (`str`, defaults to `"pt"`) --
  The requested framework.
- **dtype** (`str`, defaults to `"int64"`) --
  The dtype of the generated integer tensor. Could be "int64", "int32", "int8".</paramsdesc><paramgroups>0</paramgroups><retdesc>A random tensor in the requested framework.</retdesc></docstring>

Generates a tensor of random integers in the [min_value, max_value) range.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>random_mask_tensor</name><anchor>optimum.utils.DummyInputGenerator.random_mask_tensor</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L166</source><parameters>[{"name": "shape", "val": ": typing.List[int]"}, {"name": "padding_side", "val": ": str = 'right'"}, {"name": "framework", "val": ": str = 'pt'"}, {"name": "dtype", "val": ": str = 'int64'"}]</parameters><paramsdesc>- **shape** (`List[int]`) --
  The shape of the random tensor.
- **padding_side** (`str`, defaults to "right") --
  The side on which the padding is applied.
- **framework** (`str`, defaults to `"pt"`) --
  The requested framework.
- **dtype** (`str`, defaults to `"int64"`) --
  The dtype of the generated integer tensor. Could be "int64", "int32", "int8".</paramsdesc><paramgroups>0</paramgroups><retdesc>A random mask tensor either left padded or right padded in the requested framework.</retdesc></docstring>

Generates a mask tensor either right or left padded.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>supports_input</name><anchor>optimum.utils.DummyInputGenerator.supports_input</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L100</source><parameters>[{"name": "input_name", "val": ": str"}]</parameters><paramsdesc>- **input_name** (`str`) --
  The name of the input to generate.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>A boolean specifying whether the input is supported.</retdesc></docstring>

Checks whether the `DummyInputGenerator` supports the generation of the requested input.








</div></div>

## Existing dummy input generators[[optimum.utils.DummyTextInputGenerator]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyTextInputGenerator</name><anchor>optimum.utils.DummyTextInputGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L363</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": NormalizedTextConfig"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "sequence_length", "val": ": int = 16"}, {"name": "num_choices", "val": ": int = 4"}, {"name": "random_batch_size_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_sequence_length_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_num_choices_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "padding_side", "val": ": str = 'right'"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Generates dummy encoder text inputs.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyDecoderTextInputGenerator</name><anchor>optimum.utils.DummyDecoderTextInputGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L519</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": NormalizedTextConfig"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "sequence_length", "val": ": int = 16"}, {"name": "num_choices", "val": ": int = 4"}, {"name": "random_batch_size_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_sequence_length_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_num_choices_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "padding_side", "val": ": str = 'right'"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Generates dummy decoder text inputs.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyPastKeyValuesGenerator</name><anchor>optimum.utils.DummyPastKeyValuesGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L620</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": NormalizedTextConfig"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "sequence_length", "val": ": int = 16"}, {"name": "random_batch_size_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_sequence_length_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Generates dummy past_key_values inputs.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummySeq2SeqPastKeyValuesGenerator</name><anchor>optimum.utils.DummySeq2SeqPastKeyValuesGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L667</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": typing.Union[optimum.utils.normalized_config.NormalizedSeq2SeqConfig, optimum.utils.normalized_config.NormalizedEncoderDecoderConfig]"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "sequence_length", "val": ": int = 16"}, {"name": "encoder_sequence_length", "val": ": typing.Optional[int] = None"}, {"name": "random_batch_size_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_sequence_length_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Generates dummy past_key_values inputs for seq2seq architectures.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyBboxInputGenerator</name><anchor>optimum.utils.DummyBboxInputGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L755</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": NormalizedConfig"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "sequence_length", "val": ": int = 16"}, {"name": "random_batch_size_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "random_sequence_length_range", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Generates dummy bbox inputs.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyVisionInputGenerator</name><anchor>optimum.utils.DummyVisionInputGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L795</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": NormalizedVisionConfig"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "num_channels", "val": ": int = 3"}, {"name": "width", "val": ": int = 64"}, {"name": "height", "val": ": int = 64"}, {"name": "visual_seq_length", "val": ": int = 16"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Generates dummy vision inputs.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.utils.DummyAudioInputGenerator</name><anchor>optimum.utils.DummyAudioInputGenerator</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L883</source><parameters>[{"name": "task", "val": ": str"}, {"name": "normalized_config", "val": ": NormalizedConfig"}, {"name": "batch_size", "val": ": int = 2"}, {"name": "feature_size", "val": ": int = 80"}, {"name": "nb_max_frames", "val": ": int = 3000"}, {"name": "audio_sequence_length", "val": ": int = 16000"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

### Overview
https://huggingface.co/docs/optimum/main/torch_fx/overview.md

# Overview

🤗 Optimum provides an integration with Torch FX, a library for PyTorch that allows developers to implement custom transformations of their models that can be optimized for performance.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/optimization"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum to solve real-world problems.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/symbolic_tracer"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">High-level explanations for building a better understanding about important topics such as quantization and graph optimization.</p>
   </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/optimization"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how the Torch FX classes and methods of 🤗 Optimum work.</p>
    </a>
  </div>
</div>

### Optimization
https://huggingface.co/docs/optimum/main/torch_fx/usage_guides/optimization.md

# Optimization

The `optimum.fx.optimization` module provides a set of `torch.fx` graph transformations, along with classes and functions to write your own transformations and compose them.

## The transformation guide

In 🤗 Optimum, there are two kinds of transformations: reversible and non-reversible transformations.


### Write a non-reversible transformation

The most basic kind of transformation is the non-reversible transformation. Such a transformation cannot be reversed: after applying it to a graph module, there is no way to get the original model back. Implementing one in 🤗 Optimum is easy: subclass [Transformation](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.Transformation) and implement the [transform()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.Transformation.transform) method.

For instance, the following transformation changes all the multiplications to additions:

```python
>>> import operator
>>> from optimum.fx.optimization import Transformation

>>> class ChangeMulToAdd(Transformation):
...     def transform(self, graph_module):
...         for node in graph_module.graph.nodes:
...             if node.op == "call_function" and node.target == operator.mul:
...                 node.target = operator.add
...         return graph_module
```

After implementing it, your transformation can be used as a regular function:

```python
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace

>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
...     model,
...     input_names=["input_ids", "attention_mask", "token_type_ids"],
... )

>>> transformation = ChangeMulToAdd()
>>> transformed_model = transformation(traced)
```

### Write a reversible transformation

A reversible transformation implements both the transformation and its reverse, allowing you to retrieve the original model from the transformed one. To implement such a transformation, you need to subclass [ReversibleTransformation](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.ReversibleTransformation) and implement the [transform()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.Transformation.transform) and [reverse()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.ReversibleTransformation.reverse) methods.

For instance, the following transformation is reversible:

```python
>>> import operator
>>> from optimum.fx.optimization import ReversibleTransformation

>>> class MulToMulTimesTwo(ReversibleTransformation):
...     def transform(self, graph_module):
...         for node in graph_module.graph.nodes:
...             if node.op == "call_function" and node.target == operator.mul:
...                 x, y = node.args
...                 node.args = (2 * x, y)
...         return graph_module
...
...     def reverse(self, graph_module):
...         for node in graph_module.graph.nodes:
...             if node.op == "call_function" and node.target == operator.mul:
...                 x, y = node.args
...                 node.args = (x / 2, y)
...         return graph_module
```
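The contract here is that `reverse()` undoes exactly what `transform()` did. This can be checked schematically with the graph reduced to a plain list of `(op, args)` records standing in for a real `torch.fx.GraphModule` (a deliberate simplification, not the actual FX node API):

```python
def transform(nodes):
    # mul(x, y) -> mul(2 * x, y), mirroring MulToMulTimesTwo.transform
    return [("mul", (2 * x, y)) if op == "mul" else (op, (x, y))
            for op, (x, y) in nodes]

def reverse(nodes):
    # mul(x, y) -> mul(x / 2, y), mirroring MulToMulTimesTwo.reverse
    return [("mul", (x / 2, y)) if op == "mul" else (op, (x, y))
            for op, (x, y) in nodes]

graph = [("mul", (3.0, 4.0)), ("add", (1.0, 2.0))]
# Applying the transformation and then its reverse recovers the original.
assert reverse(transform(graph)) == graph
```

On real graph modules the same round-trip property lets 🤗 Optimum restore the untransformed model on demand.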

### Composing transformations together

Since applying multiple transformations in a chain is needed more often than not, [compose()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.compose) is provided. It is a utility function that allows you to create a transformation by chaining multiple other transformations.

```python
>>> from optimum.fx.optimization import compose
>>> composition = compose(MulToMulTimesTwo(), ChangeMulToAdd())
```

### Optimization
https://huggingface.co/docs/optimum/main/torch_fx/package_reference/optimization.md

# Optimization

## Transformation[[optimum.fx.optimization.Transformation]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.Transformation</name><anchor>optimum.fx.optimization.Transformation</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L105</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

A torch.fx graph transformation.

It must implement the [transform()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.Transformation.transform) method, and be used as a
callable.






<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.fx.optimization.Transformation.__call__</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L128</source><parameters>[{"name": "graph_module", "val": ": GraphModule"}, {"name": "lint_and_recompile", "val": ": bool = True"}]</parameters><paramsdesc>- **graph_module** (`torch.fx.GraphModule`) --
  The module to transform.
- **lint_and_recompile** (`bool`, defaults to `True`) --
  Whether the transformed module should be linted and recompiled.
  This can be set to `False` when chaining transformations together to perform this operation only once.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.fx.GraphModule`</rettype><retdesc>The transformed module.</retdesc></docstring>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_transformed_nodes</name><anchor>optimum.fx.optimization.Transformation.get_transformed_nodes</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L181</source><parameters>[{"name": "graph_module", "val": ": GraphModule"}]</parameters><paramsdesc>- **graph_module** (`torch.fx.GraphModule`) --
  The graph_module to get the nodes from.</paramsdesc><paramgroups>0</paramgroups><rettype>`List[torch.fx.Node]`</rettype><retdesc>Gives the list of nodes that were transformed by the transformation.</retdesc></docstring>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>mark_as_transformed</name><anchor>optimum.fx.optimization.Transformation.mark_as_transformed</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L157</source><parameters>[{"name": "node", "val": ": Node"}]</parameters><paramsdesc>- **node** (`torch.fx.Node`) --
  The node to mark as transformed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Marks a node as transformed by this transformation.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>transform</name><anchor>optimum.fx.optimization.Transformation.transform</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L115</source><parameters>[{"name": "graph_module", "val": ": GraphModule"}]</parameters><paramsdesc>- **graph_module** (`torch.fx.GraphModule`) --
  The module to transform.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.fx.GraphModule`</rettype><retdesc>The transformed module.</retdesc></docstring>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>transformed</name><anchor>optimum.fx.optimization.Transformation.transformed</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L169</source><parameters>[{"name": "node", "val": ": Node"}]</parameters><paramsdesc>- **node** (`torch.fx.Node`) --
  The node to check.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>Specifies whether the node was transformed by this transformation or not.</retdesc></docstring>








</div></div>

## Reversible transformation[[optimum.fx.optimization.ReversibleTransformation]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.ReversibleTransformation</name><anchor>optimum.fx.optimization.ReversibleTransformation</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L196</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

A torch.fx graph transformation that is reversible.

It must implement the [transform()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.Transformation.transform) and
[reverse()](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.ReversibleTransformation.reverse) methods, and be used as a callable.






<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.fx.optimization.ReversibleTransformation.__call__</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L217</source><parameters>[{"name": "graph_module", "val": ": GraphModule"}, {"name": "lint_and_recompile", "val": ": bool = True"}, {"name": "reverse", "val": ": bool = False"}]</parameters><paramsdesc>- **graph_module** (`torch.fx.GraphModule`) --
  The module to transform.
- **lint_and_recompile** (`bool`, defaults to `True`) --
  Whether the transformed module should be linted and recompiled.
  This can be set to `False` when chaining transformations together to perform this operation only once.
- **reverse** (`bool`, defaults to `False`) --
  If `True`, the reverse transformation is performed.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.fx.GraphModule`</rettype><retdesc>The transformed module.</retdesc></docstring>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>mark_as_restored</name><anchor>optimum.fx.optimization.ReversibleTransformation.mark_as_restored</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L242</source><parameters>[{"name": "node", "val": ": Node"}]</parameters><paramsdesc>- **node** (`torch.fx.Node`) --
  The node to mark as restored.</paramsdesc><paramgroups>0</paramgroups></docstring>

Marks a node as restored back to its original state.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reverse</name><anchor>optimum.fx.optimization.ReversibleTransformation.reverse</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L204</source><parameters>[{"name": "graph_module", "val": ": GraphModule"}]</parameters><paramsdesc>- **graph_module** (`torch.fx.GraphModule`) --
  The module to transform.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.fx.GraphModule`</rettype><retdesc>The reverse transformed module.</retdesc></docstring>








</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.fx.optimization.compose</name><anchor>optimum.fx.optimization.compose</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L741</source><parameters>[{"name": "*args", "val": ": Transformation"}, {"name": "inplace", "val": ": bool = True"}]</parameters><paramsdesc>- **args** ([Transformation](/docs/optimum/main/en/torch_fx/package_reference/optimization#optimum.fx.optimization.Transformation)) --
  The transformations to compose together.
- **inplace** (`bool`, defaults to `True`) --
  Whether the resulting transformation should be inplace, or create a new graph module.</paramsdesc><paramgroups>0</paramgroups><retdesc>The composition transformation object.</retdesc></docstring>

Composes a list of transformations together.





<ExampleCodeBlock anchor="optimum.fx.optimization.compose.example">

Example:

```python
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import ChangeTrueDivToMulByInverse, MergeLinears, compose

>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
...     model,
...     input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> composition = compose(ChangeTrueDivToMulByInverse(), MergeLinears())
>>> transformed_model = composition(traced)
```

</ExampleCodeBlock>


</div>

### Transformations[[optimum.fx.optimization.MergeLinears]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.MergeLinears</name><anchor>optimum.fx.optimization.MergeLinears</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L257</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Transformation that merges linear layers that take the same input into one big linear layer.




<ExampleCodeBlock anchor="optimum.fx.optimization.MergeLinears.example">

Example:

```python
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import MergeLinears

>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
...     model,
...     input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = MergeLinears()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.FuseBiasInLinear</name><anchor>optimum.fx.optimization.FuseBiasInLinear</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L413</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Transformation that fuses the bias into the weight in `torch.nn.Linear`.




<ExampleCodeBlock anchor="optimum.fx.optimization.FuseBiasInLinear.example">

Example:

```python
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import FuseBiasInLinear

>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
...     model,
...     input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = FuseBiasInLinear()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.ChangeTrueDivToMulByInverse</name><anchor>optimum.fx.optimization.ChangeTrueDivToMulByInverse</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L467</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Transformation that changes `truediv` nodes into multiplication-by-the-inverse nodes when the denominator is static.
For example, this is sometimes the case for the scaling factor in attention layers.




<ExampleCodeBlock anchor="optimum.fx.optimization.ChangeTrueDivToMulByInverse.example">

Example:

```python
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import ChangeTrueDivToMulByInverse

>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
...     model,
...     input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = ChangeTrueDivToMulByInverse()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.FuseBatchNorm2dInConv2d</name><anchor>optimum.fx.optimization.FuseBatchNorm2dInConv2d</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L498</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Transformation that fuses `nn.BatchNorm2d` following `nn.Conv2d` into a single `nn.Conv2d`.
The fusion is performed only if the convolution has the batch normalization as its sole following node.

For example, fusion will not be performed in the following case:
<ExampleCodeBlock anchor="optimum.fx.optimization.FuseBatchNorm2dInConv2d.example">

```
     Conv2d
     /   \
    /     \
ReLU   BatchNorm2d
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.fx.optimization.FuseBatchNorm2dInConv2d.example-2">

Example:
```python
>>> from transformers.utils.fx import symbolic_trace
>>> from transformers import AutoModelForImageClassification

>>> from optimum.fx.optimization import FuseBatchNorm2dInConv2d

>>> model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
>>> model.eval()
>>> traced_model = symbolic_trace(
...     model,
...     input_names=["pixel_values"],
...     disable_check=True
... )

>>> transformation = FuseBatchNorm2dInConv2d()
>>> transformed_model = transformation(traced_model)
```

</ExampleCodeBlock>




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.fx.optimization.FuseBatchNorm1dInLinear</name><anchor>optimum.fx.optimization.FuseBatchNorm1dInLinear</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/fx/optimization/transformations.py#L581</source><parameters>[]</parameters><paramsdesc>- **preserves_computation** (`bool`, defaults to `False`) --
  Whether the transformation preserves the graph computation or not. If `True`, the original and the
  transformed graph should produce the same outputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Transformation that fuses `nn.BatchNorm1d` following or preceding `nn.Linear` into a single `nn.Linear`.
The fusion is performed only if the linear layer has the batch normalization as its sole following node, or the batch
normalization has the linear layer as its sole following node.

For example, fusion will not be performed in the following case:
<ExampleCodeBlock anchor="optimum.fx.optimization.FuseBatchNorm1dInLinear.example">

```
     Linear
     /   \
    /     \
ReLU   BatchNorm1d
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.fx.optimization.FuseBatchNorm1dInLinear.example-2">

Example:
```python
>>> from transformers.utils.fx import symbolic_trace
>>> from transformers import AutoModel

>>> from optimum.fx.optimization import FuseBatchNorm1dInLinear

>>> model = AutoModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model.eval()
>>> traced_model = symbolic_trace(
...     model,
...     input_names=["input_ids", "attention_mask", "pixel_values"],
...     disable_check=True
... )

>>> transformation = FuseBatchNorm1dInLinear()
>>> transformed_model = transformation(traced_model)
```

</ExampleCodeBlock>




</div>

### Symbolic tracer
https://huggingface.co/docs/optimum/main/torch_fx/concept_guides/symbolic_tracer.md

# Symbolic tracer

In Torch FX, the symbolic tracer feeds dummy values through the code to record the underlying operations.
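
For instance, with plain `torch.fx` (the tiny module below is made up for illustration), tracing produces a graph of the recorded operations:

```python
import torch


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        # The tracer records these operations symbolically, without running on real data.
        return torch.relu(self.linear(x))


traced = torch.fx.symbolic_trace(TinyModel())
print(traced.graph)  # the linear and relu calls, recorded as an fx graph
```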

### Overview
https://huggingface.co/docs/optimum/main/exporters/overview.md

# Overview

🤗 Optimum enables exporting models from PyTorch to different formats through its `exporters` module. For now, three export formats are supported: ONNX (optimum-onnx), OpenVINO (optimum-intel), and Neuron (optimum-neuron).

### The Tasks Manager
https://huggingface.co/docs/optimum/main/exporters/task_manager.md

# The Tasks Manager

Exporting a model from one framework to some format (also called a backend here) requires specifying the input and output information that the export function needs. `optimum.exporters` is structured as follows for each backend:
- Configuration classes containing the information for each model to perform the export.
- Exporting functions using the proper configuration for the model to export.

The role of the [TasksManager](/docs/optimum/main/en/exporters/task_manager#optimum.exporters.tasks.TasksManager) is to be the main entry point to load a model given a name and a task, and to get the proper configuration for a given (architecture, backend) pair.
That way, there is a centralized place to register the `task -> model class` and `(architecture, backend) -> configuration` mappings. The export functions can rely on it, and on the various checks it provides.

## Task names

The supported tasks may depend on the backend, but here are the mappings between a task name and the corresponding auto class for PyTorch.

<Tip>

You can find out which tasks are supported for a model on a given backend as follows:

```python
>>> from optimum.exporters.tasks import TasksManager

>>> model_type = "distilbert"
>>> # For instance, for the ONNX export.
>>> backend = "onnx"
>>> distilbert_tasks = list(TasksManager.get_supported_tasks_for_model_type(model_type, backend).keys())

>>> print(distilbert_tasks)
['default', 'fill-mask', 'text-classification', 'multiple-choice', 'token-classification', 'question-answering']
```

</Tip>

### PyTorch

#### Transformers

| Task                             | Auto Class                                                    |
|----------------------------------|---------------------------------------------------------------|
| `audio-classification`           | `AutoModelForAudioClassification`                             |
| `audio-frame-classification`     | `AutoModelForAudioFrameClassification`                        |
| `audio-xvector`                  | `AutoModelForAudioXVector`                                    |
| `automatic-speech-recognition`   | `AutoModelForSpeechSeq2Seq`, `AutoModelForCTC`                |
| `depth-estimation`               | `AutoModelForDepthEstimation`                                 |
| `feature-extraction`             | `AutoModel`                                                   |
| `fill-mask`                      | `AutoModelForMaskedLM`                                        |
| `image-classification`           | `AutoModelForImageClassification`                             |
| `image-to-image`                 | `AutoModelForImageToImage`                                    |
| `image-to-text`                  | `AutoModelForVision2Seq`, `AutoModel`                         |
| `image-text-to-text`             | `AutoModelForImageTextToText`                                 |
| `mask-generation`                | `AutoModel`                                                   |
| `masked-im`                      | `AutoModelForMaskedImageModeling`                             |
| `multiple-choice`                | `AutoModelForMultipleChoice`                                  |
| `object-detection`               | `AutoModelForObjectDetection`                                 |
| `question-answering`             | `AutoModelForQuestionAnswering`                               |
| `reinforcement-learning`         | `AutoModel`                                                   |
| `semantic-segmentation`          | `AutoModelForSemanticSegmentation`                            |
| `text-to-audio`                  | `AutoModelForTextToSpectrogram`, `AutoModelForTextToWaveform` |
| `text-generation`                | `AutoModelForCausalLM`                                        |
| `text2text-generation`           | `AutoModelForSeq2SeqLM`                                       |
| `text-classification`            | `AutoModelForSequenceClassification`                          |
| `token-classification`           | `AutoModelForTokenClassification`                             |
| `visual-question-answering`      | `AutoModelForVisualQuestionAnswering`                         |
| `zero-shot-image-classification` | `AutoModelForZeroShotImageClassification`                     |
| `zero-shot-object-detection`     | `AutoModelForZeroShotObjectDetection`                         |

#### Diffusers

| Task             | Auto Class                   |
|------------------|------------------------------|
| `text-to-image`  | `AutoPipelineForText2Image`  |
| `image-to-image` | `AutoPipelineForImage2Image` |
| `inpainting`     | `AutoPipelineForInpainting`  |

#### Sentence Transformers

| Task                  | Auto Class            |
|-----------------------|-----------------------|
| `feature-extraction`  | `SentenceTransformer` |
| `sentence-similarity` | `SentenceTransformer` |

## Reference[[optimum.exporters.tasks.TasksManager]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.tasks.TasksManager</name><anchor>optimum.exporters.tasks.TasksManager</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L118</source><parameters>[]</parameters></docstring>

Handles the `task name -> model class` and `architecture -> configuration` mappings.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_register</name><anchor>optimum.exporters.tasks.TasksManager.create_register</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L287</source><parameters>[{"name": "backend", "val": ": str"}, {"name": "overwrite_existing", "val": ": bool = False"}]</parameters><paramsdesc>- **backend** (`str`) --
  The name of the backend that the register function will handle.
- **overwrite_existing** (`bool`, defaults to `False`) --
  Whether or not the register function is allowed to overwrite an already existing config.</paramsdesc><paramgroups>0</paramgroups><rettype>`Callable[[str, Tuple[str, ...]], Callable[[Type], Type]]`</rettype><retdesc>A decorator taking the model type and the
supported tasks.</retdesc></docstring>

Creates a register function for the specified backend.







<ExampleCodeBlock anchor="optimum.exporters.tasks.TasksManager.create_register.example">

Example:
```python
>>> register_for_new_backend = create_register("new-backend")

>>> @register_for_new_backend("bert", "text-classification", "token-classification")
... class BertNewBackendConfig(NewBackendConfig):
...     pass
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>determine_framework</name><anchor>optimum.exporters.tasks.TasksManager.determine_framework</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L579</source><parameters>[{"name": "model_name_or_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "subfolder", "val": ": str = ''"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "cache_dir", "val": ": str = '/home/runner/.cache/huggingface/hub'"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}]</parameters><paramsdesc>- **model_name_or_path** (`Union[str, Path]`) --
  Can be either the model id of a model repo on the Hugging Face Hub, or a path to a local directory
  containing a model.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  In case the model files are located inside a subfolder of the model directory / repo on the Hugging
  Face Hub, you can specify the subfolder name here.
- **revision** (`Optional[str]`,  defaults to `None`) --
  Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
- **cache_dir** (`Optional[str]`, *optional*) --
  Path to a directory in which the downloaded pretrained model weights have been cached, if the standard cache should not be used.
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>The framework to use for the export.</retdesc></docstring>

Determines the framework to use for the export.

The priority is in the following order:
1. User input via `framework`.
2. If a local checkpoint is provided, use the same framework as the checkpoint.
3. If a model repo is provided, try to infer the framework from the cache if available, else from the Hub.
4. If the framework could not be inferred, use the framework available in the environment, with priority given to PyTorch.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_all_tasks</name><anchor>optimum.exporters.tasks.TasksManager.get_all_tasks</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L1059</source><parameters>[]</parameters><rettype>`List`</rettype><retdesc>all the possible tasks.</retdesc></docstring>

Retrieves all the possible tasks.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_exporter_config_constructor</name><anchor>optimum.exporters.tasks.TasksManager.get_exporter_config_constructor</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L1232</source><parameters>[{"name": "exporter", "val": ": str"}, {"name": "model", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "model_type", "val": ": typing.Optional[str] = None"}, {"name": "model_name", "val": ": typing.Optional[str] = None"}, {"name": "exporter_config_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **exporter** (`str`) --
  The exporter to use.
- **model** (`Optional[PreTrainedModel]`, defaults to `None`) --
  The instance of the model.
- **task** (`str`, defaults to `"feature-extraction"`) --
  The task to retrieve the config for.
- **model_type** (`Optional[str]`, defaults to `None`) --
  The model type to retrieve the config for.
- **model_name** (`Optional[str]`, defaults to `None`) --
  The name attribute of the model object, only used for the exception message.
- **exporter_config_kwargs** (`Optional[Dict[str, Any]]`, defaults to `None`) --
  Arguments that will be passed to the exporter config class when building the config constructor.
- **library_name** (`Optional[str]`, defaults to `None`) --
  The library name of the model. Can be any of "transformers", "timm", "diffusers", "sentence_transformers".</paramsdesc><paramgroups>0</paramgroups><rettype>`ExportConfigConstructor`</rettype><retdesc>The `ExporterConfig` constructor for the requested backend.</retdesc></docstring>

Gets the `ExportConfigConstructor` for a model (or alternatively for a model type) and task combination.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_class_for_task</name><anchor>optimum.exporters.tasks.TasksManager.get_model_class_for_task</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L445</source><parameters>[{"name": "task", "val": ": str"}, {"name": "framework", "val": ": str = 'pt'"}, {"name": "model_type", "val": ": typing.Optional[str] = None"}, {"name": "model_class_name", "val": ": typing.Optional[str] = None"}, {"name": "library", "val": ": str = 'transformers'"}]</parameters><paramsdesc>- **task** (`str`) --
  The task required.
- **framework** (`str`, defaults to `"pt"`) --
  The framework to use for the export.
- **model_type** (`Optional[str]`, defaults to `None`) --
  The model type to retrieve the model class for. Some architectures need a custom class to be loaded,
  and cannot be loaded from an auto class.
- **model_class_name** (`Optional[str]`, defaults to `None`) --
  A model class name, allowing to override the default class that would be detected for the task. This
  parameter is useful for example for "automatic-speech-recognition", that may map to
  AutoModelForSpeechSeq2Seq or to AutoModelForCTC.
- **library** (`str`, defaults to `transformers`) --
  The library name of the model. Can be any of "transformers", "timm", "diffusers", "sentence_transformers".</paramsdesc><paramgroups>0</paramgroups><retdesc>The AutoModel class corresponding to the task.</retdesc></docstring>

Attempts to retrieve an AutoModel class from a task name.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_from_task</name><anchor>optimum.exporters.tasks.TasksManager.get_model_from_task</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L1079</source><parameters>[{"name": "task", "val": ": str"}, {"name": "model_name_or_path", "val": ": typing.Union[str, pathlib.Path]"}, {"name": "subfolder", "val": ": str = ''"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "cache_dir", "val": ": str = '/home/runner/.cache/huggingface/hub'"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "framework", "val": ": typing.Optional[str] = None"}, {"name": "torch_dtype", "val": ": typing.Optional[ForwardRef('torch.dtype')] = None"}, {"name": "device", "val": ": typing.Union[ForwardRef('torch.device'), str, NoneType] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}, {"name": "**model_kwargs", "val": ""}]</parameters><paramsdesc>- **task** (`str`) --
  The task required.
- **model_name_or_path** (`Union[str, Path]`) --
  Can be either the model id of a model repo on the Hugging Face Hub, or a path to a local directory
  containing a model.
- **subfolder** (`str`, defaults to `""`) --
  In case the model files are located inside a subfolder of the model directory / repo on the Hugging
  Face Hub, you can specify the subfolder name here.
- **revision** (`Optional[str]`, *optional*) --
  Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
- **cache_dir** (`Optional[str]`, *optional*) --
  Path to a directory in which the downloaded pretrained model weights have been cached, if the standard cache should not be used.
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).
- **framework** (`Optional[str]`, *optional*) --
  The framework to use for the export. See `TasksManager.determine_framework` for the priority used when
  none is provided.
- **torch_dtype** (`Optional[torch.dtype]`, defaults to `None`) --
  Data type to load the model on. PyTorch-only argument.
- **device** (`Optional[torch.device]`, defaults to `None`) --
  Device to initialize the model on. PyTorch-only argument. For PyTorch, defaults to "cpu".
- **library_name** (`Optional[str]`, defaults to `None`) --
  The library name of the model. Can be any of "transformers", "timm", "diffusers", "sentence_transformers". See `TasksManager.infer_library_from_model` for the priority used when
  none is provided.
- **model_kwargs** (`Dict[str, Any]`, *optional*) --
  Keyword arguments to pass to the model `.from_pretrained()` method.</paramsdesc><paramgroups>0</paramgroups><retdesc>The instance of the model.</retdesc></docstring>

Retrieves a model from its name and the task to be enabled.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_supported_model_type_for_task</name><anchor>optimum.exporters.tasks.TasksManager.get_supported_model_type_for_task</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L403</source><parameters>[{"name": "task", "val": ": str"}, {"name": "exporter", "val": ": str"}]</parameters></docstring>

Returns the list of supported architectures by the exporter for a given task. Transformers-specific.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_supported_tasks_for_model_type</name><anchor>optimum.exporters.tasks.TasksManager.get_supported_tasks_for_model_type</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L342</source><parameters>[{"name": "model_type", "val": ": str"}, {"name": "exporter", "val": ": str"}, {"name": "model_name", "val": ": typing.Optional[str] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model_type** (`str`) --
  The model type to retrieve the supported tasks for.
- **exporter** (`str`) --
  The name of the exporter.
- **model_name** (`Optional[str]`, defaults to `None`) --
  The name attribute of the model object, only used for the exception message.
- **library_name** (`Optional[str]`, defaults to `None`) --
  The library name of the model. Can be any of "transformers", "timm", "diffusers", "sentence_transformers".</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, ExportConfigConstructor]`</rettype><retdesc>The mapping between the supported tasks and the backend config
constructors for the specified model type.</retdesc></docstring>

Retrieves the `task -> exporter backend config constructors` map from the model type.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>infer_library_from_model</name><anchor>optimum.exporters.tasks.TasksManager.infer_library_from_model</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L962</source><parameters>[{"name": "model", "val": ": typing.Union[str, ForwardRef('PreTrainedModel'), ForwardRef('DiffusionPipeline'), typing.Type]"}, {"name": "subfolder", "val": ": str = ''"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "cache_dir", "val": ": str = '/home/runner/.cache/huggingface/hub'"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}]</parameters><paramsdesc>- **model** (`Union[str, PreTrainedModel, DiffusionPipeline, Type]`) --
  The model to infer the task from. This can either be the name of a repo on the HuggingFace Hub, an
  instance of a model, or a model class.
- **subfolder** (`str`, defaults to `""`) --
  In case the model files are located inside a subfolder of the model directory / repo on the Hugging
  Face Hub, you can specify the subfolder name here.
- **revision** (`Optional[str]`, *optional*, defaults to `None`) --
  Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
- **cache_dir** (`Optional[str]`, *optional*) --
  Path to the directory in which downloaded pretrained model weights are cached if the standard cache should not be used.
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).</paramsdesc><paramgroups>0</paramgroups><rettype>`str`</rettype><retdesc>The library name automatically detected from the model repo, model instance, or model class.</retdesc></docstring>

Infers the library from the model repo, model instance, or model class.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>infer_task_from_model</name><anchor>optimum.exporters.tasks.TasksManager.infer_task_from_model</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L803</source><parameters>[{"name": "model", "val": ": typing.Union[str, ForwardRef('PreTrainedModel'), ForwardRef('DiffusionPipeline'), typing.Type]"}, {"name": "subfolder", "val": ": str = ''"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "cache_dir", "val": ": str = '/home/runner/.cache/huggingface/hub'"}, {"name": "token", "val": ": typing.Union[bool, str, NoneType] = None"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`Union[str, PreTrainedModel, DiffusionPipeline, Type]`) --
  The model to infer the task from. This can either be the name of a repo on the HuggingFace Hub, an
  instance of a model, or a model class.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  In case the model files are located inside a subfolder of the model directory / repo on the Hugging
  Face Hub, you can specify the subfolder name here.
- **revision** (`Optional[str]`, defaults to `None`) --
  Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
- **cache_dir** (`Optional[str]`, *optional*) --
  Path to the directory in which downloaded pretrained model weights are cached if the standard cache should not be used.
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).
- **library_name** (`Optional[str]`, defaults to `None`) --
  The library name of the model. Can be any of "transformers", "timm", "diffusers", "sentence_transformers". See `TasksManager.infer_library_from_model` for the priority used when
  none is provided.

Infers the task from the model repo, model instance, or model class.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>standardize_model_attributes</name><anchor>optimum.exporters.tasks.TasksManager.standardize_model_attributes</anchor><source>https://github.com/huggingface/optimum/blob/main/optimum/exporters/tasks.py#L1008</source><parameters>[{"name": "model", "val": ": typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('DiffusionPipeline')]"}, {"name": "library_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **model** (`Union[PreTrainedModel, DiffusionPipeline]`) --
  The instance of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Updates the model for export. This function makes the required changes so that models from different libraries follow
the Transformers style.




</div></div>

### Quantization
https://huggingface.co/docs/optimum/main/concept_guides/quantization.md

# Quantization

Quantization is a technique to reduce the computational and memory costs of running inference by representing the
weights and activations with low-precision data types like 8-bit integer (`int8`) instead of the usual 32-bit floating
point (`float32`).

Reducing the number of bits means the resulting model requires less memory storage, consumes less energy (in theory), and
operations like matrix multiplication can be performed much faster with integer arithmetic. It also makes it possible to
run models on embedded devices, which sometimes only support integer data types.


## Theory

The basic idea behind quantization is quite easy: going from high-precision representation (usually the regular 32-bit
floating-point) for weights and activations to a lower precision data type. The most common lower precision data types
are:

- `float16`, accumulation data type `float16`
- `bfloat16`, accumulation data type `float32`
- `int16`, accumulation data type `int32`
- `int8`, accumulation data type `int32`

The accumulation data type specifies the type of the result of accumulating (adding, multiplying, etc.) values of the
data type in question. For example, let's consider two `int8` values `A = 127` and `B = 127`, and let's define `C` as
their sum:

```
C = A + B
```

Here the result is much bigger than the biggest representable value in `int8`, which is `127`. Hence the need for a larger
precision data type to avoid a huge precision loss that would make the whole quantization process useless.
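A quick NumPy sketch of why accumulation needs the wider type:

```python
import numpy as np

a = np.array([127], dtype=np.int8)
b = np.array([127], dtype=np.int8)

# Accumulating in int8 wraps around: 127 + 127 = 254 does not fit in [-128, 127]
wrapped = a + b                                    # -2, silent overflow

# Casting to the int32 accumulation type keeps the exact result
exact = a.astype(np.int32) + b.astype(np.int32)    # 254
```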

## Quantization

The two most common quantization cases are `float32 -> float16` and `float32 -> int8`.

### Quantization to `float16`

Performing quantization to go from `float32` to `float16` is quite straightforward since both data types follow the same
representation scheme. The questions to ask yourself when quantizing an operation to `float16` are:

- Does my operation have a `float16` implementation?
- Does my hardware support `float16`? For instance, Intel CPUs [have been supporting `float16` as a storage type, but
computation is done after converting to `float32`](https://scicomp.stackexchange.com/a/35193). Full support will come
in Cooper Lake and Sapphire Rapids.
- Is my operation sensitive to lower precision?
For instance, the value of epsilon in `LayerNorm` is usually very small (~ `1e-12`), but the smallest representable value in
`float16` is ~ `6e-5`, which can cause `NaN` issues. The same applies to very large values.
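You can check these limits directly with NumPy:

```python
import numpy as np

# A typical LayerNorm epsilon underflows to zero in float16
print(np.float16(1e-12))            # 0.0

# Smallest positive normal float16 value, ~6.1e-05
print(np.finfo(np.float16).tiny)

# Values above float16's maximum (65504) overflow to infinity
print(np.float16(70000.0))          # inf
```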

### Quantization to `int8`

Performing quantization to go from `float32` to `int8` is trickier. Only 256 values can be represented in `int8`,
while `float32` can represent a very wide range of values. The idea is to find the best way to project our range `[a, b]`
of `float32` values to the `int8` space.

Let's consider a float `x` in `[a, b]`, then we can write the following quantization scheme, also called the *affine
quantization scheme*:

```
x = S * (x_q - Z)
```

where:

- `x_q` is the quantized `int8` value associated with `x`
- `S` and `Z` are the quantization parameters
  - `S` is the scale, a positive `float32`
  - `Z` is called the zero-point: it is the `int8` value corresponding to the value `0` in the `float32` realm. Being
  able to represent the value `0` exactly is important because it is used everywhere throughout machine learning
  models.


The quantized value `x_q` of `x` in `[a, b]` can be computed as follows:

```
x_q = round(x/S + Z)
```

And `float32` values outside of the `[a, b]` range are clipped to the closest representable value, so for any
floating-point number `x`:

```
x_q = clip(round(x/S + Z), round(a/S + Z), round(b/S + Z))

```
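The scheme above can be sketched in a few lines of NumPy; the helper names are ours for illustration, not an Optimum API:

```python
import numpy as np

def affine_quantize(x, a, b):
    """Map float32 values in [a, b] to int8 with the affine scheme x = S * (x_q - Z)."""
    S = (b - a) / 255.0               # scale: float range covered per int8 step
    Z = round(-128 - a / S)           # zero-point: the int8 value representing 0.0
    x_q = np.clip(np.round(x / S + Z), -128, 127).astype(np.int8)
    return x_q, S, Z

def dequantize(x_q, S, Z):
    return S * (x_q.astype(np.float32) - Z)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
x_q, S, Z = affine_quantize(x, a=-1.0, b=1.0)
x_hat = dequantize(x_q, S, Z)         # close to x, within one step S
```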

<Tip>

Usually `round(a/S + Z)` corresponds to the smallest representable value in the considered data type, and `round(b/S + Z)`
to the biggest one. But this can vary, for instance when using a *symmetric quantization scheme* as you will see in the next
paragraph.

</Tip>

### Symmetric and affine quantization schemes

The equation above is called the *affine quantization scheme* because the mapping from `[a, b]` to `int8` is an affine one.

A common special case of this scheme is the *symmetric quantization scheme*, where we consider a symmetric range of float values `[-a, a]`.
In this case the integer space is usually `[-127, 127]`, meaning that `-128` is excluded from the regular `[-128, 127]` signed `int8` range.
The reason is that a symmetric range makes it possible to have `Z = 0`. While one of the 256 representable values is
lost, this can provide a speedup since many addition operations can be skipped.
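A symmetric variant, under the same illustrative assumptions as before, drops the zero-point entirely:

```python
import numpy as np

def symmetric_quantize(x, a):
    """Map float32 values in [-a, a] to int8 with Z = 0."""
    S = a / 127.0
    x_q = np.clip(np.round(x / S), -127, 127).astype(np.int8)
    return x_q, S

x = np.array([-1.0, 0.0, 0.5], dtype=np.float32)
x_q, S = symmetric_quantize(x, a=1.0)
x_hat = S * x_q.astype(np.float32)    # no zero-point to add back
```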

**Note**: To learn how the quantization parameters `S` and `Z` are computed, you can read the
[Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](https://arxiv.org/abs/1712.05877)
paper, or [Lei Mao's blog post](https://leimao.github.io/article/Neural-Networks-Quantization) on the subject.


### Per-tensor and per-channel quantization

Depending on the accuracy / latency trade-off you are targeting, you can play with the granularity of the quantization parameters:

- Quantization parameters can be computed on a *per-tensor* basis, meaning that one pair of `(S, Z)` will be used per
tensor.
- Quantization parameters can be computed on a *per-channel* basis, meaning that it is possible to store a pair of
`(S, Z)` per element along one of the dimensions of a tensor. For example for a tensor of shape `[N, C, H, W]`, having
*per-channel* quantization parameters for the second dimension would result in having `C` pairs of `(S, Z)`. While this
can give a better accuracy, it requires more memory.
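A sketch of the difference, using a simple max-abs rule for the scales (symmetric, so `Z = 0`; the rule is an illustrative choice, not the only one):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 8, 8))     # a [N, C, H, W] weight tensor

# Per-tensor: a single scale for the whole tensor
S_tensor = np.abs(w).max() / 127.0

# Per-channel along C: one scale per channel, so C = 3 scales
S_channel = np.abs(w).max(axis=(0, 2, 3)) / 127.0

print(S_channel.shape)                # (3,)
```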

### Calibration

The section above described how quantization from `float32` to `int8` works, but one question
remains: how is the `[a, b]` range of `float32` values determined? That is where calibration comes into play.

Calibration is the step during quantization where the `float32` ranges are computed. For weights it is quite easy since
the actual range is known at *quantization-time*. But it is less clear for activations, and different approaches exist:

1. Post training **dynamic quantization**: the range for each activation is computed on the fly at *runtime*. While this
gives great results without too much work, it can be a bit slower than static quantization because of the overhead
introduced by computing the range each time.
It is also not an option on certain hardware.
2. Post training **static quantization**: the range for each activation is computed in advance at *quantization-time*,
typically by passing representative data through the model and recording the activation values. In practice, the steps are:
    1. Observers are put on activations to record their values.
    2. A certain number of forward passes on a calibration dataset is done (around `200` examples is enough).
    3. The ranges for each computation are computed according to some *calibration technique*.
3. **Quantization aware training**: the range for each activation is computed at *training-time*, following the same idea
as post training static quantization. But "fake quantize" operators are used instead of observers: they record
values just as observers do, but they also simulate the error induced by quantization to let the model adapt to it.


For both post training static quantization and quantization aware training, it is necessary to define a calibration
technique. The most common ones are:

- Min-max: the computed range is `[min observed value, max observed value]`; this works well with weights.
- Moving average min-max: the computed range is `[moving average min observed value, moving average max observed value]`;
this works well with activations.
- Histogram: records a histogram of values along with min and max values, then chooses according to some criterion:
  - Entropy: the range is computed as the one minimizing the error between the full-precision and the quantized data.
  - Mean Square Error: the range is computed as the one minimizing the mean square error between the full-precision and
  the quantized data.
  - Percentile: the range is computed using a given percentile value `p` on the observed values. The idea is to try to have
  `p%` of the observed values in the computed range. While this is possible when doing affine quantization, it is not always
  possible to exactly match that when doing symmetric quantization. You can check [how it is done in ONNX
  Runtime](https://github.com/microsoft/onnxruntime/blob/2cb12caf9317f1ded37f6db125cb03ba99320c40/onnxruntime/python/tools/quantization/calibrate.py#L698)
  for more details.
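Minimal observers for the first two techniques could look like this; a sketch of the idea, not the actual API of any of the tools below:

```python
import numpy as np

class MinMaxObserver:
    """Tracks the running min/max of every tensor it sees."""
    def __init__(self):
        self.min, self.max = float("inf"), float("-inf")

    def observe(self, x):
        self.min = min(self.min, float(x.min()))
        self.max = max(self.max, float(x.max()))

class MovingAverageMinMaxObserver:
    """Smooths the observed range with an exponential moving average."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.min = self.max = None

    def observe(self, x):
        lo, hi = float(x.min()), float(x.max())
        if self.min is None:
            self.min, self.max = lo, hi
        else:
            m = self.momentum
            self.min = m * self.min + (1 - m) * lo
            self.max = m * self.max + (1 - m) * hi

obs = MinMaxObserver()
for batch in (np.array([-0.5, 2.0]), np.array([1.0, 3.0])):
    obs.observe(batch)
# obs.min == -0.5, obs.max == 3.0
```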


### Practical steps to follow to quantize a model to `int8`

To effectively quantize a model to `int8`, the steps to follow are:

1. Choose which operators to quantize. Good operators to quantize are the ones dominating in terms of computation time,
for instance linear projections and matrix multiplications.
2. Try post-training dynamic quantization, if it is fast enough stop here, otherwise continue to step 3.
3. Try post-training static quantization which can be faster than dynamic quantization but often with a drop in terms of
accuracy. Apply observers to your models in places where you want to quantize.
4. Choose a calibration technique and perform it.
5. Convert the model to its quantized form: the observers are removed and the `float32` operators are converted to their `int8`
counterparts.
6. Evaluate the quantized model: is the accuracy good enough? If yes, stop here, otherwise start again at step 3 but
with quantization aware training this time.

## Supported tools to perform quantization in 🤗 Optimum

🤗 Optimum provides APIs to perform quantization using different tools for different targets:

- The `optimum.onnxruntime` package lets you
[quantize and run ONNX models](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization) using the
ONNX Runtime tool.
- The `optimum.intel` package lets you [quantize](https://huggingface.co/docs/optimum/intel/optimization_inc) 🤗 Transformers
models while respecting accuracy and latency constraints.
- The `optimum.fx` package provides wrappers around the
[PyTorch quantization functions](https://pytorch.org/docs/stable/quantization-support.html#torch-quantization-quantize-fx)
to allow graph-mode quantization of 🤗 Transformers models in PyTorch. This is a lower-level API compared to the two
mentioned above, giving more flexibility, but requiring more work on your end.
- The `optimum.gptq` package lets you [quantize and run LLMs](../llm_quantization/usage_guides/quantization) with GPTQ.

## Going further: How do machines represent numbers?

<Tip>

This section is not fundamental to understanding the rest. It briefly explains how numbers are represented in computers.
Since quantization is about going from one representation to another, it can be useful to have some basics, but it is
definitely not mandatory.

</Tip>

The most fundamental unit of representation for computers is the bit. Everything in computers is represented as a
sequence of bits, including numbers. But the representation varies depending on whether the numbers in question are
integers or real numbers.

#### Integer representation

Integers are usually represented with the following bit lengths: `8`, `16`, `32`, `64`. When representing integers, two cases
are considered:

1. Unsigned (non-negative) integers: they are simply represented as a sequence of bits. Each bit corresponds to a power
of two (from `0` to `n-1` where `n` is the bit-length), and the resulting number is the sum of those powers of two.

Example: `19` is represented as an unsigned int8 as `00010011` because:
```
19 = 0 x 2^7 + 0 x 2^6 + 0 x 2^5 + 1 x 2^4 + 0 x 2^3 + 0 x 2^2 + 1 x 2^1 + 1 x 2^0
```

2. Signed integers: it is less straightforward to represent signed integers, and multiple approaches exist, the most
common being the *two's complement*. For more information, you can check the
[Wikipedia page](https://en.wikipedia.org/wiki/Signed_number_representations) on the subject.
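In Python you can inspect both patterns by masking to 8 bits:

```python
def int8_bits(n):
    """8-bit pattern of n, two's complement for negative values."""
    return format(n & 0xFF, "08b")

int8_bits(19)    # '00010011'
int8_bits(-19)   # '11101101' (flip the bits of 19 and add 1)
```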

#### Real numbers representation

Real numbers are usually represented with the following bit lengths: `16`, `32`, `64`.
The two main ways of representing real numbers are:

1. Fixed-point: a fixed number of digits is reserved for representing the integer part and the fractional part.
2. Floating-point: the number of digits for representing the integer and fractional parts can vary.

The floating-point representation can represent bigger ranges of values, and this is the one we will be focusing on
since it is the most commonly used. There are three components in the floating-point representation:

1. The sign bit: this is the bit specifying the sign of the number.
2. The exponent part
  - 5 bits in `float16`
  - 8 bits in `bfloat16`
  - 8 bits in `float32`
  - 11 bits in `float64`
3. The mantissa
  - 11 bits in `float16` (10 explicitly stored)
  - 8 bits in `bfloat16` (7 explicitly stored)
  - 24 bits in `float32` (23 explicitly stored)
  - 53 bits in `float64` (52 explicitly stored)

For more information on the bits allocation for each data type, check the nice illustration on the Wikipedia page about
the [bfloat16 floating-point format](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format).

For a real number `x` we have:

```
x = (-1)^sign_bit * mantissa * 2^exponent
```
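For example, you can decompose a `float32` by hand with the standard library (the `127` below is the `float32` exponent bias):

```python
import struct

# Reinterpret the float32 bit pattern of -6.25 as an unsigned integer
bits = struct.unpack(">I", struct.pack(">f", -6.25))[0]

sign     = bits >> 31            # 1 bit
exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF       # 23 explicitly stored bits

# Reassemble: (-1)^sign * (1 + mantissa / 2^23) * 2^(exponent - 127)
value = (-1) ** sign * (1 + mantissa / 2 ** 23) * 2.0 ** (exponent - 127)
print(value)                     # -6.25
```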


## References

- The
[Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](https://arxiv.org/abs/1712.05877) paper
- The [Basics of Quantization in Machine Learning (ML) for Beginners](https://iq.opengenus.org/basics-of-quantization-in-ml/)
blog post
- The [How to accelerate and compress neural networks with quantization](https://tivadardanka.com/blog/neural-networks-quantization)
blog post
- The Wikipedia pages on integers representation [here](https://en.wikipedia.org/wiki/Integer_(computer_science)) and
[here](https://en.wikipedia.org/wiki/Signed_number_representations)
- The Wikipedia pages on
  - [bfloat16 floating-point format](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format)
  - [Half-precision floating-point format](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)
  - [Single-precision floating-point format](https://en.wikipedia.org/wiki/Single-precision_floating-point_format)
  - [Double-precision floating-point format](https://en.wikipedia.org/wiki/Double-precision_floating-point_format)
