# Optimum

## Docs

- [Installation](https://huggingface.co/docs/optimum/v0.0.1/installation.md)
- [Quickstart](https://huggingface.co/docs/optimum/v0.0.1/quickstart.md)
- [🤗 Optimum ONNX](https://huggingface.co/docs/optimum/v0.0.1/index.md)
- [Overview](https://huggingface.co/docs/optimum/v0.0.1/onnx/overview.md)
- [Export a model to ONNX with optimum.exporters.onnx](https://huggingface.co/docs/optimum/v0.0.1/onnx/usage_guides/export_a_model.md)
- [Adding support for an unsupported architecture](https://huggingface.co/docs/optimum/v0.0.1/onnx/usage_guides/contribute.md)
- [Configuration classes for ONNX exports](https://huggingface.co/docs/optimum/v0.0.1/onnx/package_reference/configuration.md)
- [Export functions](https://huggingface.co/docs/optimum/v0.0.1/onnx/package_reference/export.md)
- [Overview](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/overview.md)
- [Quickstart](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/quickstart.md)
- [Accelerated inference on AMD GPUs supported by ROCm](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/amdgpu.md)
- [Quantization](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/quantization.md)
- [Optimum Inference with ONNX Runtime](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/models.md)
- [Inference pipelines with the ONNX Runtime accelerator](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/pipelines.md)
- [Accelerated inference on NVIDIA GPUs](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/gpu.md)
- [Optimization](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/optimization.md)
- [ONNX Runtime Diffusion Pipelines](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/modeling_diffusion.md)
- [Configuration](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/configuration.md)
- [ONNX Runtime Models](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/modeling.md)
- [Quantization](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/quantization.md)
- [ONNX Runtime Pipelines](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/pipelines.md)
- [Optimization](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/optimization.md)
- [ONNX 🤝 ONNX Runtime](https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/concept_guides/onnx.md)

### Installation
https://huggingface.co/docs/optimum/v0.0.1/installation.md

# Installation


To install Optimum ONNX, you can do:

```bash
pip install "optimum-onnx[onnxruntime] @ git+https://github.com/huggingface/optimum-onnx.git"
```

Optimum ONNX is a fast-moving project, and you may want to install from source with the following command:


```bash
python -m pip install git+https://github.com/huggingface/optimum-onnx.git
```

### Quickstart
https://huggingface.co/docs/optimum/v0.0.1/quickstart.md

# Quickstart

## Export

You can export your models to ONNX easily:

```bash
optimum-cli export onnx --model meta-llama/Llama-3.2-1B --output_dir meta_llama3_2_1b_onnx
```


## Inference

To load a model and run inference, you can just replace your `AutoModelForCausalLM` class with the corresponding `ORTModelForCausalLM` class. You can also load a PyTorch checkpoint and convert it to ONNX on-the-fly when loading your model.

```diff
- from transformers import AutoModelForCausalLM
+ from optimum.onnxruntime import ORTModelForCausalLM
  from transformers import AutoTokenizer

  model_id = "meta-llama/Llama-3.2-1B"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id)
+ model = ORTModelForCausalLM.from_pretrained(model_id)
```

### 🤗 Optimum ONNX
https://huggingface.co/docs/optimum/v0.0.1/index.md

# 🤗 Optimum ONNX

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a
      class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg"
      href="./onnx/overview"
    >
      <div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        ONNX export
      </div>
      <p class="text-gray-700">
        How to export your model to ONNX
      </p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./onnxruntime/overview">
      <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">
        ONNX Runtime
      </div>
      <p class="text-gray-700">
        Learn how to run and quantize your model with ONNX Runtime.
      </p>
    </a>
  </div>
</div>

### Overview
https://huggingface.co/docs/optimum/v0.0.1/onnx/overview.md

# Overview

🤗 Optimum handles the export of PyTorch models to ONNX in the `exporters.onnx` module. It provides classes, functions, and a command line interface to perform the export easily.

Supported architectures from [🤗 Transformers](https://huggingface.co/docs/transformers/index):

- Arcee
- AST
- Audio Spectrogram Transformer
- Albert
- Bart
- Beit
- Bert
- BlenderBot
- BlenderBotSmall
- Bloom
- Camembert
- ChineseCLIP
- CLIP
- CodeGen
- Cohere
- ConvBert
- ConvNext
- ConvNextV2
- D-FINE
- Data2VecAudio
- Data2VecText
- Data2VecVision
- Deberta
- Deberta-v2
- Decision Transformer
- DeepSeek-V3
- Deit
- Detr
- DINOv2
- DistilBert
- Donut-Swin
- Electra
- Encoder Decoder
- ESM
- Falcon
- Flaubert
- Gemma
- Gemma 2
- GLM
- GPT-2
- GPT-BigCode
- GPT-J
- GPT-Neo
- GPT-NeoX
- OPT
- Granite
- GroupVit
- Helium
- Hiera
- Hubert
- IBert
- InternLM2
- LayoutLM
- LayoutLM-v3
- Lilt
- Levit
- LongT5
- Llama
- M2M100
- Marian
- MarkupLM
- MaskFormer
- MBart
- MetaClip2
- MGP-STR
- Mistral
- MobileBert
- MobileVit
- MobileNet v1
- MobileNet v2
- ModernBert
- MPNet
- MT5
- Musicgen (text-conditional only)
- Nemotron
- Nystromformer
- OLMo
- OLMo2
- OWL-ViT
- PatchTST
- PatchTSMixer
- Pegasus
- Perceiver
- Phi
- Phi3
- Pix2Struct
- PoolFormer
- PVT
- Qwen2 (Qwen1.5)
- Qwen3
- Qwen3-MoE
- RegNet
- RemBERT
- ResNet
- Roberta
- Roformer
- RT-DETR
- RT-DETRv2
- SAM
- Segformer
- SEW
- SEW-D
- Speech2Text
- SigLIP
- SmolLM3
- SpeechT5
- Splinter
- SqueezeBert
- StableLM
- Swin
- SwinV2
- T5
- Table Transformer
- TROCR
- UniSpeech
- UniSpeech SAT
- Vision Encoder Decoder
- VisualBert
- Vit
- VitMAE
- VitMSN
- Wav2Vec2
- Wav2Vec2 Conformer
- WavLM
- Whisper
- XLM
- XLM-Roberta
- Yolos

Supported architectures from [🤗 Diffusers](https://huggingface.co/docs/diffusers/index):
- Stable Diffusion

Supported architectures from [🤗 Timm](https://huggingface.co/docs/timm/index):
- Adversarial Inception v3
- AdvProp (EfficientNet)
- Big Transfer (BiT)
- CSP-DarkNet
- CSP-ResNet
- CSP-ResNeXt
- DenseNet
- Deep Layer Aggregation
- Dual Path Network (DPN)
- ECA-ResNet
- EfficientNet
- EfficientNet (Knapsack Pruned)
- Ensemble Adversarial Inception ResNet v2
- ESE-VoVNet (Partial support with static shapes)
- FBNet
- (Gluon) Inception v3
- (Gluon) ResNet
- (Gluon) ResNeXt
- (Gluon) SENet
- (Gluon) SE-ResNeXt
- (Gluon) Xception
- HRNet
- Instagram ResNeXt WSL
- Inception ResNet v2
- Inception v3
- Inception v4
- (Legacy) SE-ResNet
- (Legacy) SE-ResNeXt
- (Legacy) SENet
- MixNet
- MnasNet
- MobileNet v2
- MobileNet v3
- NASNet
- Noisy Student (EfficientNet)
- PNASNet
- RegNetX
- RegNetY
- Res2Net
- Res2NeXt
- ResNeSt
- ResNet
- ResNet-D
- ResNeXt
- RexNet
- SE-ResNet
- SelecSLS
- SE-ResNeXt
- SK-ResNet
- SK-ResNeXt
- SPNASNet
- SSL ResNet
- SWSL ResNet
- SWSL ResNeXt
- TResNet
- Wide ResNet
- Xception

Supported architectures from [Sentence Transformers](https://github.com/UKPLab/sentence-transformers):
- All Transformer and CLIP-based models.

### Export a model to ONNX with optimum.exporters.onnx
https://huggingface.co/docs/optimum/v0.0.1/onnx/usage_guides/export_a_model.md

# Export a model to ONNX with optimum.exporters.onnx

## Summary

Exporting a model to ONNX is as simple as

```bash
optimum-cli export onnx --model gpt2 gpt2_onnx/
```

Check out the help for more options:

```bash
optimum-cli export onnx --help
```

## Why use ONNX?

If you need to deploy 🤗 Transformers or 🤗 Diffusers models in production environments, we recommend
exporting them to a serialized format that can be loaded and executed on specialized
runtimes and hardware. In this guide, we'll show you how to export these
models to [ONNX (Open Neural Network eXchange)](http://onnx.ai).

ONNX is an open standard that defines a common set of operators and a common file format
to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construct a computational graph (often called an _intermediate representation_) which
represents the flow of data through the neural network.

By exposing a graph with standardized operators and data types, ONNX makes it easy to
switch between frameworks. For example, a model trained in PyTorch can be exported to
ONNX format and then imported in TensorRT or OpenVINO.
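To make the notion of a computational graph concrete, here is a toy stand-in (plain Python, not the actual ONNX protobuf format) in which each node names a standardized operator and the graph records how tensors flow between nodes:

```python
# Toy intermediate representation: each node is (op_type, input_names, output_names).
# Real ONNX stores this as a protobuf GraphProto; the idea is the same.
graph = [
    ("MatMul", ["input", "weight"], ["hidden"]),
    ("Add", ["hidden", "bias"], ["pre_activation"]),
    ("Relu", ["pre_activation"], ["output"]),
]

# Walk the graph in order and check that every input was already produced:
# this is the "flow of data through the network" that the export captures.
available = {"input", "weight", "bias"}  # graph inputs and initializers
for op_type, inputs, outputs in graph:
    assert all(name in available for name in inputs), f"{op_type} has a dangling input"
    available.update(outputs)

print(sorted(available))
```

Because every node uses a standard operator name (`MatMul`, `Add`, `Relu`), any runtime that implements the operator set can execute the graph, regardless of which framework produced it.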

<Tip>

Once exported, a model can be optimized for inference via techniques such as
graph optimization and quantization. Check the `optimum.onnxruntime` subpackage to optimize and run ONNX models!

</Tip>

🤗 Optimum provides support for the ONNX export by leveraging configuration objects.
These configuration objects come ready-made for a number of model architectures, and are
designed to be easily extendable to other architectures.

**To check the supported architectures, go to the [configuration reference page](../package_reference/configuration#supported-architectures).**

## Exporting a model to ONNX using the CLI

To export a 🤗 Transformers or 🤗 Diffusers model to ONNX, you'll first need to install some extra
dependencies:

```bash
pip install optimum[onnx]
```

The Optimum ONNX export can be used through Optimum command-line:

```bash
optimum-cli export onnx --help

usage: optimum-cli <command> [<args>] export onnx [-h] -m MODEL [--task TASK] [--monolith] [--device DEVICE] [--opset OPSET] [--atol ATOL]
                                                  [--framework {pt}] [--pad_token_id PAD_TOKEN_ID] [--cache_dir CACHE_DIR] [--trust-remote-code]
                                                  [--no-post-process] [--optimize {O1,O2,O3,O4}] [--batch_size BATCH_SIZE]
                                                  [--sequence_length SEQUENCE_LENGTH] [--num_choices NUM_CHOICES] [--width WIDTH] [--height HEIGHT]
                                                  [--num_channels NUM_CHANNELS] [--feature_size FEATURE_SIZE] [--nb_max_frames NB_MAX_FRAMES]
                                                  [--audio_sequence_length AUDIO_SEQUENCE_LENGTH]
                                                  output

optional arguments:
  -h, --help            show this help message and exit

Required arguments:
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  output                Path indicating the directory where to store generated ONNX model.

Optional arguments:
  --task TASK           The task to export the model for. If not specified, the task will be auto-inferred based on the model. Available tasks depend on the model, but are among: ['default', 'fill-mask', 'text-generation', 'text2text-generation', 'text-classification', 'token-classification', 'multiple-choice', 'object-detection', 'question-answering', 'image-classification', 'image-segmentation', 'masked-im', 'semantic-segmentation', 'automatic-speech-recognition', 'audio-classification', 'audio-frame-classification', 'automatic-speech-recognition', 'audio-xvector', 'image-to-text', 'zero-shot-object-detection', 'image-to-image', 'inpainting', 'text-to-image']. For decoder models, use `xxx-with-past` to export the model using past key values in the decoder.
  --monolith            Force to export the model as a single ONNX file. By default, the ONNX exporter may break the model in several ONNX files, for example for encoder-decoder models where the encoder should be run only once while the decoder is looped over.
  --device DEVICE       The device to use to do the export. Defaults to "cpu".
  --opset OPSET         If specified, ONNX opset version to export the model with. Otherwise, the default opset will be used.
  --atol ATOL           If specified, the absolute difference tolerance when validating the model. Otherwise, the default atol for the model will be used.
  --framework {pt}      The framework to use for the ONNX export. If not provided, will attempt to use the local checkpoint's original framework or what is available in the environment.
  --pad_token_id PAD_TOKEN_ID
                        This is needed by some models, for some tasks. If not provided, will attempt to use the tokenizer to guess it.
  --cache_dir CACHE_DIR
                        Path indicating where to store cache.
  --trust-remote-code   Allows to use custom code for the modeling hosted in the model repository. This option should only be set for repositories you trust and in which you have read the code, as it will execute on your local machine arbitrary code present in the model repository.
  --no-post-process     Allows to disable any post-processing done by default on the exported ONNX models. For example, the merging of decoder and decoder-with-past models into a single ONNX model file to reduce memory usage.
  --optimize {O1,O2,O3,O4}
                        Allows to run ONNX Runtime optimizations directly during the export. Some of these optimizations are specific to ONNX Runtime, and the resulting ONNX will not be usable with other runtime as OpenVINO or TensorRT. Possible options:
                            - O1: Basic general optimizations
                            - O2: Basic and extended general optimizations, transformers-specific fusions
                            - O3: Same as O2 with GELU approximation
                            - O4: Same as O3 with mixed precision (fp16, GPU-only, requires `--device cuda`)

```

Exporting a checkpoint can be done as follows:

```bash
optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/
```

You should see the following logs (along with potential logs from PyTorch that were hidden here for clarity):

```bash
Automatic task detection to question-answering.
Framework not specified. Using pt to export the model.
Using framework PyTorch: 1.12.1

Validating ONNX model...
        -[✓] ONNX model output names match reference model (start_logits, end_logits)
        - Validating ONNX Model output "start_logits":
                -[✓] (2, 16) matches (2, 16)
                -[✓] all values close (atol: 0.0001)
        - Validating ONNX Model output "end_logits":
                -[✓] (2, 16) matches (2, 16)
                -[✓] all values close (atol: 0.0001)
All good, model saved at: distilbert_base_uncased_squad_onnx/model.onnx
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument.
As you can see, the task was automatically detected. This was possible because the model was on the Hub.

For local models, you need to provide the `--task` argument, otherwise the export will default to the model architecture without any task-specific head:

```bash
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
```

Note that providing the `--task` argument for a model on the Hub will disable the automatic task detection.

The resulting `model.onnx` file can then be run on one of the [many
accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX
standard. For example, we can load and run the model with [ONNX
Runtime](https://onnxruntime.ai/) using the `optimum.onnxruntime` package as follows:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
```

Printing the outputs gives the following:

```bash
QuestionAnsweringModelOutput(loss=None, start_logits=tensor([[-4.7652, -1.0452, -7.0409, -4.6864, -4.0277, -6.2021, -4.9473,  2.6287,
          7.6111, -1.2488, -2.0551, -0.9350,  4.9758, -0.7707,  2.1493, -2.0703,
         -4.3232, -4.9472]]), end_logits=tensor([[ 0.4382, -1.6502, -6.3654, -6.0661, -4.1482, -3.5779, -0.0774, -3.6168,
         -1.8750, -2.8910,  6.2582,  0.5425, -3.7699,  3.8232, -1.5073,  6.2311,
          3.3604, -0.0772]]), hidden_states=None, attentions=None)
```

As you can see, converting a model to ONNX does not mean leaving the Hugging Face ecosystem. You end up with a similar API as regular 🤗 Transformers models!

<Tip>

It is also possible to export the model to ONNX directly from the `ORTModelForQuestionAnswering` class by doing the following:

```python
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad", export=True)
```

For more information, check the `optimum.onnxruntime` documentation [page on this topic](/onnxruntime/overview).

</Tip>

### Exporting a model to be used with Optimum's ORTModel

Models exported through `optimum-cli export onnx` can be used directly in [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel). This is especially useful for encoder-decoder models, where the export splits the encoder and decoder into two `.onnx` files, as the encoder is usually run only once while the decoder may be run several times during auto-regressive generation.

### Exporting a model using past keys/values in the decoder

When exporting a decoder model used for generation, it can be useful to encapsulate the [reuse of past keys and values](https://discuss.huggingface.co/t/what-is-the-purpose-of-use-cache-in-decoder/958/2) in the exported ONNX. This avoids recomputing the same intermediate activations during generation.
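As a back-of-the-envelope illustration (plain Python, hypothetical numbers) of what this reuse buys you: without a cache, every generation step re-processes the whole sequence so far, while with a cache only the newest token goes through the decoder after the first step.

```python
def positions_processed(prompt_len: int, new_tokens: int, use_cache: bool) -> int:
    """Count the token positions the decoder runs through while generating."""
    total = 0
    for step in range(new_tokens):
        if use_cache:
            # First step encodes the prompt; afterwards only the newest token.
            total += prompt_len if step == 0 else 1
        else:
            # The full sequence so far is re-processed at every step.
            total += prompt_len + step
    return total

print(positions_processed(100, 50, use_cache=False))  # 6225
print(positions_processed(100, 50, use_cache=True))   # 149
```

The gap widens quadratically with the number of generated tokens, which is why the with-past variants are the default.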

In the ONNX export, the past keys/values are reused by default. This behavior corresponds to `--task text2text-generation-with-past`, `--task text-generation-with-past`, or `--task automatic-speech-recognition-with-past`. If you would like to disable past keys/values reuse in the export, you must explicitly pass the task `text2text-generation`, `text-generation` or `automatic-speech-recognition` to `optimum-cli export onnx`.

A model exported using past keys/values can be used directly with Optimum's [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel):

```bash
optimum-cli export onnx --model gpt2 gpt2_onnx/
```

and

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("./gpt2_onnx/")
>>> model = ORTModelForCausalLM.from_pretrained("./gpt2_onnx/")
>>> inputs = tokenizer("My name is Arthur and I live in", return_tensors="pt")
>>> gen_tokens = model.generate(**inputs)
>>> print(tokenizer.batch_decode(gen_tokens))
# prints ['My name is Arthur and I live in the United States of America. I am a member of the']
```

## Selecting a task

Specifying a `--task` should not be necessary in most cases when exporting from a model on the Hugging Face Hub.

However, if you need to check which tasks the ONNX export supports for a given model architecture, we've got you covered. First, you can check the full list of supported tasks [here](/exporters/task_manager).

For each model architecture, you can find the list of supported tasks via the `TasksManager`. For example, for DistilBERT, for the ONNX export, we have:

```python
>>> from optimum.exporters.tasks import TasksManager

>>> distilbert_tasks = list(TasksManager.get_supported_tasks_for_model_type("distilbert", "onnx").keys())
>>> print(distilbert_tasks)
['default', 'fill-mask', 'text-classification', 'multiple-choice', 'token-classification', 'question-answering']
```

You can then pass one of these tasks to the `--task` argument in the `optimum-cli export onnx` command, as mentioned above.

## Custom export of Transformers models

### Customize the export of official Transformers models

Optimum gives advanced users finer-grained control over the configuration of the ONNX export. This is especially useful if you would like to export models with different keyword arguments, for example using `output_attentions=True` or `output_hidden_states=True`.

To support these use cases, `~exporters.main_export` supports two arguments: `model_kwargs` and `custom_onnx_configs`, which are used in the following fashion:

* `model_kwargs` allows overriding some of the default arguments of the model's `forward`, in practice called as `model(**reference_model_inputs, **model_kwargs)`.
* `custom_onnx_configs` should be a `Dict[str, OnnxConfig]`, mapping from the submodel name (usually `model`, `encoder_model`, `decoder_model`, or `decoder_model_with_past` - [reference](https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/constants.py)) to a custom ONNX configuration for the given submodel.

A complete example is given below, exporting a model with `output_attentions=True`.

```python
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import WhisperOnnxConfig
from transformers import AutoConfig

from optimum.exporters.onnx.base import ConfigBehavior
from typing import Dict

class CustomWhisperOnnxConfig(WhisperOnnxConfig):
    @property
    def outputs(self) -> Dict[str, Dict[int, str]]:
        common_outputs = super().outputs

        if self._behavior is ConfigBehavior.ENCODER:
            for i in range(self._config.encoder_layers):
                common_outputs[f"encoder_attentions.{i}"] = {0: "batch_size"}
        elif self._behavior is ConfigBehavior.DECODER:
            for i in range(self._config.decoder_layers):
                common_outputs[f"decoder_attentions.{i}"] = {
                    0: "batch_size",
                    2: "decoder_sequence_length",
                    3: "past_decoder_sequence_length + 1"
                }
            for i in range(self._config.decoder_layers):
                common_outputs[f"cross_attentions.{i}"] = {
                    0: "batch_size",
                    2: "decoder_sequence_length",
                    3: "encoder_sequence_length_out"
                }

        return common_outputs

    @property
    def torch_to_onnx_output_map(self):
        if self._behavior is ConfigBehavior.ENCODER:
            # The encoder export uses WhisperEncoder that returns the key "attentions"
            return {"attentions": "encoder_attentions"}
        else:
            return {}

model_id = "openai/whisper-tiny.en"
config = AutoConfig.from_pretrained(model_id)

custom_whisper_onnx_config = CustomWhisperOnnxConfig(
        config=config,
        task="automatic-speech-recognition",
)

encoder_config = custom_whisper_onnx_config.with_behavior("encoder")
decoder_config = custom_whisper_onnx_config.with_behavior("decoder", use_past=False)
decoder_with_past_config = custom_whisper_onnx_config.with_behavior("decoder", use_past=True)

custom_onnx_configs={
    "encoder_model": encoder_config,
    "decoder_model": decoder_config,
    "decoder_with_past_model": decoder_with_past_config,
}

main_export(
    model_id,
    output="custom_whisper_onnx",
    no_post_process=True,
    model_kwargs={"output_attentions": True},
    custom_onnx_configs=custom_onnx_configs
)
```

For tasks that require only a single ONNX file (e.g. encoder-only), an exported model with custom inputs/outputs can then be used with the class [optimum.onnxruntime.ORTModelForCustomTasks](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModelForCustomTasks) for inference with ONNX Runtime on CPU or GPU.

### Customize the export of Transformers models with custom modeling

Optimum supports the export of Transformers models with custom modeling code that use [`trust_remote_code=True`](https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoModel.from_pretrained.trust_remote_code). Such models are not officially supported in the Transformers library, but can still be used with its features, such as [pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) and [generation](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationMixin.generate).

Examples of such models are [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b) and [mosaicml/mpt-30b](https://huggingface.co/mosaicml/mpt-30b).

To export custom models, a dictionary `custom_onnx_configs` needs to be passed to [main_export()](/docs/optimum/v0.0.1/en/onnx/package_reference/export#optimum.exporters.onnx.main_export), with the ONNX config definition for all the subparts of the model to export (for example, encoder and decoder subparts). The example below shows how to export the `mosaicml/mpt-7b` model:

```python
from optimum.exporters.onnx import main_export

from transformers import AutoConfig

from optimum.exporters.onnx.config import TextDecoderOnnxConfig
from optimum.utils import NormalizedTextConfig, DummyPastKeyValuesGenerator
from typing import Dict


class MPTDummyPastKeyValuesGenerator(DummyPastKeyValuesGenerator):
    """
    MPT swaps the two last dimensions for the key cache compared to usual transformers
    decoder models, thus the redefinition here.
    """
    def generate(self, input_name: str, framework: str = "pt"):
        past_key_shape = (
            self.batch_size,
            self.num_attention_heads,
            self.hidden_size // self.num_attention_heads,
            self.sequence_length,
        )
        past_value_shape = (
            self.batch_size,
            self.num_attention_heads,
            self.sequence_length,
            self.hidden_size // self.num_attention_heads,
        )
        return [
            (
                self.random_float_tensor(past_key_shape, framework=framework),
                self.random_float_tensor(past_value_shape, framework=framework),
            )
            for _ in range(self.num_layers)
        ]

class CustomMPTOnnxConfig(TextDecoderOnnxConfig):
    DUMMY_INPUT_GENERATOR_CLASSES = (MPTDummyPastKeyValuesGenerator,) + TextDecoderOnnxConfig.DUMMY_INPUT_GENERATOR_CLASSES
    DUMMY_PKV_GENERATOR_CLASS = MPTDummyPastKeyValuesGenerator

    DEFAULT_ONNX_OPSET = 18
    NORMALIZED_CONFIG_CLASS = NormalizedTextConfig.with_args(
        hidden_size="d_model",
        num_layers="n_layers",
        num_attention_heads="n_heads"
    )

    def add_past_key_values(self, inputs_or_outputs: Dict[str, Dict[int, str]], direction: str):
        """
        Adapted from https://github.com/huggingface/optimum/blob/v1.9.0/optimum/exporters/onnx/base.py#L625
        """
        if direction not in ["inputs", "outputs"]:
            raise ValueError(f'direction must either be "inputs" or "outputs", but {direction} was given')

        if direction == "inputs":
            decoder_sequence_name = "past_sequence_length"
            name = "past_key_values"
        else:
            decoder_sequence_name = "past_sequence_length + 1"
            name = "present"

        for i in range(self._normalized_config.num_layers):
            inputs_or_outputs[f"{name}.{i}.key"] = {0: "batch_size", 3: decoder_sequence_name}
            inputs_or_outputs[f"{name}.{i}.value"] = {0: "batch_size", 2: decoder_sequence_name}


model_id = "fxmarty/tiny-mpt-random-remote-code"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
onnx_config_with_past = CustomMPTOnnxConfig(config, task="text-generation", use_past=True)

custom_onnx_configs = {"model": onnx_config_with_past}

main_export(
    model_id,
    output="mpt_onnx",
    task="text-generation-with-past",
    trust_remote_code=True,
    custom_onnx_configs=custom_onnx_configs,
    no_post_process=True,
    opset=14
)
```
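To see what the `add_past_key_values` override above produces, here is a standalone re-implementation (illustration only, detached from the config class) applied to a hypothetical 2-layer model. Note that the key cache's sequence axis is 3 rather than the usual 2, matching MPT's swapped layout:

```python
def mpt_past_kv_axes(num_layers: int, direction: str) -> dict:
    # Standalone version of the axis naming in CustomMPTOnnxConfig.add_past_key_values.
    if direction not in ("inputs", "outputs"):
        raise ValueError(f'direction must either be "inputs" or "outputs", but {direction} was given')
    if direction == "inputs":
        seq_name, name = "past_sequence_length", "past_key_values"
    else:
        seq_name, name = "past_sequence_length + 1", "present"

    axes = {}
    for i in range(num_layers):
        # Key cache: (batch, heads, head_dim, seq) -> sequence axis is 3.
        axes[f"{name}.{i}.key"] = {0: "batch_size", 3: seq_name}
        # Value cache: (batch, heads, seq, head_dim) -> sequence axis is 2.
        axes[f"{name}.{i}.value"] = {0: "batch_size", 2: seq_name}
    return axes

print(mpt_past_kv_axes(2, "inputs"))
```

For a usual decoder architecture, both key and value would mark axis 2 as the dynamic sequence axis; the shape swap in `MPTDummyPastKeyValuesGenerator` and the axis swap here must stay consistent with each other.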

Moreover, the advanced argument `fn_get_submodels` to `main_export` allows customizing how the submodels are extracted in case the model needs to be exported as several submodels. Examples of such functions can be found in the Optimum source code.
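As a rough sketch of what such a callback might look like (the classes below are stand-ins, not real Transformers modules; the exact expectations are defined in the Optimum source), it receives the loaded model and returns a dict mapping submodel names to the modules to export:

```python
# Hypothetical sketch of a fn_get_submodels-style callback. DummySeq2Seq stands
# in for a real encoder-decoder model; only the shape of the return value matters.
class DummyEncoder:
    pass

class DummySeq2Seq:
    def __init__(self):
        self.encoder = DummyEncoder()

    def get_encoder(self):
        return self.encoder

def fn_get_submodels(model):
    # Map submodel names to the modules that should each become one ONNX file.
    return {"encoder_model": model.get_encoder(), "decoder_model": model}

submodels = fn_get_submodels(DummySeq2Seq())
print(sorted(submodels))  # ['decoder_model', 'encoder_model']
```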

### Adding support for an unsupported architecture
https://huggingface.co/docs/optimum/v0.0.1/onnx/usage_guides/contribute.md

# Adding support for an unsupported architecture

If you wish to export a model whose architecture is not already supported by the library, these are the main steps to follow:

1. Implement a custom ONNX configuration.
2. Register the ONNX configuration in the `~optimum.exporters.TasksManager`.
3. Export the model to ONNX.
4. Validate the outputs of the original and exported models.

In this section, we'll look at how BERT was implemented to show what's involved with each step.

## Implementing a custom ONNX configuration

Let's start with the ONNX configuration object. We provide a 3-level [class hierarchy](/exporters/onnx/package_reference/configuration),
and to add support for a model, inheriting from the right middle-end class is the way to go most of the time. You might have to
implement a middle-end class yourself if the architecture you are adding handles a modality and/or case never seen before.

<Tip>

A good way to implement a custom ONNX configuration is to look at the existing configuration implementations in the
`optimum/exporters/onnx/model_configs.py` file.

Also, if the architecture you are trying to add is (very) similar to an architecture that is already supported
(for instance adding support for ALBERT when BERT is already supported), simply inheriting from that class
might work.

</Tip>


When inheriting from a middle-end class, look for the one handling the same modality / category of models as the one you
are trying to support.

### Example: Adding support for BERT

Since BERT is an encoder-based model for text, its configuration inherits from the middle-end class [TextEncoderOnnxConfig](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.TextEncoderOnnxConfig).
In `optimum/exporters/onnx/model_configs.py`:

```python
# This class is actually in optimum/exporters/onnx/config.py
class TextEncoderOnnxConfig(OnnxConfig):
    # Describes how to generate the dummy inputs.
    DUMMY_INPUT_GENERATOR_CLASSES = (DummyTextInputGenerator,)

class BertOnnxConfig(TextEncoderOnnxConfig):
    # Specifies how to normalize the BertConfig, this is needed to access common attributes
    # during dummy input generation.
    NORMALIZED_CONFIG_CLASS = NormalizedTextConfig
    # Sets the absolute tolerance used when validating the exported ONNX model against the
    # reference model.
    ATOL_FOR_VALIDATION = 1e-4

    @property
    def inputs(self) -> Dict[str, Dict[int, str]]:
        if self.task == "multiple-choice":
            dynamic_axis = {0: "batch_size", 1: "num_choices", 2: "sequence_length"}
        else:
            dynamic_axis = {0: "batch_size", 1: "sequence_length"}
        return {
            "input_ids": dynamic_axis,
            "attention_mask": dynamic_axis,
            "token_type_ids": dynamic_axis,
        }
```

First, let's explain what `TextEncoderOnnxConfig` is all about. While most of the features are already implemented in `OnnxConfig`,
this class is modality-agnostic, meaning that it does not know what kind of inputs it should handle. The way input generation is
handled is via the `DUMMY_INPUT_GENERATOR_CLASSES` attribute, which is a tuple of `DummyInputGenerator`s.
Here we are making a modality-aware configuration inheriting from `OnnxConfig` by specifying
`DUMMY_INPUT_GENERATOR_CLASSES = (DummyTextInputGenerator,)`.

Then comes the model-specific class, `BertOnnxConfig`. Two class attributes are specified here:
- `NORMALIZED_CONFIG_CLASS`: must be a `NormalizedConfig`; it allows
the input generators to access the model config attributes in a generic way.
- `ATOL_FOR_VALIDATION`: the absolute tolerance allowed on the output values when validating the exported model
against the original one.
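To illustrate why normalization helps, here is a simplified, hypothetical sketch of what a normalized config does (the real `NormalizedConfig` classes in 🤗 Optimum carry more logic; the class and attribute names below are made up for illustration): generic attribute names are resolved to the model-specific names found in each `PretrainedConfig`.

```python
# Hypothetical, simplified sketch of config normalization: generic names
# (e.g. "hidden_size") are resolved to model-specific attribute names.
class SimpleNormalizedConfig:
    # Maps generic attribute names to the names used by a given model config.
    ATTRIBUTE_MAP = {}

    def __init__(self, config):
        self.config = config

    def __getattr__(self, name):
        # Resolve the generic name to the model-specific one, then read it
        # from the wrapped config object.
        real_name = self.ATTRIBUTE_MAP.get(name, name)
        return getattr(self.config, real_name)


class GPT2StyleNormalizedConfig(SimpleNormalizedConfig):
    # GPT-2's config calls the hidden size "n_embd" instead of "hidden_size".
    ATTRIBUTE_MAP = {"hidden_size": "n_embd", "num_attention_heads": "n_head"}


class FakeGPT2Config:
    n_embd = 768
    n_head = 12


normalized = GPT2StyleNormalizedConfig(FakeGPT2Config())
print(normalized.hidden_size)  # 768, accessed through the generic name
```

This way, a dummy input generator can ask any normalized config for `hidden_size` without knowing the architecture-specific attribute name.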

Every configuration object must implement the [inputs](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfig.inputs) property and return a mapping, where each key corresponds to an
input name, and each value indicates the axes in that input that are dynamic.
For BERT, we can see that three inputs are required: `input_ids`, `attention_mask` and `token_type_ids`.
These inputs share the same shape, `(batch_size, sequence_length)` (except for the `multiple-choice` task), which is
why the same axes mapping is used for all of them in the configuration.

Once you have implemented an ONNX configuration, you can instantiate it by providing the base model's configuration as follows:

```python
>>> from transformers import AutoConfig
>>> from optimum.exporters.onnx.model_configs import BertOnnxConfig
>>> config = AutoConfig.from_pretrained("bert-base-uncased")
>>> onnx_config = BertOnnxConfig(config)
```

The resulting object has several useful properties. For example, you can view the ONNX
operator set that will be used during the export:

```python
>>> print(onnx_config.DEFAULT_ONNX_OPSET)
11
```

You can also view the outputs associated with the model as follows:

```python
>>> print(onnx_config.outputs)
OrderedDict([('last_hidden_state', {0: 'batch_size', 1: 'sequence_length'})])
```

Notice that the outputs property follows the same structure as the inputs; it returns an
`OrderedDict` mapping output names to their dynamic axes. The output structure is linked to the
task that the configuration is initialized with. By default, the ONNX
configuration is initialized with the `default` task, which corresponds to exporting a
model loaded with the `AutoModel` class. If you want to export a model for another task,
just provide a different value for the `task` argument when you initialize the ONNX
configuration. For example, to export BERT with a sequence
classification head, we could use:

```python
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("bert-base-uncased")
>>> onnx_config_for_seq_clf = BertOnnxConfig(config, task="text-classification")
>>> print(onnx_config_for_seq_clf.outputs)
OrderedDict([('logits', {0: 'batch_size'})])
```

<Tip>

Check out `BartOnnxConfig` for an advanced example.

</Tip>


## Registering the ONNX configuration in the TasksManager

The `TasksManager` is the main entry point to load a model given a name and a task,
and to get the proper configuration for a given (architecture, backend) pair. When adding support for the ONNX export,
registering the configuration in the `TasksManager` makes the export available in the command-line tool.

To do that, add an entry in the `_SUPPORTED_MODEL_TYPE` attribute:
- If the model is already supported for backends other than ONNX, it will already have an entry, so you only need to
add an `onnx` key specifying the name of the configuration class.
- Otherwise, you will have to add the whole entry.

For BERT, it looks as follows:

```python
    "bert": supported_tasks_mapping(
        "default",
        "fill-mask",
        "text-generation",
        "text-classification",
        "multiple-choice",
        "token-classification",
        "question-answering",
        onnx="BertOnnxConfig",
    )
```

## Exporting the model

Once you have implemented the ONNX configuration, the next step is to export the model.
Here we can use the `export()` function provided by the `optimum.exporters.onnx` package.
This function expects the ONNX configuration, along with the base model, and the path to save the exported file:

```python
>>> from pathlib import Path
>>> from optimum.exporters.tasks import TasksManager
>>> from optimum.exporters.onnx import export
>>> from transformers import AutoModel

>>> base_model = AutoModel.from_pretrained("bert-base-uncased")

>>> onnx_path = Path("model.onnx")
>>> onnx_config_constructor = TasksManager.get_exporter_config_constructor("onnx", base_model)
>>> onnx_config = onnx_config_constructor(base_model.config)

>>> onnx_inputs, onnx_outputs = export(base_model, onnx_config, onnx_path, onnx_config.DEFAULT_ONNX_OPSET)
```

The `onnx_inputs` and `onnx_outputs` returned by the `export()` function are lists of the keys defined in the [inputs](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfig.inputs)
and [outputs](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfig.outputs) properties of the configuration. Once the model is exported, you can test that it is well formed as follows:

```python
>>> import onnx

>>> onnx_model = onnx.load("model.onnx")
>>> onnx.checker.check_model(onnx_model)
```

<Tip>

If your model is larger than 2GB, you will see that many additional files are created during the export. This is
_expected_ because ONNX uses [Protocol Buffers](https://developers.google.com/protocol-buffers/) to store the model
and these have a size limit of 2GB. See the [ONNX documentation](https://github.com/onnx/onnx/blob/master/docs/ExternalData.md)
for instructions on how to load models with external data.

</Tip>

## Validating the model outputs

The final step is to validate that the outputs from the base and exported model agree within some absolute tolerance.
Here we can use the `validate_model_outputs()` function provided by the `optimum.exporters.onnx` package:

```python
>>> from optimum.exporters.onnx import validate_model_outputs

>>> validate_model_outputs(
...     onnx_config, base_model, onnx_path, onnx_outputs, onnx_config.ATOL_FOR_VALIDATION
... )
```

## Contributing the new configuration to 🤗 Optimum

Now that support for the architecture has been implemented and validated, there are two things left:
1. Add your model architecture to the tests in `tests/exporters/test_onnx_export.py`
2. Create a PR on the [`optimum` repo](https://github.com/huggingface/optimum)

Thanks for your contribution!

### Configuration classes for ONNX exports
https://huggingface.co/docs/optimum/v0.0.1/onnx/package_reference/configuration.md

# Configuration classes for ONNX exports

Exporting a model to ONNX involves specifying:
1. The input names.
2. The output names.
3. The dynamic axes, i.e. the input dimensions that can change dynamically at runtime (e.g. batch size or sequence length).
All other axes will be treated as static, and hence fixed at runtime.
4. Dummy inputs to trace the model. This is needed in PyTorch to record the computational graph and convert it to ONNX.
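As a concrete, framework-free sketch, the dynamic axes for a typical text model can be written as a plain mapping from input name to `{axis index: symbolic name}` (the names below follow the BERT example elsewhere in this guide):

```python
# Dynamic axes for a typical text encoder: axis 0 (batch size) and axis 1
# (sequence length) can vary at runtime; any axis not listed stays static.
dynamic_axes = {
    "input_ids": {0: "batch_size", 1: "sequence_length"},
    "attention_mask": {0: "batch_size", 1: "sequence_length"},
}

# Axis 0 of "input_ids" is dynamic and carries the symbolic name "batch_size".
print(dynamic_axes["input_ids"][0])  # batch_size
```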

Since this data depends on the choice of model and task, we represent it in terms of _configuration classes_. Each configuration class is associated with
a specific model architecture, and follows the naming convention `ArchitectureNameOnnxConfig`. For instance, the configuration which specifies the ONNX
export of BERT models is `BertOnnxConfig`.

Since many architectures share similar properties for their ONNX configuration, 🤗 Optimum adopts a 3-level class hierarchy:
1. Abstract and generic base classes. These handle all the fundamental features, while being agnostic to the modality (text, image, audio, etc).
2. Middle-end classes. These are aware of the modality, but multiple can exist for the same modality depending on the inputs they support.
They specify which input generators should be used for the dummy inputs, but remain model-agnostic.
3. Model-specific classes like the `BertOnnxConfig` mentioned above. These are the ones actually used to export models.
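The three levels can be sketched with plain classes (names suffixed with `Sketch` to make clear these are simplified stand-ins, not the real classes in `optimum.exporters.onnx`):

```python
# Simplified sketch of the 3-level class hierarchy; the real classes carry
# much more logic (dummy input generation, validation, axes handling, ...).

class OnnxConfigSketch:
    # Level 1: abstract and generic, modality-agnostic.
    DUMMY_INPUT_GENERATOR_CLASSES = ()


class TextEncoderOnnxConfigSketch(OnnxConfigSketch):
    # Level 2: middle-end, aware of the text modality; declares which dummy
    # input generators to use but stays model-agnostic.
    DUMMY_INPUT_GENERATOR_CLASSES = ("DummyTextInputGenerator",)


class BertOnnxConfigSketch(TextEncoderOnnxConfigSketch):
    # Level 3: model-specific; the class actually used to export a model.
    ATOL_FOR_VALIDATION = 1e-4


# The model-specific class inherits everything from the levels above.
assert issubclass(BertOnnxConfigSketch, OnnxConfigSketch)
```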


## Base classes[[optimum.exporters.onnx.OnnxConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.OnnxConfig</name><anchor>optimum.exporters.onnx.OnnxConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/base.py#L92</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>inputs</name><anchor>optimum.exporters.onnx.OnnxConfig.inputs</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/base.py#L164</source><parameters>[]</parameters><rettype>`Dict[str, Dict[int, str]]`</rettype><retdesc>A mapping of each input name to a mapping of axis position to the axes symbolic name.</retdesc></docstring>

Dict containing the axis definition of the input tensors to provide to the model.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>outputs</name><anchor>optimum.exporters.onnx.OnnxConfig.outputs</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/base.py#L175</source><parameters>[]</parameters><rettype>`Dict[str, Dict[int, str]]`</rettype><retdesc>A mapping of each output name to a mapping of axis position to the axes symbolic name.</retdesc></docstring>

Dict containing the axis definition of the output tensors expected from the model.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_dummy_inputs</name><anchor>optimum.exporters.onnx.OnnxConfig.generate_dummy_inputs</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/base.py#L223</source><parameters>[{"name": "framework", "val": ": str = 'pt'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **framework** (`str`, defaults to `"pt"`) --
  The framework for which to create the dummy inputs.
- **batch_size** (`int`, defaults to 2) --
  The batch size to use in the dummy inputs.
- **sequence_length** (`int`, defaults to 16) --
  The sequence length to use in the dummy inputs.
- **num_choices** (`int`, defaults to 4) --
  The number of candidate answers provided for multiple choice task.
- **image_width** (`int`, defaults to 64) --
  The width to use in the dummy inputs for vision tasks.
- **image_height** (`int`, defaults to 64) --
  The height to use in the dummy inputs for vision tasks.
- **num_channels** (`int`, defaults to 3) --
  The number of channels to use in the dummy inputs for vision tasks.
- **feature_size** (`int`, defaults to 80) --
  The number of features to use in the dummy inputs for audio tasks in case it is not raw audio.
  This is for example the number of STFT bins or MEL bins.
- **nb_max_frames** (`int`, defaults to 3000) --
  The number of frames to use in the dummy inputs for audio tasks in case the input is not raw audio.
- **audio_sequence_length** (`int`, defaults to 16000) --
  The number of samples to use in the dummy inputs for audio tasks in case the input is raw audio.</paramsdesc><paramgroups>0</paramgroups><rettype>`Dict[str, [tf.Tensor, torch.Tensor]]`</rettype><retdesc>A dictionary mapping the input names to dummy tensors in the proper framework format.</retdesc></docstring>

Generates the dummy inputs necessary for tracing the model. If not explicitly specified, default input shapes are used.








</div></div>
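To make the shape keywords above concrete, here is a schematic, framework-free sketch of what dummy text-input generation does (`generate_dummy_text_inputs` is a hypothetical helper; the real `DummyInputGenerator`s return framework tensors such as `torch.Tensor`, not nested lists):

```python
# Schematic sketch of dummy text-input generation: only the shapes matter
# for tracing, so every position is filled with the same valid token id.
def generate_dummy_text_inputs(batch_size=2, sequence_length=16):
    input_ids = [[1] * sequence_length for _ in range(batch_size)]
    attention_mask = [[1] * sequence_length for _ in range(batch_size)]
    return {"input_ids": input_ids, "attention_mask": attention_mask}


dummy = generate_dummy_text_inputs(batch_size=2, sequence_length=16)
print(len(dummy["input_ids"]), len(dummy["input_ids"][0]))  # 2 16
```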

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.OnnxConfigWithPast</name><anchor>optimum.exporters.onnx.OnnxConfigWithPast</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/base.py#L426</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}, {"name": "use_past", "val": ": bool = False"}, {"name": "use_past_in_inputs", "val": ": bool = False"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}]</parameters></docstring>
Inherits from [OnnxConfig](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfig). A base class to handle the ONNX configuration of decoder-only models.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_past_key_values</name><anchor>optimum.exporters.onnx.OnnxConfigWithPast.add_past_key_values</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/base.py#L551</source><parameters>[{"name": "inputs_or_outputs", "val": ": dict[str, dict[int, str]]"}, {"name": "direction", "val": ": str"}]</parameters><paramsdesc>- **inputs_or_outputs** (`Dict[str, Dict[int, str]]`) --
  The mapping to fill.
- **direction** (`str`) --
  Either `"inputs"` or `"outputs"`; specifies whether `inputs_or_outputs` is the input mapping or the
  output mapping, which matters for axes naming.</paramsdesc><paramgroups>0</paramgroups></docstring>
Fills the `inputs_or_outputs` mapping with the `past_key_values` dynamic axes, considering the direction.




</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.OnnxSeq2SeqConfigWithPast</name><anchor>optimum.exporters.onnx.OnnxSeq2SeqConfigWithPast</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/base.py#L623</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}, {"name": "use_past", "val": ": bool = False"}, {"name": "use_past_in_inputs", "val": ": bool = False"}, {"name": "behavior", "val": ": ConfigBehavior = <ConfigBehavior.MONOLITH: 'monolith'>"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}]</parameters></docstring>
Inherits from [OnnxConfigWithPast](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfigWithPast). A base class to handle the ONNX configuration of encoder-decoder models.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>with_behavior</name><anchor>optimum.exporters.onnx.OnnxSeq2SeqConfigWithPast.with_behavior</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/base.py#L655</source><parameters>[{"name": "behavior", "val": ": str | ConfigBehavior"}, {"name": "use_past", "val": ": bool = False"}, {"name": "use_past_in_inputs", "val": ": bool = False"}]</parameters><paramsdesc>- **behavior** (`ConfigBehavior`) --
  The behavior to use for the new instance.
- **use_past** (`bool`, defaults to `False`) --
  Whether or not the ONNX config to instantiate is for a model using KV cache.
- **use_past_in_inputs** (`bool`, defaults to `False`) --
  Whether the KV cache is to be passed as an input to the ONNX model.</paramsdesc><paramgroups>0</paramgroups><rettype>`OnnxSeq2SeqConfigWithPast`</rettype></docstring>
Creates a copy of the current OnnxConfig but with a different `ConfigBehavior` and `use_past` value.








</div></div>
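The encoder/decoder split that `with_behavior` enables can be sketched as follows (a hypothetical, simplified model of the mechanism; the real method also wires up past-key-values handling):

```python
import dataclasses
import enum


class ConfigBehavior(enum.Enum):
    MONOLITH = "monolith"  # export encoder + decoder as a single ONNX file
    ENCODER = "encoder"    # export only the encoder part
    DECODER = "decoder"    # export only the decoder part


@dataclasses.dataclass
class Seq2SeqConfigSketch:
    behavior: ConfigBehavior = ConfigBehavior.MONOLITH
    use_past: bool = False

    def with_behavior(self, behavior, use_past=False):
        # Return a copy with the new behavior instead of mutating in place,
        # so one base config can produce both the encoder and decoder exports.
        if isinstance(behavior, str):
            behavior = ConfigBehavior(behavior)
        return dataclasses.replace(self, behavior=behavior, use_past=use_past)


base = Seq2SeqConfigSketch()
decoder_cfg = base.with_behavior("decoder", use_past=True)
print(decoder_cfg.behavior, decoder_cfg.use_past)
```

Returning a copy is what allows exporting the encoder and decoder to separate ONNX files from a single configuration.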

## Middle-end classes

### Text[[optimum.exporters.onnx.TextEncoderOnnxConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.TextEncoderOnnxConfig</name><anchor>optimum.exporters.onnx.TextEncoderOnnxConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/config.py#L47</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}]</parameters></docstring>
Handles encoder-based text architectures.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.TextDecoderOnnxConfig</name><anchor>optimum.exporters.onnx.TextDecoderOnnxConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/config.py#L53</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}, {"name": "use_past", "val": ": bool = False"}, {"name": "use_past_in_inputs", "val": ": bool = False"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}]</parameters></docstring>
Handles decoder-based text architectures.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.TextSeq2SeqOnnxConfig</name><anchor>optimum.exporters.onnx.TextSeq2SeqOnnxConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/config.py#L157</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}, {"name": "use_past", "val": ": bool = False"}, {"name": "use_past_in_inputs", "val": ": bool = False"}, {"name": "behavior", "val": ": ConfigBehavior = <ConfigBehavior.MONOLITH: 'monolith'>"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}]</parameters></docstring>
Handles encoder-decoder-based text architectures.

</div>

### Vision[[optimum.exporters.onnx.config.VisionOnnxConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.config.VisionOnnxConfig</name><anchor>optimum.exporters.onnx.config.VisionOnnxConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/config.py#L206</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}]</parameters></docstring>
Handles vision architectures.

</div>

### Multi-modal[[optimum.exporters.onnx.config.TextAndVisionOnnxConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.exporters.onnx.config.TextAndVisionOnnxConfig</name><anchor>optimum.exporters.onnx.config.TextAndVisionOnnxConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/config.py#L212</source><parameters>[{"name": "config", "val": ": PretrainedConfig"}, {"name": "task", "val": ": str = 'feature-extraction'"}, {"name": "preprocessors", "val": ": list[Any] | None = None"}, {"name": "int_dtype", "val": ": str = 'int64'"}, {"name": "float_dtype", "val": ": str = 'fp32'"}]</parameters></docstring>
Handles multi-modal text and vision architectures.

</div>

### Export functions
https://huggingface.co/docs/optimum/v0.0.1/onnx/package_reference/export.md

# Export functions

You can export PyTorch models to ONNX.
There is an export function for PyTorch models, [export_pytorch()](/docs/optimum/v0.0.1/en/onnx/package_reference/export#optimum.exporters.onnx.convert.export_pytorch),
but the recommended entry point is the main export function `~optimum.exporters.main_export`,
which takes care of using the proper exporting function for the available framework,
checks that the exported model is valid, and provides extended options to run optimizations on the exported model.

## Main functions[[optimum.exporters.onnx.main_export]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.exporters.onnx.main_export</name><anchor>optimum.exporters.onnx.main_export</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/__main__.py#L57</source><parameters>[{"name": "model_name_or_path", "val": ": str"}, {"name": "output", "val": ": str | Path"}, {"name": "task", "val": ": str = 'auto'"}, {"name": "opset", "val": ": int | None = None"}, {"name": "device", "val": ": str = 'cpu'"}, {"name": "dtype", "val": ": str | None = None"}, {"name": "optimize", "val": ": str | None = None"}, {"name": "monolith", "val": ": bool = False"}, {"name": "no_post_process", "val": ": bool = False"}, {"name": "framework", "val": ": str | None = 'pt'"}, {"name": "atol", "val": ": float | None = None"}, {"name": "pad_token_id", "val": ": int | None = None"}, {"name": "subfolder", "val": ": str = ''"}, {"name": "revision", "val": ": str = 'main'"}, {"name": "force_download", "val": ": bool = False"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "cache_dir", "val": ": str = '/home/runner/.cache/huggingface/hub'"}, {"name": "token", "val": ": bool | str | None = None"}, {"name": "do_validation", "val": ": bool = True"}, {"name": "model_kwargs", "val": ": dict[str, Any] | None = None"}, {"name": "custom_onnx_configs", "val": ": dict[str, OnnxConfig] | None = None"}, {"name": "fn_get_submodels", "val": ": Callable | None = None"}, {"name": "use_subprocess", "val": ": bool = False"}, {"name": "_variant", "val": ": str = 'default'"}, {"name": "library_name", "val": ": str | None = None"}, {"name": "no_dynamic_axes", "val": ": bool = False"}, {"name": "do_constant_folding", "val": ": bool = True"}, {"name": "slim", "val": ": bool = False"}, {"name": "**kwargs_shapes", "val": ""}]</parameters><paramsdesc></paramsdesc><paramsdesc1title>Required parameters</paramsdesc1title><paramsdesc1>

- **model_name_or_path** (`str`) --
  Model ID on huggingface.co or path on disk to the model repository to export. Example: `model_name_or_path="BAAI/bge-m3"` or `model_name_or_path="/path/to/model_folder"`.
- **output** (`Union[str, Path]`) --
  Path indicating the directory where to store the generated ONNX model.

</paramsdesc1><paramsdesc2title>Optional parameters</paramsdesc2title><paramsdesc2>

- **task** (`Optional[str]`, defaults to `None`) --
  The task to export the model for. If not specified, the task will be auto-inferred based on the model. For decoder models,
  use `xxx-with-past` to export the model using past key values in the decoder.
- **opset** (`Optional[int]`, defaults to `None`) --
  If specified, ONNX opset version to export the model with. Otherwise, the default opset for the given model architecture
  will be used.
- **device** (`str`, defaults to `"cpu"`) --
  The device to use for the export.
- **dtype** (`Optional[str]`, defaults to `None`) --
  The floating point precision to use for the export. Supported options: `"fp32"` (float32), `"fp16"` (float16), `"bf16"` (bfloat16). Defaults to `"fp32"`.
- **optimize** (`Optional[str]`, defaults to `None`) --
  Allows running ONNX Runtime optimizations directly during the export. Some of these optimizations are specific to
  ONNX Runtime, and the resulting ONNX will not be usable with other runtimes such as OpenVINO or TensorRT.
  Available options: `"O1", "O2", "O3", "O4"`. Reference: [AutoOptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.AutoOptimizationConfig)
- **monolith** (`bool`, defaults to `False`) --
  Forces to export the model as a single ONNX file.
- **no_post_process** (`bool`, defaults to `False`) --
  Allows to disable any post-processing done by default on the exported ONNX models.
- **framework** (`Optional[str]`, defaults to `None`) --
  The framework to use for the ONNX export (`"pt"`). If not provided, will attempt to automatically detect the framework for the checkpoint.
- **atol** (`Optional[float]`, defaults to `None`) --
  If specified, the absolute difference tolerance when validating the model. Otherwise, the default atol for the model will be used.
- **cache_dir** (`Optional[str]`, defaults to `None`) --
  Path indicating where to store cache. The default Hugging Face cache path will be used by default.
- **trust_remote_code** (`bool`, defaults to `False`) --
  Allows to use custom code for the modeling hosted in the model repository. This option should only be set for repositories
  you trust and in which you have read the code, as it will execute on your local machine arbitrary code present in the
  model repository.
- **pad_token_id** (`Optional[int]`, defaults to `None`) --
  This is needed by some models, for some tasks. If not provided, will attempt to use the tokenizer to guess it.
- **subfolder** (`str`, defaults to `""`) --
  In case the relevant files are located inside a subfolder of the model repo either locally or on huggingface.co, you can
  specify the folder name here.
- **revision** (`str`, defaults to `"main"`) --
  Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
- **force_download** (`bool`, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **local_files_only** (`Optional[bool]`, defaults to `False`) --
  Whether or not to only look at local files (i.e., do not try to download the model).
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).
- **do_validation** (`bool`, defaults to `True`) --
  Whether or not to validate the exported ONNX model by running inference on it.
- **model_kwargs** (`Optional[Dict[str, Any]]`, defaults to `None`) --
  Experimental usage: keyword arguments to pass to the model during
  the export. This argument should be used along the `custom_onnx_configs` argument
  in case, for example, the model inputs/outputs are changed (for example, if
  `model_kwargs={"output_attentions": True}` is passed).
- **custom_onnx_configs** (`Optional[Dict[str, OnnxConfig]]`, defaults to `None`) --
  Experimental usage: override the default ONNX config used for the given model. This argument may be useful for advanced users that desire a finer-grained control on the export. An example is available [here](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model).
- **fn_get_submodels** (`Optional[Callable]`, defaults to `None`) --
  Experimental usage: override the default submodels used during the export. This is
  especially useful when exporting a custom architecture that needs to split the ONNX export (e.g. encoder-decoder). If unspecified for custom models, Optimum will try to use the default submodels for the given task, with no guarantee of success.
- **use_subprocess** (`bool`, defaults to `False`) --
  Do the ONNX exported model validation in subprocesses. This is especially useful when
  exporting on CUDA device, where ORT does not release memory at inference session
  destruction. When set to `True`, the `main_export` call should be guarded in
  `if __name__ == "__main__":` block.
- **_variant** (`str`, defaults to `default`) --
  Specify the variant of the ONNX export to use.
- **library_name** (`Optional[str]`, defaults to `None`) --
  The library of the model (`"transformers"` or `"diffusers"` or `"timm"` or `"sentence_transformers"`). If not provided, will attempt to automatically detect the library name for the checkpoint.
- **no_dynamic_axes** (`bool`, defaults to `False`) --
  If `True`, disables the use of dynamic axes during ONNX export.
- **do_constant_folding** (`bool`, defaults to `True`) --
  PyTorch-specific argument. If `True`, the PyTorch ONNX export will fold constants into adjacent nodes, if possible.
- **slim** (`bool`, defaults to `False`) --
  PyTorch-specific argument. If `True`, use onnxslim to optimize the ONNX model.
- ****kwargs_shapes** (`Dict`) --
  Shapes to use during inference. This argument allows to override the default shapes used during the ONNX export.</paramsdesc2><paramgroups>2</paramgroups></docstring>
Full-suite ONNX export function, exporting **from a model ID on Hugging Face Hub or a local model repository**.



<ExampleCodeBlock anchor="optimum.exporters.onnx.main_export.example">

Example usage:
```python
>>> from optimum.exporters.onnx import main_export

>>> main_export("gpt2", output="gpt2_onnx/")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.exporters.onnx.onnx_export_from_model</name><anchor>optimum.exporters.onnx.onnx_export_from_model</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/convert.py#L801</source><parameters>[{"name": "model", "val": ": PreTrainedModel | DiffusionPipeline"}, {"name": "output", "val": ": str | Path"}, {"name": "opset", "val": ": int | None = None"}, {"name": "optimize", "val": ": str | None = None"}, {"name": "monolith", "val": ": bool = False"}, {"name": "no_post_process", "val": ": bool = False"}, {"name": "atol", "val": ": float | None = None"}, {"name": "do_validation", "val": ": bool = True"}, {"name": "model_kwargs", "val": ": dict[str, Any] | None = None"}, {"name": "custom_onnx_configs", "val": ": dict[str, OnnxConfig] | None = None"}, {"name": "fn_get_submodels", "val": ": Callable | None = None"}, {"name": "_variant", "val": ": str = 'default'"}, {"name": "preprocessors", "val": ": list | None = None"}, {"name": "device", "val": ": str = 'cpu'"}, {"name": "no_dynamic_axes", "val": ": bool = False"}, {"name": "task", "val": ": str | None = None"}, {"name": "use_subprocess", "val": ": bool = False"}, {"name": "do_constant_folding", "val": ": bool = True"}, {"name": "slim", "val": ": bool = False"}, {"name": "**kwargs_shapes", "val": ""}]</parameters><paramsdesc></paramsdesc><paramsdesc1title>Required parameters</paramsdesc1title><paramsdesc1>

- **model** (`Union["PreTrainedModel", "DiffusionPipeline"]`) --
  PyTorch model to export to ONNX.
- **output** (`Union[str, Path]`) --
  Path indicating the directory where to store the generated ONNX model.

</paramsdesc1><paramsdesc2title>Optional parameters</paramsdesc2title><paramsdesc2>

- **task** (`Optional[str]`, defaults to `None`) --
  The task to export the model for. If not specified, the task will be auto-inferred based on the model.
- **opset** (`Optional[int]`, defaults to `None`) --
  If specified, ONNX opset version to export the model with. Otherwise, the default opset for the given model architecture
  will be used.
- **device** (`str`, defaults to `"cpu"`) --
  The device to use for the export.
- **optimize** (`Optional[str]`, defaults to `None`) --
  Allows running ONNX Runtime optimizations directly during the export. Some of these optimizations are specific to
  ONNX Runtime, and the resulting ONNX will not be usable with other runtimes such as OpenVINO or TensorRT.
  Available options: `"O1", "O2", "O3", "O4"`. Reference: [AutoOptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.AutoOptimizationConfig)
- **monolith** (`bool`, defaults to `False`) --
  Forces the model to be exported as a single ONNX file.
- **no_post_process** (`bool`, defaults to `False`) --
  Disables any post-processing done by default on the exported ONNX models.
- **atol** (`Optional[float]`, defaults to `None`) --
  If specified, the absolute difference tolerance when validating the model. Otherwise, the default atol for the model will be used.
- **do_validation** (`bool`, defaults to `True`) --
  If `True`, the exported ONNX model will be validated against the original PyTorch model.
- **model_kwargs** (`Optional[Dict[str, Any]]`, defaults to `None`) --
  Experimental usage: keyword arguments to pass to the model during the export. This argument should be
  used along with the `custom_onnx_configs` argument when, for example, the model inputs/outputs are
  changed (e.g. if `model_kwargs={"output_attentions": True}` is passed).
- **custom_onnx_configs** (`Optional[Dict[str, OnnxConfig]]`, defaults to `None`) --
  Experimental usage: override the default ONNX config used for the given model. This argument may be useful for advanced users that desire a finer-grained control on the export. An example is available [here](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model).
- **fn_get_submodels** (`Optional[Callable]`, defaults to `None`) --
  Experimental usage: override the default submodels used during the export. This is
  especially useful when exporting a custom architecture that needs to be split into several ONNX files (e.g. encoder-decoder). If left unspecified for custom models, Optimum will try to use the default submodels for the given task, with no guarantee of success.
- **use_subprocess** (`bool`, defaults to `False`) --
  Runs the validation of the exported ONNX model in a subprocess. This is especially useful when
  exporting on a CUDA device, where ONNX Runtime does not release memory when an inference session
  is destroyed. When set to `True`, the `main_export` call should be guarded in an
  `if __name__ == "__main__":` block.
- **_variant** (`str`, defaults to `default`) --
  Specify the variant of the ONNX export to use.
- **preprocessors** (`Optional[List]`, defaults to `None`) --
  List of preprocessors to use for the ONNX export.
- **no_dynamic_axes** (bool, defaults to `False`) --
  If True, disables the use of dynamic axes during ONNX export.
- **do_constant_folding** (bool, defaults to `True`) --
  PyTorch-specific argument. If `True`, the PyTorch ONNX export will fold constants into adjacent nodes, if possible.
- **slim** (bool, defaults to `False`) --
  Use onnxslim to optimize the ONNX model.
- ****kwargs_shapes** (`Dict`) --
  Shapes to use during inference. This argument allows overriding the default shapes used during the ONNX export.</paramsdesc2><paramgroups>2</paramgroups></docstring>
Full-suite ONNX export function, exporting **from a pre-loaded PyTorch model**. This function is especially useful when one needs to modify the model before exporting to ONNX, for example by overriding a forward call.



<ExampleCodeBlock anchor="optimum.exporters.onnx.onnx_export_from_model.example">

Example usage:
```python
>>> from transformers import AutoModelForCausalLM
>>> from optimum.exporters.onnx import onnx_export_from_model

>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> # At this point, we could override some submodules, forward methods, weights, etc. from the model.

>>> onnx_export_from_model(model, output="gpt2_onnx/")
```

</ExampleCodeBlock>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.exporters.onnx.export</name><anchor>optimum.exporters.onnx.export</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/convert.py#L698</source><parameters>[{"name": "model", "val": ": PreTrainedModel | ModelMixin"}, {"name": "config", "val": ": OnnxConfig"}, {"name": "output", "val": ": Path"}, {"name": "opset", "val": ": int | None = None"}, {"name": "device", "val": ": str = 'cpu'"}, {"name": "input_shapes", "val": ": dict | None = None"}, {"name": "disable_dynamic_axes_fix", "val": ": bool | None = False"}, {"name": "dtype", "val": ": str | None = None"}, {"name": "no_dynamic_axes", "val": ": bool = False"}, {"name": "do_constant_folding", "val": ": bool = True"}, {"name": "model_kwargs", "val": ": dict[str, Any] | None = None"}]</parameters><paramsdesc>- **model** (`PreTrainedModel` or `ModelMixin`) --
  The model to export.
- **config** ([OnnxConfig](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfig)) --
  The ONNX configuration associated with the exported model.
- **output** (`Path`) --
  Directory to store the exported ONNX model.
- **opset** (`Optional[int]`, defaults to `None`) --
  The version of the ONNX operator set to use.
- **device** (`Optional[str]`, defaults to `"cpu"`) --
  The device on which the ONNX model will be exported. Either `cpu` or `cuda`. Only PyTorch is supported for
  export on CUDA devices.
- **input_shapes** (`Optional[Dict]`, defaults to `None`) --
  If specified, allows using specific shapes for the example input provided to the ONNX exporter.
- **disable_dynamic_axes_fix** (`Optional[bool]`, defaults to `False`) --
  Whether to disable the default dynamic axes fixing.
- **dtype** (`Optional[str]`, defaults to `None`) --
  Data type to remap the model inputs to. PyTorch-only. Only `fp16` is supported.
- **no_dynamic_axes** (bool, defaults to `False`) --
  If True, disables the use of dynamic axes during ONNX export.
- **do_constant_folding** (bool, defaults to `True`) --
  PyTorch-specific argument. If `True`, the PyTorch ONNX export will fold constants into adjacent nodes, if possible.
- **model_kwargs** (`Optional[Dict[str, Any]]`, defaults to `None`) --
  Experimental usage: keyword arguments to pass to the model during the export. This argument should be
  used along with the `custom_onnx_config` argument when, for example, the model inputs/outputs are
  changed (e.g. if `model_kwargs={"output_attentions": True}` is passed).</paramsdesc><paramgroups>0</paramgroups><rettype>`Tuple[List[str], List[str]]`</rettype><retdesc>A tuple with an ordered list of the model's inputs, and the named outputs from
the ONNX configuration.</retdesc></docstring>
Exports a PyTorch model to an ONNX Intermediate Representation.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.exporters.onnx.convert.export_pytorch</name><anchor>optimum.exporters.onnx.convert.export_pytorch</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/convert.py#L462</source><parameters>[{"name": "model", "val": ": PreTrainedModel | ModelMixin"}, {"name": "config", "val": ": OnnxConfig"}, {"name": "opset", "val": ": int"}, {"name": "output", "val": ": Path"}, {"name": "device", "val": ": str = 'cpu'"}, {"name": "input_shapes", "val": ": dict | None = None"}, {"name": "no_dynamic_axes", "val": ": bool = False"}, {"name": "do_constant_folding", "val": ": bool = True"}, {"name": "model_kwargs", "val": ": dict[str, Any] | None = None"}]</parameters><paramsdesc>- **model** (`PreTrainedModel`) --
  The model to export.
- **config** ([OnnxConfig](/docs/optimum/v0.0.1/en/onnx/package_reference/configuration#optimum.exporters.onnx.OnnxConfig)) --
  The ONNX configuration associated with the exported model.
- **opset** (`int`) --
  The version of the ONNX operator set to use.
- **output** (`Path`) --
  Path to save the exported ONNX file to.
- **device** (`str`, defaults to `"cpu"`) --
  The device on which the ONNX model will be exported. Either `cpu` or `cuda`. Only PyTorch is supported for
  export on CUDA devices.
- **input_shapes** (`Optional[Dict]`, defaults to `None`) --
  If specified, allows using specific shapes for the example input provided to the ONNX exporter.
- **no_dynamic_axes** (bool, defaults to `False`) --
  If True, disables the use of dynamic axes during ONNX export.
- **do_constant_folding** (bool, defaults to `True`) --
  PyTorch-specific argument. If `True`, the PyTorch ONNX export will fold constants into adjacent nodes, if possible.
- **model_kwargs** (`Optional[Dict[str, Any]]`, defaults to `None`) --
  Experimental usage: keyword arguments to pass to the model during the export. This argument should be
  used along with the `custom_onnx_config` argument when, for example, the model inputs/outputs are
  changed (e.g. if `model_kwargs={"output_attentions": True}` is passed).</paramsdesc><paramgroups>0</paramgroups><rettype>`Tuple[List[str], List[str]]`</rettype><retdesc>A tuple with an ordered list of the model's inputs, and the named outputs from
the ONNX configuration.</retdesc></docstring>
Exports a PyTorch model to an ONNX Intermediate Representation.








</div>

## Utility functions[[optimum.exporters.utils.check_dummy_inputs_are_allowed]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.exporters.utils.check_dummy_inputs_are_allowed</name><anchor>optimum.exporters.utils.check_dummy_inputs_are_allowed</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/utils.py#L621</source><parameters>[{"name": "model", "val": ": typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('ModelMixin')]"}, {"name": "dummy_input_names", "val": ": typing.Iterable[str]"}]</parameters><paramsdesc>- **model** (`PreTrainedModel` or `ModelMixin`) --
  The model instance.
- **dummy_input_names** (`Iterable[str]`) --
  The names of the dummy inputs to check against the model's signature.</paramsdesc><paramgroups>0</paramgroups></docstring>

Checks that the dummy inputs from the ONNX config are a subset of the allowed inputs for `model`.
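As a rough, illustrative sketch of this check (not the actual implementation), the idea is to compare the dummy input names against the parameters of the model's forward signature using the standard library's `inspect` module:

```python
import inspect


def check_dummy_inputs_are_allowed(forward_fn, dummy_input_names):
    """Illustrative sketch: raise if any dummy input name is not a
    parameter of the model's forward signature."""
    forward_parameters = set(inspect.signature(forward_fn).parameters)
    unallowed = set(dummy_input_names) - forward_parameters
    if unallowed:
        raise ValueError(
            f"Config dummy inputs are not a subset of the model inputs: "
            f"{unallowed} vs {forward_parameters}"
        )


# A stand-in for a model's forward method
def forward(input_ids=None, attention_mask=None, token_type_ids=None):
    ...


check_dummy_inputs_are_allowed(forward, ["input_ids", "attention_mask"])  # passes silently
```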



</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.exporters.onnx.validate_model_outputs</name><anchor>optimum.exporters.onnx.validate_model_outputs</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/exporters/onnx/convert.py#L166</source><parameters>[{"name": "config", "val": ": OnnxConfig"}, {"name": "reference_model", "val": ": PreTrainedModel | ModelMixin"}, {"name": "onnx_model", "val": ": Path"}, {"name": "onnx_named_outputs", "val": ": list[str]"}, {"name": "atol", "val": ": float | None = None"}, {"name": "input_shapes", "val": ": dict | None = None"}, {"name": "device", "val": ": str = 'cpu'"}, {"name": "use_subprocess", "val": ": bool | None = True"}, {"name": "model_kwargs", "val": ": dict[str, Any] | None = None"}]</parameters><paramsdesc>- **config** (`~OnnxConfig`) --
  The configuration used to export the model.
- **reference_model** (`Union["PreTrainedModel", "ModelMixin"]`) --
  The model used for the export.
- **onnx_model** (`Path`) --
  The path to the exported model.
- **onnx_named_outputs** (`List[str]`) --
  The names of the outputs to check.
- **atol** (`Optional[float]`, defaults to `None`) --
  The absolute tolerance allowed on the difference between the outputs of the reference and the exported model.
- **input_shapes** (`Optional[Dict]`, defaults to `None`) --
  If specified, allows using specific shapes to validate the ONNX model with.
- **device** (`str`, defaults to `"cpu"`) --
  The device on which the ONNX model will be validated. Either `cpu` or `cuda`. Validation on a CUDA device is supported only for PyTorch.
- **use_subprocess** (`Optional[bool]`, defaults to `True`) --
  Launch validation of each exported model in a subprocess.
- **model_kwargs** (`Optional[Dict[str, Any]]`, defaults to `None`) --
  Experimental usage: keyword arguments to pass to the model during
  the export and validation.</paramsdesc><paramgroups>0</paramgroups><raises>- ``ValueError`` -- If the outputs shapes or values do not match between the reference and the exported model.</raises><raisederrors>``ValueError``</raisederrors></docstring>
Validates the export by checking that the outputs from both the reference and the exported model match.








</div>

### Overview
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/overview.md

# Overview

🤗 Optimum provides an integration with ONNX Runtime, a cross-platform, high-performance engine for Open Neural Network Exchange (ONNX) models.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/pipelines"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum to solve real-world problems.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/onnx"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">High-level explanations for building a better understanding about important topics such as quantization and graph optimization.</p>
   </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/modeling"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how the ONNX Runtime classes and methods of 🤗 Optimum work.</p>
    </a>
  </div>
</div>

### Quickstart
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/quickstart.md

# Quickstart

At its core, 🤗 Optimum uses _configuration objects_ to define parameters for optimization on different accelerators. These objects are then used to instantiate dedicated _optimizers_, _quantizers_, and _pruners_.

Before applying quantization or optimization, we first need to export our model to the ONNX format.

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_directory = "tmp/onnx/"
>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
```

Let's see now how we can apply dynamic quantization with ONNX Runtime:

```python
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig
>>> from optimum.onnxruntime import ORTQuantizer
>>> # Define the quantization methodology
>>> qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)
>>> # Apply dynamic quantization on the model
>>> quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)
```

In this example, we've quantized a model from the Hugging Face Hub, but it could also be a path to a local model directory. The result from applying the `quantize()` method is a `model_quantized.onnx` file that can be used to run inference. Here's an example of how to load an ONNX Runtime model and generate predictions with it:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import pipeline, AutoTokenizer
>>> model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_quantized.onnx")
>>> tokenizer = AutoTokenizer.from_pretrained(save_directory)
>>> cls_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)
>>> results = cls_pipeline("I love burritos!")
```

Similarly, you can apply static quantization by simply setting `is_static` to `True` when instantiating the `QuantizationConfig` object:

```python
>>> qconfig = AutoQuantizationConfig.arm64(is_static=True, per_channel=False)
```

Static quantization relies on feeding batches of data through the model to estimate the activation quantization parameters ahead of inference time. To support this, 🤗 Optimum allows you to provide a _calibration dataset_. The calibration dataset can be a simple `Dataset` object from the 🤗 Datasets library, or any dataset that's hosted on the Hugging Face Hub. For this example, we'll pick the [`sst2`](https://huggingface.co/datasets/glue/viewer/sst2/test) dataset that the model was originally trained on:

```python
>>> from functools import partial
>>> from optimum.onnxruntime.configuration import AutoCalibrationConfig

# Define the processing function to apply to each example after loading the dataset
>>> def preprocess_fn(ex, tokenizer):
...     return tokenizer(ex["sentence"])

>>> # Create the calibration dataset
>>> calibration_dataset = quantizer.get_calibration_dataset(
...     "glue",
...     dataset_config_name="sst2",
...     preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
...     num_samples=50,
...     dataset_split="train",
... )
>>> # Create the calibration configuration containing the parameters related to calibration.
>>> calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)
>>> # Perform the calibration step: computes the activations quantization ranges
>>> ranges = quantizer.fit(
...     dataset=calibration_dataset,
...     calibration_config=calibration_config,
...     operators_to_quantize=qconfig.operators_to_quantize,
... )
>>> # Apply static quantization on the model
>>> quantizer.quantize(
...     save_dir=save_directory,
...     calibration_tensors_range=ranges,
...     quantization_config=qconfig,
... )
```

As a final example, let's take a look at applying _graph optimizations_ techniques such as operator fusion and constant folding. As before, we load a configuration object, but this time by setting the optimization level instead of the quantization approach:

```python
>>> from optimum.onnxruntime.configuration import OptimizationConfig

>>> # Here the optimization level is selected to be 1, enabling basic optimizations such as redundant node eliminations and constant folding. Higher optimization level will result in a hardware dependent optimized graph.
>>> optimization_config = OptimizationConfig(optimization_level=1)
```

Next, we load an _optimizer_ to apply these optimizations to our model:

```python
>>> from optimum.onnxruntime import ORTOptimizer

>>> optimizer = ORTOptimizer.from_pretrained(ort_model)

>>> # Optimize the model
>>> optimizer.optimize(save_dir=save_directory, optimization_config=optimization_config)
```

And that's it - the model is now optimized and ready for inference! As you can see, the process is similar in each case:

1. Define the optimization / quantization strategies via an `OptimizationConfig` / `QuantizationConfig` object
2. Instantiate an `ORTQuantizer` or `ORTOptimizer` class
3. Apply the `quantize()` or `optimize()` method
4. Run inference

Check out the [`examples`](https://github.com/huggingface/optimum/tree/main/examples) directory for more sophisticated usage.

Happy optimising 🤗!

### Accelerated inference on AMD GPUs supported by ROCm
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/amdgpu.md

# Accelerated inference on AMD GPUs supported by ROCm

By default, ONNX Runtime runs inference on CPU devices. However, it is possible to place supported operations on an AMD Instinct GPU, while leaving any unsupported ones on CPU. In most cases, this allows costly operations to be placed on the GPU, significantly accelerating inference.

Our testing involved AMD Instinct GPUs; for specific GPU compatibility, please refer to the official list of supported GPUs available [here](https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html).

This guide will show you how to run inference with the `ROCMExecutionProvider`, the execution provider that ONNX Runtime supports for AMD GPUs.

## Installation
The following setup installs ONNX Runtime with the ROCm Execution Provider, built against ROCm 6.0.

#### 1 ROCm Installation

Refer to the [ROCm installation guide](https://rocm.docs.amd.com/en/latest/deploy/linux/index.html) to install ROCm 6.0.

#### 2 Installing `onnxruntime-rocm`

Please use the provided [Dockerfile](https://github.com/huggingface/optimum-amd/blob/main/docker/onnx-runtime-amd-gpu/Dockerfile) example or install locally from source, since pip wheels are currently unavailable.

**Docker Installation:**

```bash
docker build -f Dockerfile -t ort/rocm .
```

**Local Installation Steps:**

##### 2.1 PyTorch with ROCm Support
The Optimum ONNX Runtime integration relies on functionalities of Transformers that require PyTorch. For now, we recommend using PyTorch compiled against ROCm 6.0, which can be installed by following the [PyTorch installation guide](https://pytorch.org/get-started/locally/):

```bash
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0
# Use 'rocm/pytorch:rocm6.0.2_ubuntu22.04_py3.10_pytorch_2.1.2' as the preferred base image when using Docker for PyTorch installation.
```

##### 2.2 ONNX Runtime with ROCm Execution Provider

```bash
# pre-requisites
pip install -U pip
pip install cmake onnx
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install ONNXRuntime from source
git clone --single-branch --branch main --recursive https://github.com/Microsoft/onnxruntime onnxruntime
cd onnxruntime

./build.sh --config Release --build_wheel --allow_running_as_root --update --build --parallel --cmake_extra_defines CMAKE_HIP_ARCHITECTURES=gfx90a,gfx942 ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) --use_rocm --rocm_home=/opt/rocm
pip install build/Linux/Release/dist/*
```
Note: These instructions build ORT for `MI210/MI250/MI300` GPUs. To support other architectures, update `CMAKE_HIP_ARCHITECTURES` in the build command.

<Tip>
To avoid conflicts between `onnxruntime` and `onnxruntime-rocm`, make sure the package `onnxruntime` is not installed by running `pip uninstall onnxruntime` prior to installing `onnxruntime-rocm`.
</Tip>

### Checking that the ROCm installation was successful

Before going further, run the following sample code to check whether the install was successful:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...   "philschmid/tiny-bert-sst2-distilled",
...   export=True,
...   provider="ROCMExecutionProvider",
... )

>>> tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
>>> inputs = tokenizer("expectations were low, actual enjoyment was high", return_tensors="pt", padding=True)

>>> outputs = ort_model(**inputs)
>>> assert ort_model.providers == ["ROCMExecutionProvider", "CPUExecutionProvider"]
```

If this code runs without error, congratulations, the installation was successful! If you encounter the following (or a similar) error,

```
ValueError: Asked to use ROCMExecutionProvider as an ONNX Runtime execution provider, but the available execution providers are ['CPUExecutionProvider'].
```

then something is wrong with the ROCm or ONNX Runtime installation.

## Use the ROCm Execution Provider with ORT models

Using ORT models with the ROCm Execution Provider is straightforward: simply specify the `provider` argument in the `ORTModel.from_pretrained()` method. Here's an example:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...   "distilbert-base-uncased-finetuned-sst-2-english",
...   export=True,
...   provider="ROCMExecutionProvider",
... )
```

The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines).
When using a Transformers pipeline, note that the `device` argument should be set to perform pre- and post-processing on the GPU, as in the example below:

```python
>>> from optimum.onnxruntime import pipeline
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

>>> pipe = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0")
>>> result = pipe("Both the music and visual were astounding, not to mention the actors performance.")
>>> print(result)
# printing: [{'label': 'POSITIVE', 'score': 0.9997727274894714}]
```

Additionally, you can pass the session option `log_severity_level = 0` (verbose) to check whether all nodes are indeed placed on the ROCm execution provider:

```python
>>> import onnxruntime

>>> session_options = onnxruntime.SessionOptions()
>>> session_options.log_severity_level = 0

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased-finetuned-sst-2-english",
...     export=True,
...     provider="ROCMExecutionProvider",
...     session_options=session_options
... )
```

## Observed time gains

Coming soon!

### Quantization
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/quantization.md

# Quantization

🤗 Optimum provides an `optimum.onnxruntime` package that enables you to apply quantization on many models hosted on
the Hugging Face Hub using the [ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/quantization/README.md)
quantization tool.

The quantization process is abstracted via the [ORTConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.ORTConfig) and
the [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) classes. The former allows you to specify how quantization should be done,
while the latter effectively handles quantization.

<Tip>

You can read the [conceptual guide on quantization](../../concept_guides/quantization) to learn about quantization. It
explains the main concepts that you will be using when performing quantization with the
[ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer).

</Tip>

## Quantizing a model to be used with Optimum's CLI

The Optimum ONNX Runtime quantization tool can be used through the Optimum command-line interface:

```bash
optimum-cli onnxruntime quantize --help
usage: optimum-cli <command> [<args>] onnxruntime quantize [-h] --onnx_model ONNX_MODEL -o OUTPUT [--per_channel] (--arm64 | --avx2 | --avx512 | --avx512_vnni | --tensorrt | -c CONFIG)

options:
  -h, --help            show this help message and exit
  --arm64               Quantization for the ARM64 architecture.
  --avx2                Quantization with AVX-2 instructions.
  --avx512              Quantization with AVX-512 instructions.
  --avx512_vnni         Quantization with AVX-512 and VNNI instructions.
  --tensorrt            Quantization for NVIDIA TensorRT optimizer.
  -c CONFIG, --config CONFIG
                        `ORTConfig` file to use to optimize the model.

Required arguments:
  --onnx_model ONNX_MODEL
                        Path to the repository where the ONNX models to quantize are located.
  -o OUTPUT, --output OUTPUT
                        Path to the directory where to store generated ONNX model.

Optional arguments:
  --per_channel         Compute the quantization parameters on a per-channel basis.

```

Quantizing an ONNX model can be done as follows:

```bash
 optimum-cli onnxruntime quantize --onnx_model onnx_model_location/ --avx512 -o quantized_model/
```

This quantizes all the ONNX files in `onnx_model_location` using AVX-512 instructions.

## Creating an `ORTQuantizer`

The [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) class is used to quantize your ONNX model. The class can be initialized using
the `from_pretrained()` method, which supports different checkpoint formats.

1. Using an already initialized `ORTModelForXXX` class.

```python
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification

# Loading ONNX Model from the Hub
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "optimum/distilbert-base-uncased-finetuned-sst-2-english"
... )

# Create a quantizer from a ORTModelForXXX
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)
```

2. Using a local ONNX model from a directory.

```python
>>> from optimum.onnxruntime import ORTQuantizer

# This assumes a model.onnx exists in path/to/model
>>> quantizer = ORTQuantizer.from_pretrained("path/to/model")
```


## Apply Dynamic Quantization

The [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) class can be used to dynamically quantize your ONNX model. Below you will
find an easy end-to-end example of how to dynamically quantize
[distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).

```python
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Load PyTorch model and convert to ONNX
>>> onnx_model = ORTModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english", export=True)

# Create quantizer
>>> quantizer = ORTQuantizer.from_pretrained(onnx_model)

# Define the quantization strategy by creating the appropriate configuration
>>> dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Quantize the model
>>> model_quantized_path = quantizer.quantize(
...     save_dir="path/to/output/model",
...     quantization_config=dqconfig,
... )
```

## Static Quantization example

The [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) class can be used to statically quantize your ONNX model. Below you will find
an easy end-to-end example of how to statically quantize
[distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).

```python
>>> from functools import partial
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig, AutoCalibrationConfig

>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Load PyTorch model and convert to ONNX and create Quantizer and setup config
>>> onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> quantizer = ORTQuantizer.from_pretrained(onnx_model)
>>> qconfig = AutoQuantizationConfig.arm64(is_static=True, per_channel=False)

# Create the calibration dataset
>>> def preprocess_fn(ex, tokenizer):
...     return tokenizer(ex["sentence"])

>>> calibration_dataset = quantizer.get_calibration_dataset(
...     "glue",
...     dataset_config_name="sst2",
...     preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
...     num_samples=50,
...     dataset_split="train",
... )

# Create the calibration configuration containing the parameters related to calibration.
>>> calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)

# Perform the calibration step: computes the activations quantization ranges
>>> ranges = quantizer.fit(
...     dataset=calibration_dataset,
...     calibration_config=calibration_config,
...     operators_to_quantize=qconfig.operators_to_quantize,
... )

# Apply static quantization on the model
>>> model_quantized_path = quantizer.quantize(
...     save_dir="path/to/output/model",
...     calibration_tensors_range=ranges,
...     quantization_config=qconfig,
... )
```
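For intuition, the min-max calibration above records each activation tensor's observed range, from which a scale and zero-point are derived. Here is a minimal plain-Python sketch of the asymmetric uint8 case (illustrative only, not Optimum's internal API):

```python
def compute_qparams(rmin, rmax, qmin=0, qmax=255):
    # Extend the range to include 0 so that zero is exactly representable
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Map a float to its uint8 representation, saturating at the range bounds
    return max(qmin, min(qmax, round(x / scale) + zero_point))

# Suppose calibration observed activations in [-1.0, 3.0]
scale, zp = compute_qparams(-1.0, 3.0)
print(scale, zp)  # ~0.0157, 64
print(quantize(-1.0, scale, zp), quantize(0.0, scale, zp), quantize(3.0, scale, zp))  # 0 64 255
```

The ranges returned by `quantizer.fit` play the role of `rmin`/`rmax` here, one pair per quantized activation tensor.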

## Quantize Seq2Seq models

The [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) class currently doesn't support multi-file models, like
[ORTModelForSeq2SeqLM](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModelForSeq2SeqLM). If you want to quantize a Seq2Seq model, you have to quantize each of the
model's components individually.

<Tip warning={true}>

Currently, only dynamic quantization is supported for Seq2Seq models.

</Tip>

1. Load the Seq2Seq model as an `ORTModelForSeq2SeqLM`.

```python
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSeq2SeqLM
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig

# load Seq2Seq model and set model file directory
>>> model_id = "optimum/t5-small"
>>> onnx_model = ORTModelForSeq2SeqLM.from_pretrained(model_id)
>>> model_dir = onnx_model.model_save_dir
```

2. Define a quantizer for the encoder, the decoder and the decoder with past key values

```python
# Create encoder quantizer
>>> encoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="encoder_model.onnx")

# Create decoder quantizer
>>> decoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="decoder_model.onnx")

# Create decoder with past key values quantizer
>>> decoder_wp_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="decoder_with_past_model.onnx")

# Create a list of quantizers
>>> quantizers = [encoder_quantizer, decoder_quantizer, decoder_wp_quantizer]
```

3. Quantize all models

```python
# Define the quantization strategy by creating the appropriate configuration
>>> dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Quantize the models individually
>>> for q in quantizers:
...     q.quantize(save_dir=".", quantization_config=dqconfig)  # doctest: +IGNORE_RESULT
```

### Optimum Inference with ONNX Runtime
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/models.md

# Optimum Inference with ONNX Runtime

Optimum is a utility package for building and running inference with accelerated runtimes like ONNX Runtime.
Optimum can be used to load optimized models from the [Hugging Face Hub](https://huggingface.co/models) and create pipelines
to run accelerated inference without rewriting your APIs.


## Loading

### Transformers models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing `AutoModelForXxx` with the corresponding `ORTModelForXxx` class.

```diff
  from transformers import AutoTokenizer, pipeline
- from transformers import AutoModelForCausalLM
+ from optimum.onnxruntime import ORTModelForCausalLM

- model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B") # PyTorch checkpoint
+ model = ORTModelForCausalLM.from_pretrained("onnx-community/Llama-3.2-1B", subfolder="onnx") # ONNX checkpoint
  tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

  pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
  result = pipe("He never went out without a book under his arm")
```

More information on all the supported `ORTModelForXxx` classes is available in our [documentation](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling).

### Transformers pipelines

You can also load your ONNX model using ONNX Runtime pipelines by replacing `transformers.pipeline` with `optimum.onnxruntime.pipeline`.

```diff
- from transformers import pipeline
+ from optimum.onnxruntime import pipeline

  model_id = "distilbert-base-uncased-finetuned-sst-2-english"
  nlp_pipeline = pipeline("sentiment-analysis", model=model_id)
  result = nlp_pipeline("I've been waiting for a HuggingFace course my whole life.")
```

More information on all the supported `ORTXxxPipeline` classes is available in our [documentation](https://huggingface.co/docs/optimum/onnxruntime/package_reference/pipelines).

### Diffusers models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing `DiffusionPipeline` with the corresponding `ORTDiffusionPipeline` class.


```diff
- from diffusers import DiffusionPipeline
+ from optimum.onnxruntime import ORTDiffusionPipeline

  model_id = "runwayml/stable-diffusion-v1-5"
- pipeline = DiffusionPipeline.from_pretrained(model_id)
+ pipeline = ORTDiffusionPipeline.from_pretrained(model_id, export=True)
  prompt = "sailing ship in storm by Leonardo da Vinci"
  image = pipeline(prompt).images[0]
```

More information on all the supported `ORTXxxPipeline` classes is available in our [documentation](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_diffusion).


### Sentence Transformers models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing `AutoModel` with the corresponding `ORTModelForFeatureExtraction` class.

```diff
  from transformers import AutoTokenizer
- from transformers import AutoModel
+ from optimum.onnxruntime import ORTModelForFeatureExtraction

  tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
- model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
+ model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")
  inputs = tokenizer("This is an example sentence", return_tensors="pt")
  outputs = model(**inputs)
```

You can also load your ONNX model directly using the [`sentence_transformers.SentenceTransformer`](https://sbert.net/docs/sentence_transformer/usage/efficiency.html#onnx) class; just make sure you have `sentence-transformers>=3.2` installed. If the model wasn't already converted to ONNX, it will be converted automatically on-the-fly.

```diff
  from sentence_transformers import SentenceTransformer

  model_id = "sentence-transformers/all-MiniLM-L6-v2"
- model = SentenceTransformer(model_id)
+ model = SentenceTransformer(model_id, backend="onnx")

  sentences = ["This is an example sentence", "Each sentence is converted"]
  embeddings = model.encode(sentences)
```


### Timm models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing `create_model` with the corresponding `ORTModelForImageClassification` class.


```diff
  import requests
  from PIL import Image
- from timm import create_model
  from timm.data import resolve_data_config, create_transform
+ from optimum.onnxruntime import ORTModelForImageClassification

- model = create_model("timm/mobilenetv3_large_100.ra_in1k", pretrained=True)
+ model = ORTModelForImageClassification.from_pretrained("optimum/mobilenetv3_large_100.ra_in1k")
  transform = create_transform(**resolve_data_config(model.config.pretrained_cfg, model=model))
  url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
  image = Image.open(requests.get(url, stream=True).raw)
  inputs = transform(image).unsqueeze(0)
  outputs = model(inputs)
```



## Converting your model to ONNX on-the-fly

In case your model wasn't already [converted to ONNX](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel) includes a method to convert your model to ONNX on-the-fly.
Simply pass `export=True` to the [from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method, and your model will be loaded and converted to ONNX on-the-fly:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> # Load the model from the hub and export it to the ONNX format
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
```


## Pushing your model to the Hub

You can also call `push_to_hub` directly on your model to upload it to the [Hub](https://hf.co/models).

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> # Load the model from the hub and export it to the ONNX format
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

>>> # Save the converted model locally
>>> output_dir = "a_local_path_for_convert_onnx_model"
>>> model.save_pretrained(output_dir)

# Push the onnx model to HF Hub
>>> model.push_to_hub(output_dir, repository_id="my-onnx-repo")
```

### Inference pipelines with the ONNX Runtime accelerator
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/pipelines.md

# Inference pipelines with the ONNX Runtime accelerator

The [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function makes it simple to use models from the [Model Hub](https://huggingface.co/models)
for accelerated inference on a variety of tasks such as text classification, question answering and image classification.

ONNX Runtime pipelines are a drop-in replacement for [Transformers pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines) that automatically use ONNX/ONNX Runtime as the backend for model inference. 
This means you get:

- **Faster inference**: ONNX Runtime's optimized execution engine provides significant speedups
- **Cross-platform support**: Works across different hardware accelerators (CPU, GPU, etc.)
- **Same API**: Identical interface to transformers pipelines - no code changes needed
- **Automatic model loading**: Seamlessly loads or exports ONNX models

<Tip>

You can also use the
[pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines) function from
Transformers and provide your Optimum model and tokenizer/feature-extractor to it.

```python
>>> from transformers import pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer
>>> model = ORTModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> onnx_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)
>>> onnx_pipeline("I love you")
[{'label': 'POSITIVE', 'score': 0.999870002746582}]
```

</Tip>

Currently the supported tasks are:

* `audio-classification`: Classify audio inputs into predefined categories.
* `automatic-speech-recognition`: Convert spoken language into text.
* `feature-extraction`: Extract features from text or images using pre-trained models.
* `fill-mask`: Predict missing words in a sentence.
* `image-classification`: Classify images into predefined categories.
* `image-segmentation`: Segment images into different regions based on their content.
* `image-to-image`: Transform images from one domain to another (e.g., style transfer).
* `image-to-text`: Generate textual descriptions for images.
* `question-answering`: Answer questions based on a given context.
* `summarization`: Generate concise summaries of longer text documents.
* `text2text-generation`: Generate text based on a given input text.
* `text-classification`: Classify text into predefined categories (e.g., sentiment analysis).
* `text-generation`: Generate text based on a given prompt.
* `token-classification`: Classify individual tokens in a text (e.g., named entity recognition).
* `translation`: Translate text from one language to another.
* `zero-shot-classification`: Classify text into categories without prior training on those categories.

## ONNX Runtime pipeline usage

While each task has an associated pipeline class, it is simpler to use the general [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function which wraps all the task-specific pipelines in one object.
The [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function automatically loads a default model and tokenizer/feature-extractor capable of performing inference for your task.

1. Start by creating a pipeline by specifying an inference task:

```python
>>> from optimum.onnxruntime import pipeline

>>> classifier = pipeline(task="text-classification")
```

2. Pass your input text/image to the [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function:

```python
>>> classifier("I like you. I love you.")
[{'label': 'POSITIVE', 'score': 0.9998838901519775}]
```

_Note: The default models used in the [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function are not optimized for inference or quantized, so there won't be a performance improvement compared to their PyTorch counterparts._

### Using vanilla Transformers model and converting to ONNX

The [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function accepts any supported model from the [Hugging Face Hub](https://huggingface.co/models).
There are tags on the Model Hub that allow you to filter for a model you'd like to use for your task.

<Tip>

To be able to load the model with the ONNX Runtime backend, the export to ONNX needs to be supported for the considered architecture.

You can check the list of supported architectures [here](https://huggingface.co/docs/optimum/exporters/onnx/overview#overview).

</Tip>

Once you have picked an appropriate model, you can create the [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) by specifying the model repo:

```python
>>> from optimum.onnxruntime import pipeline

# The model will be loaded to an ORTModelForQuestionAnswering.
>>> onnx_qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = onnx_qa(question=question, context=context)
```

It is also possible to load it with the `from_pretrained(model_name_or_path, export=True)`
method associated with the `ORTModelForXXX` class.

For example, here is how you can load the [ORTModelForQuestionAnswering](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModelForQuestionAnswering) class for question answering:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> from optimum.onnxruntime import pipeline

>>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

>>> # Loading the PyTorch checkpoint and converting to the ONNX format by providing
>>> # export=True
>>> model = ORTModelForQuestionAnswering.from_pretrained(
...     "deepset/roberta-base-squad2",
...     export=True
... )

>>> onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = onnx_qa(question=question, context=context)
```

### Using Optimum models

The [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function is tightly integrated with the [Hugging Face Hub](https://huggingface.co/models) and can load ONNX models directly.

```python
>>> from optimum.onnxruntime import pipeline

>>> onnx_qa = pipeline("question-answering", model="optimum/roberta-base-squad2")
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = onnx_qa(question=question, context=context)
```

It is also possible to load it with the `from_pretrained(model_name_or_path)`
method associated with the `ORTModelForXXX` class.

For example, here is how you can load the [ORTModelForQuestionAnswering](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModelForQuestionAnswering) class for question answering:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> from optimum.onnxruntime import pipeline

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")

>>> # Loading directly an ONNX model from a model repo.
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")

>>> onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."

>>> pred = onnx_qa(question=question, context=context)
```


## Optimizing and quantizing in pipelines

The [pipeline()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/pipelines#optimum.onnxruntime.pipeline) function can not only run inference on vanilla ONNX Runtime checkpoints: you can also use
checkpoints optimized with the [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) and the [ORTOptimizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer).

Below you can find two examples of how you could use the [ORTOptimizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer) and the
[ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) to optimize/quantize your model and use it for inference afterwards.

### Quantizing with the `ORTQuantizer`

```python
>>> from optimum.onnxruntime import (
...     AutoQuantizationConfig,
...     ORTModelForSequenceClassification,
...     ORTQuantizer
... )
>>> from optimum.onnxruntime import pipeline

>>> # Export the model to the ONNX format
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_dir = "distilbert_quantized"

>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

>>> # Load the quantization configuration detailing the quantization we wish to apply
>>> qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=True)
>>> quantizer = ORTQuantizer.from_pretrained(model)

>>> # Apply dynamic quantization and save the resulting model
>>> quantizer.quantize(save_dir=save_dir, quantization_config=qconfig)
>>> # Load the quantized model from a local repository
>>> model = ORTModelForSequenceClassification.from_pretrained(save_dir)

>>> # Create the transformers pipeline
>>> onnx_clx = pipeline("text-classification", model=model)
>>> text = "I like the new ORT pipeline"
>>> pred = onnx_clx(text)
>>> print(pred)
>>> # [{'label': 'POSITIVE', 'score': 0.9974810481071472}]

>>> # Save and push the model to the hub (in practice save_dir could be used here instead)
>>> model.save_pretrained("new_path_for_directory")
>>> model.push_to_hub("new_path_for_directory", repository_id="my-onnx-repo", use_auth_token=True)
```

### Optimizing with `ORTOptimizer`

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import (
...     AutoOptimizationConfig,
...     ORTModelForSequenceClassification,
...     ORTOptimizer
... )
>>> from optimum.onnxruntime import pipeline

>>> # Load the tokenizer and export the model to the ONNX format
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_dir = "distilbert_optimized"

>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

>>> # Load the optimization configuration detailing the optimization we wish to apply
>>> optimization_config = AutoOptimizationConfig.O3()
>>> optimizer = ORTOptimizer.from_pretrained(model)

>>> optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
# Load the optimized model from a local repository
>>> model = ORTModelForSequenceClassification.from_pretrained(save_dir)

# Create the transformers pipeline
>>> onnx_clx = pipeline("text-classification", model=model)
>>> text = "I like the new ORT pipeline"
>>> pred = onnx_clx(text)
>>> print(pred)
>>> # [{'label': 'POSITIVE', 'score': 0.9973127245903015}]

# Save and push the model to the hub
>>> tokenizer.save_pretrained("new_path_for_directory")
>>> model.save_pretrained("new_path_for_directory")
>>> model.push_to_hub("new_path_for_directory", repository_id="my-onnx-repo", use_auth_token=True)
```

### Accelerated inference on NVIDIA GPUs
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/gpu.md

# Accelerated inference on NVIDIA GPUs

By default, ONNX Runtime runs inference on CPU devices. However, it is possible to place supported operations on an NVIDIA GPU, while leaving any unsupported ones on CPU. In most cases, this allows costly operations to be placed on GPU and significantly accelerate inference.

This guide will show you how to run inference on two execution providers that ONNX Runtime supports for NVIDIA GPUs:

* `CUDAExecutionProvider`: Generic acceleration on NVIDIA CUDA-enabled GPUs.
* `TensorrtExecutionProvider`: Uses NVIDIA’s [TensorRT](https://developer.nvidia.com/tensorrt) inference engine and generally provides the best runtime performance.

<Tip warning={true}>

Due to a limitation of ONNX Runtime, it is not possible to run quantized models on `CUDAExecutionProvider` and only models with static quantization can be run on `TensorrtExecutionProvider`.

</Tip>

## CUDAExecutionProvider

### CUDA installation

Provided the CUDA and cuDNN [requirements](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) are satisfied, install the additional dependencies by running

```bash
pip install optimum[onnxruntime-gpu]
```

To avoid conflicts between `onnxruntime` and `onnxruntime-gpu`, make sure the package `onnxruntime` is not installed by running `pip uninstall onnxruntime` prior to installing Optimum.

### Checking the CUDA installation is successful

Before going further, run the following sample code to check whether the install was successful:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...   "philschmid/tiny-bert-sst2-distilled",
...   export=True,
...   provider="CUDAExecutionProvider",
... )

>>> tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
>>> inputs = tokenizer("expectations were low, actual enjoyment was high", return_tensors="pt", padding=True)

>>> outputs = ort_model(**inputs)
>>> assert ort_model.providers == ["CUDAExecutionProvider", "CPUExecutionProvider"]
```

If this code runs without error, congratulations, the installation is successful! If you encounter the following error or similar,

```
ValueError: Asked to use CUDAExecutionProvider as an ONNX Runtime execution provider, but the available execution providers are ['CPUExecutionProvider'].
```

then something is wrong with the CUDA or ONNX Runtime installation.

### Use CUDA execution provider with floating-point models

For non-quantized models, the use is straightforward. Simply specify the `provider` argument in the `ORTModel.from_pretrained()` method. Here's an example:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...   "distilbert-base-uncased-finetuned-sst-2-english",
...   export=True,
...   provider="CUDAExecutionProvider",
... )
```

The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines).
When using a Transformers pipeline, note that the `device` argument should be set so that pre- and post-processing run on the GPU, as in the example below:

```python
>>> from optimum.onnxruntime import pipeline
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

>>> pipe = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0")
>>> result = pipe("Both the music and visual were astounding, not to mention the actors performance.")
>>> print(result)
# printing: [{'label': 'POSITIVE', 'score': 0.9997727274894714}]
```

Additionally, you can pass the session option `log_severity_level = 0` (verbose) to check whether all nodes are indeed placed on the CUDA execution provider:

```python
>>> import onnxruntime

>>> session_options = onnxruntime.SessionOptions()
>>> session_options.log_severity_level = 0

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased-finetuned-sst-2-english",
...     export=True,
...     provider="CUDAExecutionProvider",
...     session_options=session_options
... )
```

You should see the following logs:

```
2022-10-18 14:59:13.728886041 [V:onnxruntime:, session_state.cc:1193 VerifyEachN
odeIsAssignedToAnEp]  Provider: [CPUExecutionProvider]: [Gather (Gather_76), Uns
queeze (Unsqueeze_78), Gather (Gather_97), Gather (Gather_100), Concat (Concat_1
10), Unsqueeze (Unsqueeze_125), ...]
2022-10-18 14:59:13.728906431 [V:onnxruntime:, session_state.cc:1193 VerifyEachN
odeIsAssignedToAnEp]  Provider: [CUDAExecutionProvider]: [Shape (Shape_74), Slic
e (Slice_80), Gather (Gather_81), Gather (Gather_82), Add (Add_83), Shape (Shape
_95), MatMul (MatMul_101), ...]
```

In this example, we can see that all the costly MatMul operations are placed on the CUDA execution provider.

### Use CUDA execution provider with quantized models

Due to current limitations in ONNX Runtime, it is not possible to use quantized models with `CUDAExecutionProvider`. The reasons are as follows:

* When using [🤗 Optimum dynamic quantization](quantization#dynamic-quantization-example), nodes such as [`MatMulInteger`](https://github.com/onnx/onnx/blob/v1.12.0/docs/Operators.md#MatMulInteger) and [`DynamicQuantizeLinear`](https://github.com/onnx/onnx/blob/v1.12.0/docs/Operators.md#DynamicQuantizeLinear) may be inserted in the ONNX graph, which cannot be consumed by the CUDA execution provider.

* When using [static quantization](quantization#static-quantization-example), the ONNX computation graph will contain matrix multiplications and convolutions in floating-point arithmetic, along with Quantize + Dequantize operations to simulate quantization. In this case, although the costly matrix multiplications and convolutions will be run on the GPU, they will use floating-point arithmetic, as the `CUDAExecutionProvider` cannot consume the Quantize + Dequantize nodes to replace them with integer-arithmetic operations.
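To see this concretely, a Quantize + Dequantize pair maps a float to a nearby float, so downstream MatMul nodes still receive floating-point inputs. A small plain-Python sketch (illustrative only, with an assumed calibration range of [-1.0, 3.0]):

```python
SCALE = 4 / 255   # derived from an assumed calibration range of [-1.0, 3.0]
ZERO_POINT = 64

def quantize_dequantize(x):
    # QuantizeLinear followed by DequantizeLinear: the result is a float again
    q = max(0, min(255, round(x / SCALE) + ZERO_POINT))
    return (q - ZERO_POINT) * SCALE

x = 0.7
x_qdq = quantize_dequantize(x)
print(type(x_qdq).__name__)     # float: downstream MatMuls still run in floating point
print(abs(x_qdq - x) <= SCALE)  # True: only a small rounding error is introduced
```

An execution provider that understands Quantize + Dequantize patterns (like TensorRT with static quantization) can fuse them away and run the MatMul in integer arithmetic instead; `CUDAExecutionProvider` cannot.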

### Reduce memory footprint with IOBinding

[IOBinding](https://onnxruntime.ai/docs/api/python/api_summary.html#iobinding) is an efficient way to avoid expensive data copying when using GPUs. By default, ONNX Runtime will copy inputs from the CPU (even if the tensors have already been copied to the target device), and assume that outputs also need to be copied back to the CPU from the GPU after the run. These data copying overheads between the host and devices are expensive, and __can lead to worse inference latency than vanilla PyTorch__, especially for the decoding process.

To avoid this slowdown, 🤗 Optimum uses IOBinding to copy inputs onto the GPU and pre-allocate memory for outputs prior to inference. When instantiating an `ORTModel`, set the `use_io_binding` argument to choose whether IOBinding is used during inference. `use_io_binding` defaults to `True` when CUDA is chosen as the execution provider.

If you want to turn off IOBinding:
```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Load the model from the hub and export it to the ONNX format
>>> model = ORTModelForSeq2SeqLM.from_pretrained("t5-small", export=True, use_io_binding=False)
>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")

# Create a pipeline
>>> onnx_translation = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer, device="cuda:0")
```

For the time being, IOBinding is supported for task-defined ORT models. If you want us to add support for custom models, file an issue on the Optimum repository.

### Observed time gains

We tested three common models with a decoding process, `GPT2` / `T5-small` / `M2M100-418M`, and the benchmark was run on a Tesla T4 GPU (more environment details at the end of this section).

Here are some performance results running with `CUDAExecutionProvider` with IOBinding turned on. We tested input sequence lengths from 8 to 512, and generated outputs with both greedy search and beam search (`num_beams=5`):

<table><tr>
<td>
  <p align="center">
    <img alt="GPT2" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/t4_res_ort_gpt2.png" width="450">
    <br>
    <em style="color: grey">GPT2</em>
  </p>
</td>
<td>
  <p align="center">
    <img alt="T5-small" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/t4_res_ort_t5_s.png" width="450">
    <br>
    <em style="color: grey">T5-small</em>
  </p>
</td></tr>
<tr><td>
  <p align="center">
    <img alt="M2M100-418M" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/t4_res_ort_m2m100_418m.png" width="450">
    <br>
    <em style="color: grey">M2M100-418M</em>
  </p>
</td>
</tr></table>

And here is a summary of the time savings for different sequence lengths (32 / 128) and generation modes (greedy search / beam search) using ONNX Runtime compared with PyTorch:

<table><tr>
<td>
  <p align="center">
    <img alt="seq32" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/inference_models_32.png" width="800">
    <br>
    <em style="color: grey">sequence length: 32</em>
  </p>
</td></tr>
<tr><td>
  <p align="center">
    <img alt="seq128" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/inference_models_128.png" width="800">
    <br>
    <em style="color: grey">sequence length: 128</em>
  </p>
</td>
</tr></table>


Environment:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   28C    P8     8W /  70W |      0MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

- Platform: Linux-5.4.0-1089-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- `transformers` version: 4.24.0
- `optimum` version: 1.5.0
- PyTorch version: 1.12.0+cu113
```

Note that the previous experiments were run with __vanilla ONNX__ models exported directly from the exporter. If you are interested in __further acceleration__, you can use `ORTOptimizer` to optimize the graph and convert your model to FP16 if you have a GPU with mixed-precision capabilities.

## TensorrtExecutionProvider

TensorRT uses its own set of optimizations, and **generally does not support the optimizations from [ORTOptimizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer)**. We therefore recommend using the original ONNX models when using `TensorrtExecutionProvider` ([reference](https://github.com/microsoft/onnxruntime/issues/10905#issuecomment-1072649358)).

### TensorRT installation

The easiest way to use TensorRT as the execution provider for models optimized through 🤗 Optimum is with the available ONNX Runtime `TensorrtExecutionProvider`.

In order to use 🤗 Optimum with TensorRT in a local environment, we recommend following the NVIDIA installation guides:
* CUDA toolkit: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
* cuDNN: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
* TensorRT: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html

For TensorRT, we recommend the Tar File Installation method. Alternatively, TensorRT may be installable with `pip` by following [these instructions](https://github.com/microsoft/onnxruntime/issues/9986).

Once the required packages are installed, the following environment variables need to be set with the appropriate paths for ONNX Runtime to detect the TensorRT installation:

```bash
export CUDA_PATH=/usr/local/cuda
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-x.x/lib64:/path/to/TensorRT-8.x.x/lib
```

### Checking the TensorRT installation is successful

Before going further, run the following sample code to check whether the install was successful:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "philschmid/tiny-bert-sst2-distilled",
...     export=True,
...     provider="TensorrtExecutionProvider",
... )

>>> tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
>>> inp = tokenizer("expectations were low, actual enjoyment was high", return_tensors="pt", padding=True)

>>> result = ort_model(**inp)
>>> assert ort_model.providers == ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
```

If this code runs without errors, congratulations, the installation was successful!

If the above `assert` fails, or you encounter the following warning,

```
Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
```

something is wrong with the TensorRT or ONNX Runtime installation.
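Besides the `assert` above, a quick way to narrow down which side is broken is to list the execution providers the installed ONNX Runtime build exposes: `TensorrtExecutionProvider` must appear in that list before `ORTModel` can use it. A minimal sketch (the `available_providers` helper is hypothetical; `onnxruntime.get_available_providers()` is the actual ONNX Runtime API):

```python
def available_providers():
    """Return the execution providers exposed by the installed ONNX Runtime
    build, or an empty list if onnxruntime is not installed at all."""
    try:
        import onnxruntime
        return list(onnxruntime.get_available_providers())
    except ImportError:
        return []

providers = available_providers()
print("TensorRT available:", "TensorrtExecutionProvider" in providers)
```

If TensorRT is missing from the list while CUDA is present, the problem is typically the TensorRT libraries not being found on `LD_LIBRARY_PATH` rather than ONNX Runtime itself.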

### TensorRT engine build and warmup

TensorRT requires building its [inference engine](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#build-phase) ahead of inference, which takes some time due to model optimization and node fusion. To avoid rebuilding the engine every time the model is loaded, ONNX Runtime provides a pair of options to cache the engine: `trt_engine_cache_enable` and `trt_engine_cache_path`.

We recommend setting these two provider options when using the TensorRT execution provider. The usage is as follows, where [`optimum/gpt2`](https://huggingface.co/optimum/gpt2) is an ONNX model converted from PyTorch using the [Optimum ONNX exporter](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model):

```python
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> provider_options = {
...     "trt_engine_cache_enable": True,
...     "trt_engine_cache_path": "tmp/trt_cache_gpt2_example"
... }

# The TensorRT engine is not built here; it will be built at first inference
>>> ort_model = ORTModelForCausalLM.from_pretrained(
...     "optimum/gpt2",
...     use_cache=False,
...     provider="TensorrtExecutionProvider",
...     provider_options=provider_options
... )
```

TensorRT builds its engine for the specified input shapes. One big issue is that building the engine can be time-consuming, especially for large models. As a workaround, one recommendation is to build the TensorRT engine with dynamic shapes. This avoids rebuilding the engine whenever new, smaller or larger shapes are encountered, which is unwanted once the model is deployed for inference.

To do so, we use the provider options `trt_profile_min_shapes`, `trt_profile_max_shapes` and `trt_profile_opt_shapes` to specify the minimum, maximum and optimal shapes for the engine. For example, for GPT2, we can use the following shapes:

```python
provider_options = {
    "trt_profile_min_shapes": "input_ids:1x1,attention_mask:1x1,position_ids:1x1",
    "trt_profile_opt_shapes": "input_ids:1x1,attention_mask:1x1,position_ids:1x1",
    "trt_profile_max_shapes": "input_ids:1x64,attention_mask:1x64,position_ids:1x64",
}
```
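Writing these `name:AxB` strings by hand is error-prone for models with several inputs. A small hypothetical helper (assuming the option syntax shown above) can build them from a dict of shapes:

```python
def trt_shape_string(shapes):
    """Format {input_name: (dim0, dim1, ...)} into ONNX Runtime's TensorRT
    profile syntax, e.g. {"input_ids": (1, 64)} -> "input_ids:1x64"."""
    return ",".join(
        f"{name}:{'x'.join(str(d) for d in dims)}" for name, dims in shapes.items()
    )

max_shapes = {"input_ids": (1, 64), "attention_mask": (1, 64), "position_ids": (1, 64)}
min_shapes = {name: (1, 1) for name in max_shapes}

provider_options = {
    "trt_profile_min_shapes": trt_shape_string(min_shapes),
    "trt_profile_opt_shapes": trt_shape_string(min_shapes),
    "trt_profile_max_shapes": trt_shape_string(max_shapes),
}
print(provider_options["trt_profile_max_shapes"])
# input_ids:1x64,attention_mask:1x64,position_ids:1x64
```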

By passing the engine cache path in the provider options, the engine can be built once and reused for all subsequent inference.

For example, for text generation, the engine can be built with:

```python
>>> import os
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> os.makedirs("tmp/trt_cache_gpt2_example", exist_ok=True)
>>> provider_options = {
...     "trt_engine_cache_enable": True,
...     "trt_engine_cache_path": "tmp/trt_cache_gpt2_example",
...     "trt_profile_min_shapes": "input_ids:1x1,attention_mask:1x1,position_ids:1x1",
...     "trt_profile_opt_shapes": "input_ids:1x1,attention_mask:1x1,position_ids:1x1",
...     "trt_profile_max_shapes": "input_ids:1x64,attention_mask:1x64,position_ids:1x64",
... }

>>> ort_model = ORTModelForCausalLM.from_pretrained(
...     "optimum/gpt2",
...     use_cache=False,
...     provider="TensorrtExecutionProvider",
...     provider_options=provider_options,
... )
```

The engine is stored as:

![TensorRT engine cache folder](https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/tensorrt_cache.png)

Once the engine is built, the cache can be reloaded and generation does not need to rebuild the engine:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> provider_options = {
...     "trt_engine_cache_enable": True,
...     "trt_engine_cache_path": "tmp/trt_cache_gpt2_example"
... }

>>> ort_model = ORTModelForCausalLM.from_pretrained(
...     "optimum/gpt2",
...     use_cache=False,
...     provider="TensorrtExecutionProvider",
...     provider_options=provider_options,
... )
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2")

>>> text = ["Replace me by any text you'd like."]
>>> encoded_input = tokenizer(text, return_tensors="pt").to("cuda")

>>> for i in range(3):
...     output = ort_model.generate(**encoded_input)
...     print(tokenizer.decode(output[0]))  # doctest: +IGNORE_RESULT
```

#### Warmup

Once the engine is built, it is recommended to perform **one or a few warmup steps** before inference, as the first inference runs have [some overhead](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec-flags).
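The warmup can be folded into any benchmarking helper. A model-agnostic sketch, where `run_inference` stands in for a call such as `ort_model.generate(**encoded_input)`:

```python
import time

def mean_latency(run_inference, n_warmup=3, n_measure=10):
    """Discard the first few runs (engine deserialization, allocator growth),
    then return the average steady-state latency in seconds."""
    for _ in range(n_warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(n_measure):
        run_inference()
    return (time.perf_counter() - start) / n_measure
```

Measuring without the warmup runs would fold the one-time engine overhead into the reported latency.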

### Use TensorRT execution provider with floating-point models

For non-quantized models, usage is straightforward: simply pass the `provider` argument to `ORTModel.from_pretrained()`. For example:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased-finetuned-sst-2-english",
...     export=True,
...     provider="TensorrtExecutionProvider",
... )
```

[As previously for `CUDAExecutionProvider`](#use-cuda-execution-provider-with-floatingpoint-models), by passing the session option `log_severity_level = 0` (verbose), we can check in the logs whether all nodes are indeed placed on the TensorRT execution provider:

```
2022-09-22 14:12:48.371513741 [V:onnxruntime:, session_state.cc:1188 VerifyEachNodeIsAssignedToAnEp] All nodes have been placed on [TensorrtExecutionProvider]
```

The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines).

### Use TensorRT execution provider with quantized models

When it comes to quantized models, TensorRT only supports models that use [**static** quantization](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#enable_int8_c) with [**symmetric quantization** for weights and activations](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#intro-quantization).


🤗 Optimum provides a quantization config ready to be used with [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer) with the constraints of TensorRT quantization:

```python
>>> from optimum.onnxruntime import AutoQuantizationConfig

>>> qconfig = AutoQuantizationConfig.tensorrt(per_channel=False)
```

Using this `qconfig`, static quantization can be performed as explained in the [static quantization guide](quantization#static-quantization-example).

In the code sample below, after performing static quantization, the resulting model is loaded into the [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel) class using TensorRT as the execution provider. ONNX Runtime graph optimization needs to be disabled for the model to be consumed and optimized by TensorRT, and TensorRT needs to be told that INT8 operations are used.

```python
>>> import onnxruntime
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> session_options = onnxruntime.SessionOptions()
>>> session_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL

>>> tokenizer = AutoTokenizer.from_pretrained("fxmarty/distilbert-base-uncased-sst2-onnx-int8-for-tensorrt")
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "fxmarty/distilbert-base-uncased-sst2-onnx-int8-for-tensorrt",
...     provider="TensorrtExecutionProvider",
...     session_options=session_options,
...     provider_options={"trt_int8_enable": True},
... )

>>> inp = tokenizer("TensorRT is a bit painful to use, but at the end of day it runs smoothly and blazingly fast!", return_tensors="np")

>>> res = ort_model(**inp)

>>> print(res)
>>> print(ort_model.config.id2label[res.logits[0].argmax()])
>>> # SequenceClassifierOutput(loss=None, logits=array([[-0.545066 ,  0.5609764]], dtype=float32), hidden_states=None, attentions=None)
>>> # POSITIVE
```

The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines).

### TensorRT limitations for quantized models

As highlighted in the previous section, TensorRT supports only a limited range of quantized models:
* Static quantization only
* Weights and activations quantization ranges are symmetric
* Weights need to be stored in float32 in the ONNX model, so quantization brings no storage savings. TensorRT indeed requires inserting full Quantize + Dequantize pairs. Normally, weights would be stored in 8-bit fixed-point format and only a `DequantizeLinear` node would be applied to the weights.
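To make the symmetric constraint concrete, here is a pure-Python sketch of the Quantize + Dequantize round trip TensorRT expects: the zero-point is fixed at 0 and the scale is derived from the maximum absolute value. This is illustrative only, not Optimum or TensorRT code:

```python
def symmetric_int8_qdq(values):
    """QuantizeLinear + DequantizeLinear with a symmetric int8 scheme:
    zero_point = 0, scale = max(|x|) / 127. Assumes at least one non-zero value."""
    scale = max(abs(v) for v in values) / 127.0
    # Clamp to the symmetric int8 range [-127, 127]
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    # Dequantize back to float, as TensorRT consumes it
    return [q * scale for q in quantized]

print(symmetric_int8_qdq([-1.0, -0.5, 0.0, 0.5, 1.0]))
```

An asymmetric scheme (non-zero zero-point) or per-tensor dynamic ranges computed at runtime would violate the constraints above and fail at engine build time.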

If `provider="TensorrtExecutionProvider"` is passed and the model has not been quantized strictly following these constraints, various errors may be raised, often with unclear error messages.

### Observed time gains

The NVIDIA Nsight Systems tool can be used to profile execution time on the GPU. Before profiling or measuring latency/throughput, it is good practice to run a few **warmup steps**.

Coming soon!

### Optimization
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/usage_guides/optimization.md

# Optimization

🤗 Optimum provides an `optimum.onnxruntime` package that enables you to apply graph optimizations to many models hosted on the 🤗 hub using the [ONNX Runtime](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers) model optimization tool.

## Optimizing a model during the ONNX export

The ONNX model can be optimized directly during the ONNX export using the Optimum CLI, by passing the argument `--optimize {O1,O2,O3,O4}`, for example:

```
optimum-cli export onnx --model gpt2 --optimize O3 gpt2_onnx/
```

The optimization levels are:
- O1: basic general optimizations.
- O2: basic and extended general optimizations, transformers-specific fusions.
- O3: same as O2 with GELU approximation.
- O4: same as O3 with mixed precision (fp16, GPU-only, requires `--device cuda`).

## Optimizing a model programmatically with `ORTOptimizer`

ONNX models can be optimized with the [ORTOptimizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer). The class can be initialized using the [from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer.from_pretrained) method, which supports different checkpoint formats.

1. Using an already initialized [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel) class.

```python
>>> from optimum.onnxruntime import ORTOptimizer, ORTModelForSequenceClassification

# Loading ONNX Model from the Hub
>>> model = ORTModelForSequenceClassification.from_pretrained(
...     "optimum/distilbert-base-uncased-finetuned-sst-2-english"
... )

# Create an optimizer from an ORTModelForXXX
>>> optimizer = ORTOptimizer.from_pretrained(model)
```

2. Using a local ONNX model from a directory.

```python
>>> from optimum.onnxruntime import ORTOptimizer

# This assumes a model.onnx exists in path/to/model
>>> optimizer = ORTOptimizer.from_pretrained("path/to/model")
```


### Optimization Configuration

The [OptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.OptimizationConfig) class allows you to specify how the optimization should be performed by the [ORTOptimizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer).

In the optimization configuration, there are 4 possible optimization levels:
- `optimization_level=0`: to disable all optimizations
- `optimization_level=1`: to enable basic optimizations such as constant folding or redundant node eliminations
- `optimization_level=2`: to enable extended graph optimizations such as node fusions
- `optimization_level=99`: to enable data layout optimizations

Choosing a level enables the optimizations of that level, as well as the optimizations of all preceding levels.
More information [here](https://onnxruntime.ai/docs/performance/graph-optimizations.html).
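For reference, these integer levels correspond to ONNX Runtime's `GraphOptimizationLevel` enum when configuring a raw `InferenceSession` yourself. A sketch (the `LEVEL_MAP` dict and helper are illustrative; the enum names are from the ONNX Runtime Python API):

```python
# Mapping from OptimizationConfig's optimization_level to the enum used by
# onnxruntime.SessionOptions.graph_optimization_level.
LEVEL_MAP = {
    0: "ORT_DISABLE_ALL",      # disable all optimizations
    1: "ORT_ENABLE_BASIC",     # constant folding, redundant node elimination
    2: "ORT_ENABLE_EXTENDED",  # extended graph optimizations such as node fusions
    99: "ORT_ENABLE_ALL",      # extended plus data layout optimizations
}

def graph_optimization_level(level):
    """Resolve the enum member if onnxruntime is installed, else return its name."""
    try:
        import onnxruntime
        return getattr(onnxruntime.GraphOptimizationLevel, LEVEL_MAP[level])
    except ImportError:
        return LEVEL_MAP[level]
```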

`enable_transformers_specific_optimizations=True` means that `transformers`-specific graph fusions and approximations are performed in addition to the ONNX Runtime optimizations described above.
Here is a list of the possible optimizations you can enable:
- Gelu fusion with `disable_gelu_fusion=False`,
- Layer Normalization fusion with `disable_layer_norm_fusion=False`,
- Attention fusion with `disable_attention_fusion=False`,
- SkipLayerNormalization fusion with `disable_skip_layer_norm_fusion=False`,
- Add Bias and SkipLayerNormalization fusion with `disable_bias_skip_layer_norm_fusion=False`,
- Add Bias and Gelu / FastGelu fusion with `disable_bias_gelu_fusion=False`,
- Gelu approximation with `enable_gelu_approximation=True`.

<Tip>

Attention fusion is designed for right-side padding for BERT-like architectures (e.g. BERT, RoBERTa, ViT, etc.) and for left-side padding for generative models (GPT-like). If you are not following this convention, please set `use_raw_attention_mask=True` to avoid potential accuracy issues, at the cost of performance.

</Tip>

While [OptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.OptimizationConfig) gives you full control over how optimization is performed, it can be hard to know which optimizations to enable or disable. Instead, you can use [AutoOptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.AutoOptimizationConfig), which provides four common optimization levels:
- O1: basic general optimizations.
- O2: basic and extended general optimizations, transformers-specific fusions.
- O3: same as O2 with GELU approximation.
- O4: same as O3 with mixed precision (fp16, GPU-only).

Example: Loading a O2 [OptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.OptimizationConfig)

```python
>>> from optimum.onnxruntime import AutoOptimizationConfig
>>> optimization_config = AutoOptimizationConfig.O2()
```

You can also specify custom arguments that are not defined in the O2 configuration, for instance:

```python
>>> from optimum.onnxruntime import AutoOptimizationConfig
>>> optimization_config = AutoOptimizationConfig.O2(disable_embed_layer_norm_fusion=False)
```


### Optimization examples

Below you will find an easy end-to-end example of how to optimize [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).

```python
>>> from optimum.onnxruntime import (
...     AutoOptimizationConfig, ORTOptimizer, ORTModelForSequenceClassification
... )

>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_dir = "distilbert_optimized"

>>> # Load a PyTorch model and export it to the ONNX format
>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

>>> # Create the optimizer
>>> optimizer = ORTOptimizer.from_pretrained(model)

>>> # Define the optimization strategy by creating the appropriate configuration
>>> optimization_config = AutoOptimizationConfig.O2()

>>> # Optimize the model
>>> optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
```


Below you will find an easy end-to-end example of how to optimize a Seq2Seq model, [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6).

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import OptimizationConfig, ORTOptimizer, ORTModelForSeq2SeqLM

>>> model_id = "sshleifer/distilbart-cnn-12-6"
>>> save_dir = "distilbart_optimized"

>>> # Load a PyTorch model and export it to the ONNX format
>>> model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)

>>> # Create the optimizer
>>> optimizer = ORTOptimizer.from_pretrained(model)

>>> # Define the optimization strategy by creating the appropriate configuration
>>> optimization_config = OptimizationConfig(
...     optimization_level=2,
...     enable_transformers_specific_optimizations=True,
...     optimize_for_gpu=False,
... )

>>> # Optimize the model
>>> optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> optimized_model = ORTModelForSeq2SeqLM.from_pretrained(save_dir)
>>> tokens = tokenizer("This is a sample input", return_tensors="pt")
>>> outputs = optimized_model.generate(**tokens)
```

## Optimizing a model with Optimum CLI

The Optimum ONNX Runtime optimization tools can be used directly through the Optimum command-line interface:

```bash
optimum-cli onnxruntime optimize --help
usage: optimum-cli <command> [<args>] onnxruntime optimize [-h] --onnx_model ONNX_MODEL -o OUTPUT (-O1 | -O2 | -O3 | -O4 | -c CONFIG)

options:
  -h, --help            show this help message and exit
  -O1                   Basic general optimizations (see: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization for more details).
  -O2                   Basic and extended general optimizations, transformers-specific fusions (see: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization for more
                        details).
  -O3                   Same as O2 with Gelu approximation (see: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization for more details).
  -O4                   Same as O3 with mixed precision (see: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization for more details).
  -c CONFIG, --config CONFIG
                        `ORTConfig` file to use to optimize the model.

Required arguments:
  --onnx_model ONNX_MODEL
                        Path to the repository where the ONNX models to optimize are located.
  -o OUTPUT, --output OUTPUT
                        Path to the directory where to store generated ONNX model.
```

Optimizing an ONNX model can be done as follows:

```bash
 optimum-cli onnxruntime optimize --onnx_model onnx_model_location/ -O1 -o optimized_model/
```

This optimizes all the ONNX files in `onnx_model_location` with the basic general optimizations.

### ONNX Runtime Diffusion Pipelines
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/modeling_diffusion.md

# ONNX Runtime Diffusion Pipelines

## Generic ORT Diffusion Pipeline classes

The following classes are available for instantiating a diffusion pipeline class without needing to specify the task or architecture.

### ORTDiffusionPipeline[[optimum.onnxruntime.ORTDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTDiffusionPipeline</name><anchor>optimum.onnxruntime.ORTDiffusionPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L87</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Base class for all ONNX Runtime Pipelines.

`ORTDiffusionPipeline` stores all components (models, schedulers, and processors) for diffusion pipelines and
provides methods for exporting, loading, downloading and saving models. It also includes methods to:

- move all ONNX Runtime sessions to the device of your choice
- enable/disable the progress bar for the denoising iteration
- handle ONNX Runtime io binding if used

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.
- **task** (`str`) -- A string that identifies the pipeline's task.
- **library** (`str`) -- The library the pipeline is compatible with.
- **auto_model_class** (`Type[DiffusionPipeline]`) -- The corresponding/equivalent Diffusers pipeline class.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>optimum.onnxruntime.ORTDiffusionPipeline.from_pretrained</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L261</source><parameters>[{"name": "model_name_or_path", "val": ": str | Path"}, {"name": "export", "val": ": bool | None = None"}, {"name": "provider", "val": ": str = 'CPUExecutionProvider'"}, {"name": "providers", "val": ": Sequence[str] | None = None"}, {"name": "provider_options", "val": ": Sequence[dict[str, Any]] | dict[str, Any] | None = None"}, {"name": "session_options", "val": ": SessionOptions | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_name_or_path** (`str` or `os.PathLike`) --
  Path to a folder containing the model files or a hub repository id.
- **export** (`bool`, *optional*, defaults to `None`) --
  Whether to export the model from Diffusers to ONNX. If left to `None`, the model is exported only if no
  ONNX files are found in the `model_name_or_path` folder. If set to `True`, the model is always exported. If set to
  `False`, the model is never exported.
- **provider** (`str`, *optional*, defaults to `"CPUExecutionProvider"`) --
  The execution provider for ONNX Runtime. Can be `"CUDAExecutionProvider"`, `"DmlExecutionProvider"`,
  etc.
- **providers** (`Sequence[str]`, *optional*) --
  A list of execution providers for ONNX Runtime. Overrides `provider`.
- **provider_options** (`Union[Sequence[Dict[str, Any]], Dict[str, Any]]`, *optional*) --
  Options for each execution provider. Can be a single dictionary for the first provider or a list of
  dictionaries for each provider. The order of the dictionaries should match the order of the providers.
- **session_options** (`SessionOptions`, *optional*) --
  Options for the ONNX Runtime session. Can be used to set optimization levels, graph optimization,
  etc.
- **use_io_binding** (`bool`, *optional*) --
  Whether to use IOBinding for the ONNX Runtime session. If set to `True`, it will use IOBinding for
  input and output tensors.
- ****kwargs** --
  Can include the following:
  - Export arguments (e.g., `slim`, `dtype`, `device`, `no_dynamic_axes`, etc.).
  - Hugging Face Hub arguments (e.g., `revision`, `cache_dir`, `force_download`, etc.).
  - Preloaded models or sessions for the different components of the pipeline (e.g., `vae_encoder_session`,
  `vae_decoder_session`, `unet_session`, `transformer_session`, `image_encoder`, `safety_checker`, etc.).</paramsdesc><paramgroups>0</paramgroups><rettype>`ORTDiffusionPipeline`</rettype><retdesc>The loaded pipeline with ONNX Runtime sessions.</retdesc></docstring>
Instantiates a `ORTDiffusionPipeline` with ONNX Runtime sessions from a pretrained pipeline repo or directory.
This method can be used to export a diffusion pipeline to ONNX and/or load a pipeline with ONNX Runtime from a repo or a directory.








</div></div>

### ORTPipelineForText2Image[[optimum.onnxruntime.ORTPipelineForText2Image]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTPipelineForText2Image</name><anchor>optimum.onnxruntime.ORTPipelineForText2Image</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1223</source><parameters>[]</parameters></docstring>
`ORTPipelineForText2Image` is a generic pipeline class that instantiates a text-to-image pipeline class.
The specific underlying pipeline class is automatically selected from either the
`~ORTPipelineForText2Image.from_pretrained` or `~ORTPipelineForText2Image.from_pipe` methods.

This class cannot be instantiated using `__init__()` (throws an error).

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
diffusion pipeline's components.
- **auto_model_class** (`Type[DiffusionPipeline]`) -- The corresponding/equivalent Diffusers pipeline class.
- **ort_pipelines_mapping** (`OrderedDict`) -- The mapping between the model names/architectures and the
corresponding ORT pipeline class.



</div>

### ORTPipelineForImage2Image[[optimum.onnxruntime.ORTPipelineForImage2Image]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTPipelineForImage2Image</name><anchor>optimum.onnxruntime.ORTPipelineForImage2Image</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1245</source><parameters>[]</parameters></docstring>
`ORTPipelineForImage2Image` is a generic pipeline class that instantiates an image-to-image pipeline class. The
specific underlying pipeline class is automatically selected from either the
`~ORTPipelineForImage2Image.from_pretrained` or `~ORTPipelineForImage2Image.from_pipe` methods.

This class cannot be instantiated using `__init__()` (throws an error).

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.
- **auto_model_class** (`Type[DiffusionPipeline]`) -- The corresponding/equivalent Diffusers pipeline class.
- **ort_pipelines_mapping** (`OrderedDict`) -- The mapping between the model names/architectures and the
  corresponding ORT pipeline class.


</div>

### ORTPipelineForInpainting[[optimum.onnxruntime.ORTPipelineForInpainting]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTPipelineForInpainting</name><anchor>optimum.onnxruntime.ORTPipelineForInpainting</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1266</source><parameters>[]</parameters></docstring>
`ORTPipelineForInpainting` is a generic pipeline class that instantiates an inpainting pipeline class. The
specific underlying pipeline class is automatically selected from either the
`~ORTPipelineForInpainting.from_pretrained` or `~ORTPipelineForInpainting.from_pipe` methods.

This class cannot be instantiated using `__init__()` (throws an error).

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.
- **auto_model_class** (`Type[DiffusionPipeline]`) -- The corresponding/equivalent Diffusers pipeline class.
- **ort_pipelines_mapping** (`OrderedDict`) -- The mapping between the model names/architectures and the
  corresponding ORT pipeline class.



</div>

## Supported ORT Diffusion Pipeline classes

The following classes are available for instantiating a diffusion pipeline class for a specific task and architecture.

### ORTStableDiffusionPipeline[[optimum.onnxruntime.ORTStableDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L901</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered pipeline for text-to-image generation using Stable Diffusion, corresponding to [StableDiffusionPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline).

This pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with ONNX Runtime.
It can be loaded from a pretrained pipeline with the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L778</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], 
diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what not to include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the `~schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*, defaults to `["latents"]`) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed in the `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusionPipeline

>>> pipe = ORTStableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>
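The `guidance_scale` parameter above controls classifier-free guidance, which combines an unconditional and a text-conditioned noise prediction at each denoising step. A minimal sketch of that combination on scalar values (illustrative only; the actual computation happens on tensors inside the denoising loop, but diffusers pipelines use this same formula):

```python
def apply_guidance(noise_uncond: float, noise_text: float, guidance_scale: float) -> float:
    """Classifier-free guidance on a single noise value (tensors in practice)."""
    # Guidance is effectively disabled when guidance_scale <= 1,
    # matching the "enabled when guidance_scale > 1" note above.
    if guidance_scale <= 1.0:
        return noise_text
    # Extrapolate from the unconditional prediction toward the text-conditioned one.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

print(apply_guidance(0.0, 1.0, 7.5))  # 7.5
```

A higher `guidance_scale` pushes the prediction further toward the prompt, which is why large values can trade image quality for prompt adherence.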







</div></div>
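The `callback_on_step_end` parameter documented above is invoked as `callback_on_step_end(pipeline, step, timestep, callback_kwargs)` at the end of every denoising step. A hypothetical sketch of such a callback (the function name and logging are illustrative, not part of the API):

```python
def log_progress(pipeline, step, timestep, callback_kwargs):
    # callback_kwargs contains the tensors requested via
    # callback_on_step_end_tensor_inputs (only "latents" by default).
    latents = callback_kwargs["latents"]
    print(f"step {step} (timestep {timestep}): latents shape {tuple(latents.shape)}")
    # Return the dict so the pipeline can pick up any tensors you modified.
    return callback_kwargs

# Passed to the pipeline as, e.g.:
# image = pipe(prompt, callback_on_step_end=log_progress).images[0]
```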

### ORTStableDiffusionImg2ImgPipeline[[optimum.onnxruntime.ORTStableDiffusionImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionImg2ImgPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L912</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered pipeline for text-guided image-to-image generation using Stable Diffusion, corresponding to [StableDiffusionImg2ImgPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/img2img#diffusers.StableDiffusionImg2ImgPipeline).

This pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with ONNX Runtime.
It can be loaded from a pretrained pipeline with the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L858</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], 
NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is `[0, 1]`. If it's a tensor or a list of
  tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what not to include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the `~schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*, defaults to `["latents"]`) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed in the `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from io import BytesIO

>>> from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline

>>> device = "cuda"
>>> model_id_or_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
>>> pipe = ORTStableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))

>>> prompt = "A fantasy landscape, trending on artstation"

>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
>>> images[0].save("fantasy_landscape.png")
```

</ExampleCodeBlock>
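As the `strength` and `num_inference_steps` descriptions above indicate, `strength` modulates how much of the denoising schedule actually runs: lower values skip the earliest, noisiest steps so more of the input image survives. A sketch of that relationship (an assumption based on the timestep-truncation scheme used by the diffusers img2img pipelines, which this class mirrors):

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    """Number of denoising steps img2img actually runs for a given strength."""
    # strength scales how far back into the noise schedule denoising starts:
    # strength = 1.0 starts from maximum noise (reference image essentially ignored),
    # strength = 0.0 runs no denoising at all.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

print(effective_steps(50, 0.75))  # 37 of the 50 scheduled steps run
```

This is why, in the example above, `strength=0.75` preserves the broad layout of the sketch while still repainting most of its detail.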







</div></div>

### ORTStableDiffusionInpaintPipeline[[optimum.onnxruntime.ORTStableDiffusionInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionInpaintPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L923</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered pipeline for text-guided image inpainting using Stable Diffusion, corresponding to [StableDiffusionInpaintPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/inpaint#diffusers.StableDiffusionInpaintPipeline).

This pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with ONNX Runtime.
It can be loaded from a pretrained pipeline with the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py#L880</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": Tensor = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, 
{"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be inpainted (the parts of the image to
  be masked out with `mask_image` and repainted according to `prompt`). For both numpy arrays and pytorch
  tensors, the expected value range is `[0, 1]`. If it's a tensor or a list of tensors, the
  expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a list of arrays, the
  expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image latents as `image`, but
  if passing latents directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`,
  `(B, H, W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`,
  `(H, W, 1)`, or `(H, W)`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of the margin in the crop applied to the image and mask. If `None`, no crop is applied to
  the image and mask_image. If `padding_mask_crop` is not `None`, the pipeline first finds a rectangular
  region with the same aspect ratio as the image that contains the entire masked area, and then expands
  that region based on `padding_mask_crop`. The image and mask_image are then cropped to the expanded
  region before being resized to the original image size for inpainting. This is useful when the masked
  area is small while the image is large and contains information irrelevant to inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what not to include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the `~schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters, with each element a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionInpaintPipeline.__call__.example">

Examples:

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from optimum.onnxruntime import ORTStableDiffusionInpaintPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

>>> init_image = download_image(img_url).resize((512, 512))
>>> mask_image = download_image(mask_url).resize((512, 512))

>>> pipe = ORTStableDiffusionInpaintPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-inpainting", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

</ExampleCodeBlock>
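The `guidance_scale` parameter above is the classifier-free guidance weight: at each denoising step the pipeline produces an unconditional and a text-conditioned noise prediction and combines them. A minimal sketch of that combination (illustrative, not the pipeline's actual code; the function name is made up):

```python
import numpy as np


def apply_cfg(noise_uncond: np.ndarray, noise_text: np.ndarray, guidance_scale: float) -> np.ndarray:
    # Classifier-free guidance: move the unconditional prediction
    # toward the text-conditioned one, scaled by guidance_scale.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)
```

With `guidance_scale <= 1` the text-conditioned branch no longer dominates, which is why `negative_prompt` is documented as ignored in that regime.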






</div></div>

### ORTStableDiffusionXLPipeline[[optimum.onnxruntime.ORTStableDiffusionXLPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionXLPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L934</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-to-image generation using Stable Diffusion XL, corresponding to [StableDiffusionXLPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline).

This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L836</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", 
"val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to `schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters, with each element a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891), where it is defined as `φ` in equation 16. The
  guidance rescale factor should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should usually be
  the same as `target_size`. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionXLPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusionXLPipeline

>>> pipe = ORTStableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>
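The `guidance_rescale` factor documented above rescales the guided noise prediction so its standard deviation matches that of the text-conditioned prediction, then blends the two by `φ`. A rough sketch of equation 16 from the paper cited there (illustrative, not the pipeline's actual implementation):

```python
import numpy as np


def rescale_noise_cfg(noise_cfg: np.ndarray, noise_pred_text: np.ndarray,
                      guidance_rescale: float = 0.0) -> np.ndarray:
    # Rescale so the guided prediction's std matches the text branch's std,
    # then interpolate between rescaled and original by guidance_rescale (phi).
    std_text = noise_pred_text.std()
    std_cfg = noise_cfg.std()
    rescaled = noise_cfg * (std_text / std_cfg)
    return guidance_rescale * rescaled + (1.0 - guidance_rescale) * noise_cfg
```

With the default `guidance_rescale=0.0` the prediction is returned unchanged; at `1.0` its standard deviation is forced to match the text-conditioned branch.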







</div></div>

### ORTStableDiffusionXLImg2ImgPipeline[[optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L957</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-guided image-to-image generation using Stable Diffusion XL, corresponding to [StableDiffusionXLImg2ImgPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline).

This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.
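This pipeline's `__call__` takes a `strength` argument that controls how much of the denoising schedule is actually run: the earliest `(1 - strength)` fraction of the `num_inference_steps` steps is skipped, so the input image is only partially re-noised. A sketch of that mapping (mirroring the usual diffusers `get_timesteps` arithmetic; the function name is illustrative):

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    # strength=1.0 fully re-noises the image and runs every step;
    # smaller strength skips the earliest (noisiest) part of the schedule.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start
```

With the default `strength=0.3` and `num_inference_steps=50`, only 15 denoising steps run.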



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L986</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], 
typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`torch.Tensor` or `PIL.Image.Image` or `np.ndarray` or `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[np.ndarray]`) --
  The image(s) to modify with the pipeline.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. Note that when
  `denoising_start` is specified, the value of `strength` is ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to `schedulers.DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). `guidance_rescale` is defined as `φ` in equation 16
  of the same paper. The guidance rescale factor should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the
  same as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusionXLImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"

>>> init_image = load_image(url).convert("RGB")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, image=init_image).images[0]
```

</ExampleCodeBlock>
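As described for `denoising_end` and `denoising_start`, a base/refiner split at `0.8` divides the schedule roughly as sketched below. This is a simplified illustration of the step arithmetic only; the actual schedulers compare discrete timestep values rather than step indices.

```python
# Rough step split for a "Mixture of Denoisers" setup: the base pipeline
# stops at denoising_end and the refiner resumes at denoising_start.
num_inference_steps = 50
split = 0.8  # denoising_end for the base, denoising_start for the refiner

base_steps = int(num_inference_steps * split)     # steps run by the base pipeline
refiner_steps = num_inference_steps - base_steps  # steps left for the refiner

print(base_steps, refiner_steps)  # 40 10
```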







</div></div>

### ORTStableDiffusionXLInpaintPipeline[[optimum.onnxruntime.ORTStableDiffusionXLInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionXLInpaintPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLInpaintPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L995</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-guided image inpainting using Stable Diffusion XL, corresponding to [StableDiffusionXLInpaintPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline).

This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py#L1091</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": Tensor = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.9999"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", 
"val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
  be masked out with `mask_image` and repainted according to `prompt`.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all of the masked area, and then expand that
  region based on `padding_mask_crop`. The image and mask_image will then be cropped based on the
  expanded region before resizing to the original image size for inpainting. This is useful when the
  masked area is small while the image is large and contains information irrelevant for inpainting, such
  as background.
- **strength** (`float`, *optional*, defaults to 0.9999) --
  Conceptually, indicates how much to transform the masked portion of the reference `image`. Must be
  between 0 and 1. `image` will be used as a starting point, adding more noise to it the larger the
  `strength`. The number of denoising steps depends on the amount of noise initially added. When
  `strength` is 1, added noise will be maximum and the denoising process will run for the full number of
  iterations specified in `num_inference_steps`. A value of 1, therefore, essentially ignores the masked
  portion of the reference `image`. Note that in the case of `denoising_start` being declared as an
  integer, the value of `strength` will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The `denoising_end` parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to `schedulers.DDIMScheduler`, will be ignored for others.
- **generator** (`torch.Generator`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the
  same as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.
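The relationship between `strength` and the number of denoising steps actually run can be sketched as follows. This is a simplified view of the scheduler bookkeeping (and `denoising_start`, when set, overrides `strength` entirely):

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    # The pipeline noises the input image up to a timestep proportional to
    # strength, then denoises only from that point onward.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

print(effective_steps(50, 0.9999))  # 49 -- nearly the full schedule
print(effective_steps(50, 0.5))     # 25 -- half of the steps
```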



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionXLInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusionXLInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = ORTStableDiffusionXLInpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     torch_dtype=torch.float16,
...     variant="fp16",
...     use_safetensors=True,
... )
>>> pipe.to("cuda")

>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

>>> init_image = load_image(img_url).convert("RGB")
>>> mask_image = load_image(mask_url).convert("RGB")

>>> prompt = "A majestic tiger sitting on a bench"
>>> image = pipe(
...     prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80
... ).images[0]
```

</ExampleCodeBlock>
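The `padding_mask_crop` behavior described above can be illustrated with a minimal bounding-box expansion. The `expand_bbox` helper here is hypothetical and illustrative only: it expands the masked region by the margin and clamps to the image bounds, but skips the aspect-ratio matching the actual implementation performs.

```python
def expand_bbox(bbox, margin, img_w, img_h):
    """Expand a (left, top, right, bottom) box by `margin` pixels,
    clamped to the image bounds."""
    left, top, right, bottom = bbox
    return (
        max(0, left - margin),
        max(0, top - margin),
        min(img_w, right + margin),
        min(img_h, bottom + margin),
    )

# Masked region at (20, 30)-(40, 50) in a 64x64 image, padding_mask_crop=8:
print(expand_bbox((20, 30, 40, 50), margin=8, img_w=64, img_h=64))  # (12, 22, 48, 58)
```

The image and mask are then cropped to this expanded region, inpainted, and pasted back at the original resolution.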







</div></div>

### ORTStableDiffusionXLImg2ImgPipeline[[optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L957</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-guided image-to-image generation using Stable Diffusion XL, corresponding to [StableDiffusionXLImg2ImgPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline).

This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L986</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], 
typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`torch.Tensor` or `PIL.Image.Image` or `np.ndarray` or `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[np.ndarray]`) --
  The image(s) to modify with the pipeline.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. Note that in the case of
  `denoising_start` being declared as an integer, the value of `strength` will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to `schedulers.DDIMScheduler`, will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891), where it is defined as `φ` in equation 16. The
  guidance rescale factor should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusionXLImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusionXLImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"

>>> init_image = load_image(url).convert("RGB")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, image=init_image).images[0]
```

</ExampleCodeBlock>
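As the `denoising_start`/`denoising_end` descriptions above note, this pipeline is often the second stage of a "Mixture of Denoisers" setup: a base pipeline stops early (e.g. `denoising_end=0.8` with `output_type="latent"`) and this refiner resumes from the same fraction (`denoising_start=0.8`). A rough sketch of the step accounting (an approximation; the real cutoff is computed on discrete scheduler timesteps, and `split_steps` is our illustrative helper, not a library function):

```py
# Approximate how a base/refiner pair shares the schedule when the base uses
# denoising_end=handoff and the refiner uses denoising_start=handoff.
def split_steps(num_inference_steps: int, handoff: float) -> tuple:
    base_steps = round(num_inference_steps * handoff)
    refiner_steps = num_inference_steps - base_steps
    return base_steps, refiner_steps

# With 50 steps and a 0.8 handoff, the base runs roughly the first 40 steps
# and the refiner denoises only the final ~10 (the "final 20%" in the
# parameter description above).
print(split_steps(50, 0.8))  # (40, 10)
```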







</div></div>

### ORTLatentConsistencyModelPipeline[[optimum.onnxruntime.ORTLatentConsistencyModelPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTLatentConsistencyModelPipeline</name><anchor>optimum.onnxruntime.ORTLatentConsistencyModelPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1033</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-to-image generation using a Latent Consistency Model, corresponding to [LatentConsistencyModelPipeline](https://huggingface.co/docs/diffusers/api/pipelines/latent_consistency_models#diffusers.LatentConsistencyModelPipeline).

This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTLatentConsistencyModelPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py#L640</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 4"}, {"name": "original_inference_steps", "val": ": int = None"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 8.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 4) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **original_inference_steps** (`int`, *optional*) --
  The original number of inference steps used to generate a linearly-spaced timestep schedule, from which
  `num_inference_steps` evenly spaced timesteps are drawn as the final timestep schedule, following the
  Skipping-Step method in the paper (see Section 4.3). If not set, this defaults to the scheduler's
  `original_inference_steps` attribute.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending
  order.
- **guidance_scale** (`float`, *optional*, defaults to 8.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
  Note that the original latent consistency models paper uses a different CFG formulation where the
  guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when `guidance_scale >
  0`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTLatentConsistencyModelPipeline.__call__.example">

Examples:
```py
>>> from optimum.onnxruntime import ORTDiffusionPipeline
>>> import torch

>>> pipe = ORTDiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32)

>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

>>> # Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
>>> num_inference_steps = 4
>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images
>>> images[0].save("image.png")
```

</ExampleCodeBlock>
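The `original_inference_steps`/`timesteps` descriptions above can be made concrete: the scheduler first builds a linearly spaced schedule of `original_inference_steps` timesteps over the training range, then keeps `num_inference_steps` evenly spaced entries from it, newest (most noisy) first. A simplified sketch of that Skipping-Step selection (an approximation of diffusers' `LCMScheduler`, not its exact implementation; `lcm_timesteps` is our illustrative name):

```py
# Simplified sketch of LCM timestep selection: linearly spaced "original"
# schedule, then every k-th entry in descending order (Skipping-Step).
def lcm_timesteps(num_inference_steps: int = 4,
                  original_inference_steps: int = 50,
                  num_train_timesteps: int = 1000) -> list:
    step = num_train_timesteps // original_inference_steps
    # Linearly spaced original schedule: 19, 39, ..., 999 for the defaults.
    origin = [step * (i + 1) - 1 for i in range(original_inference_steps)]
    # Keep num_inference_steps evenly spaced entries, most noisy first.
    k = original_inference_steps // num_inference_steps
    return origin[::-1][::k][:num_inference_steps]

print(lcm_timesteps())  # [999, 759, 519, 279]: 4 entries, descending
```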







</div></div>

### ORTLatentConsistencyModelImg2ImgPipeline[[optimum.onnxruntime.ORTLatentConsistencyModelImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTLatentConsistencyModelImg2ImgPipeline</name><anchor>optimum.onnxruntime.ORTLatentConsistencyModelImg2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1044</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-guided image-to-image generation using a Latent Consistency Model, corresponding to [LatentConsistencyModelImg2ImgPipeline](https://huggingface.co/docs/diffusers/api/pipelines/latent_consistency_models#diffusers.LatentConsistencyModelImg2ImgPipeline).

This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTLatentConsistencyModelImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py#L709</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 4"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "original_inference_steps", "val": ": int = None"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 8.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": 
""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 4) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **original_inference_steps** (`int`, *optional*) --
  The original number of inference steps used to generate a linearly-spaced timestep schedule, from which
  `num_inference_steps` evenly spaced timesteps are drawn as the final timestep schedule,
  following the Skipping-Step method in the paper (see Section 4.3). If not set, this defaults to the
  scheduler's `original_inference_steps` attribute.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending
  order.
- **guidance_scale** (`float`, *optional*, defaults to 8.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
  Note that the original latent consistency models paper uses a different CFG formulation where the
  guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when `guidance_scale >
  0`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTLatentConsistencyModelImg2ImgPipeline.__call__.example">

Examples:
```py
>>> from optimum.onnxruntime import ORTPipelineForImage2Image
>>> import torch
>>> import PIL

>>> pipe = ORTPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32)

>>> prompt = "High altitude snowy mountains"
>>> image = PIL.Image.open("./snowy_mountains.png")

>>> # Can be set to 1-50 steps. LCM supports fast inference with as few as 4 steps. Recommended: 1-8 steps.
>>> num_inference_steps = 4
>>> images = pipe(
...     prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0
... ).images

>>> images[0].save("image.png")
```

</ExampleCodeBlock>
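The `callback_on_step_end` hook described above can be sketched as follows. This is an illustrative stand-in (the function name `log_step` and the placeholder list are hypothetical, not part of the Optimum API): the pipeline invokes the hook after every denoising step with the step index, the timestep, and a dict holding the tensors named in `callback_on_step_end_tensor_inputs`, and expects that dict back.

```python
def log_step(pipe, step, timestep, callback_kwargs):
    # "latents" is available because it is listed in callback_on_step_end_tensor_inputs
    latents = callback_kwargs["latents"]
    print(f"step {step} at timestep {timestep}: got {type(latents).__name__}")
    # The hook must return the (possibly modified) tensor dict.
    return callback_kwargs

# Stand-in invocation mimicking what the pipeline does internally;
# a real run would pass torch tensors instead of this placeholder list.
result = log_step(None, 0, 999, {"latents": [0.0, 0.0]})
```

In an actual call you would pass `callback_on_step_end=log_step` (and, if you need more tensors, an extended `callback_on_step_end_tensor_inputs` list) to the pipeline.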








</div></div>

### ORTStableDiffusion3Pipeline[[optimum.onnxruntime.ORTStableDiffusion3Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusion3Pipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusion3Pipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1067</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-to-image generation using Stable Diffusion 3 and corresponding to [StableDiffusion3Pipeline](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline).
This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusion3Pipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L772</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] 
= None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "skip_guidance_layers", "val": ": typing.List[int] = None"}, {"name": "skip_layer_guidance_scale", "val": ": float = 2.8"}, {"name": "skip_layer_guidance_stop", "val": ": float = 0.2"}, {"name": "skip_layer_guidance_start", "val": ": float = 0.01"}, {"name": "mu", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
  emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
  `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead of
  a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 256) -- Maximum sequence length to use with the `prompt`.
- **skip_guidance_layers** (`List[int]`, *optional*) --
  A list of integers that specify layers to skip during guidance. If not provided, all layers will be
  used for guidance. If provided, the guidance will only be applied to the layers specified in the list.
  Recommended value by Stability AI for Stable Diffusion 3.5 Medium is [7, 8, 9].
- **skip_layer_guidance_scale** (`float`, *optional*) -- The scale of the guidance for the layers specified in
  `skip_guidance_layers`. The guidance will be applied to the layers specified in `skip_guidance_layers`
  with a scale of `skip_layer_guidance_scale`, and to the rest of the layers with a scale of `1`.
- **skip_layer_guidance_stop** (`float`, *optional*) -- The fraction of the denoising process at which the
  guidance for the layers specified in `skip_guidance_layers` stops. Recommended value by
  Stability AI for Stable Diffusion 3.5 Medium is 0.2.
- **skip_layer_guidance_start** (`float`, *optional*) -- The fraction of the denoising process at which the
  guidance for the layers specified in `skip_guidance_layers` starts. Recommended value by
  Stability AI for Stable Diffusion 3.5 Medium is 0.01.
- **mu** (`float`, *optional*) -- `mu` value used for `dynamic_shifting`.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusion3Pipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusion3Pipeline

>>> pipe = ORTStableDiffusion3Pipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> image = pipe(prompt).images[0]
>>> image.save("sd3.png")
```

</ExampleCodeBlock>
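The interaction of `skip_layer_guidance_start` and `skip_layer_guidance_stop` can be pictured as a fractional window over the denoising schedule. The sketch below is an illustrative approximation of that windowing (the helper name is hypothetical, and the exact boundary conditions in the underlying diffusers implementation may differ):

```python
def skip_layer_guidance_active(step, num_inference_steps, start=0.01, stop=0.2):
    """Approximation: skip-layer guidance applies only while the current
    step falls inside the [start, stop) fraction of the schedule."""
    fraction = step / num_inference_steps
    return start <= fraction < stop

# With the defaults (start=0.01, stop=0.2) and 28 steps, only the
# early steps of the schedule fall inside the guidance window.
active_steps = [s for s in range(28) if skip_layer_guidance_active(s, 28)]
```

This is why the recommended 0.01/0.2 values for Stable Diffusion 3.5 Medium confine the skipped-layer guidance to the first fifth of the denoising process.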







</div></div>

### ORTStableDiffusion3Img2ImgPipeline[[optimum.onnxruntime.ORTStableDiffusion3Img2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusion3Img2ImgPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusion3Img2ImgPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1075</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-guided image-to-image generation using Stable Diffusion 3 and corresponding to [StableDiffusion3Img2ImgPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Img2ImgPipeline).
This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusion3Img2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_img2img.py#L829</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": 
typing.Optional[str] = 'pil'"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "mu", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
  emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
  `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead of
  a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 256) -- Maximum sequence length to use with the `prompt`.
- **mu** (`float`, *optional*) -- `mu` value used for `dynamic_shifting`.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusion3Img2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch

>>> from optimum.onnxruntime import ORTPipelineForImage2Image
>>> from diffusers.utils import load_image

>>> device = "cuda"
>>> model_id_or_path = "stabilityai/stable-diffusion-3-medium-diffusers"
>>> pipe = ORTPipelineForImage2Image.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> init_image = load_image(url).resize((1024, 1024))

>>> prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

>>> images = pipe(prompt=prompt, image=init_image, strength=0.95, guidance_scale=7.5).images[0]
```

</ExampleCodeBlock>







</div></div>

### ORTStableDiffusion3InpaintPipeline[[optimum.onnxruntime.ORTStableDiffusion3InpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTStableDiffusion3InpaintPipeline</name><anchor>optimum.onnxruntime.ORTStableDiffusion3InpaintPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1094</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-guided image inpainting using Stable Diffusion 3 and corresponding to [StableDiffusion3InpaintPipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3InpaintPipeline).
This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTStableDiffusion3InpaintPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3_inpaint.py#L921</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": int = None"}, {"name": "width", "val": ": int = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": 
typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "mu", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`, `(B,
  H, W)`, `(1, H, W)` or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W,
  1)`, or `(H, W)`.
- **masked_image_latents** (`torch.Tensor`, `List[torch.Tensor]`) --
  `Tensor` representing an image batch to mask `image`, generated by the VAE. If not provided, the mask
  latents tensor will be generated from `mask_image`.
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all masked areas, and then expand that area based
  on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area before
  resizing to the original image size for inpainting. This is useful when the masked area is small while
  the image is large and contains information irrelevant for inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
  emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
  `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead of
  a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 256) -- Maximum sequence length to use with the `prompt`.
- **mu** (`float`, *optional*) -- `mu` value used for `dynamic_shifting`.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTStableDiffusion3InpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTStableDiffusion3InpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = ORTStableDiffusion3InpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> source = load_image(img_url)
>>> mask = load_image(mask_url)
>>> image = pipe(prompt=prompt, image=source, mask_image=mask).images[0]
>>> image.save("sd3_inpainting.png")
```

</ExampleCodeBlock>







</div></div>

### ORTFluxPipeline[[optimum.onnxruntime.ORTFluxPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTFluxPipeline</name><anchor>optimum.onnxruntime.ORTFluxPipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_diffusion.py#L1102</source><parameters>[{"name": "unet_session", "val": ": InferenceSession | None = None"}, {"name": "transformer_session", "val": ": InferenceSession | None = None"}, {"name": "vae_decoder_session", "val": ": InferenceSession | None = None"}, {"name": "vae_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_2_session", "val": ": InferenceSession | None = None"}, {"name": "text_encoder_3_session", "val": ": InferenceSession | None = None"}, {"name": "scheduler", "val": ": SchedulerMixin | None = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer | None = None"}, {"name": "tokenizer_3", "val": ": CLIPTokenizer | None = None"}, {"name": "feature_extractor", "val": ": CLIPFeatureExtractor | None = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "add_watermarker", "val": ": bool | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Runtime-powered Pipeline for text-to-image generation using Flux and corresponding to [FluxPipeline](https://huggingface.co/docs/diffusers/api/pipelines/flux#diffusers.FluxPipeline).
This Pipeline inherits from `ORTDiffusionPipeline` and is used to run inference with the ONNX Runtime.
The pipeline can be loaded from a pretrained pipeline using the generic `ORTDiffusionPipeline.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>optimum.onnxruntime.ORTFluxPipeline.__call__</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/diffusers/pipelines/flux/pipeline_flux.py#L627</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": 
"negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **true_cfg_scale** (`float`, *optional*, defaults to 1.0) --
  True classifier-free guidance (guidance scale) is enabled when `true_cfg_scale` > 1 and
  `negative_prompt` is provided.
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  Embedded guidance scale is enabled by setting `guidance_scale` > 1. Higher `guidance_scale` encourages
  a model to generate images more aligned with `prompt` at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale` > 1. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `negative_ip_adapter_image` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `diffusers.pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`diffusers.pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`diffusers.pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTFluxPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from optimum.onnxruntime import ORTFluxPipeline

>>> pipe = ORTFluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell")
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
>>> image.save("flux.png")
```

</ExampleCodeBlock>







</div></div>

### Configuration
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/configuration.md

# Configuration

Configuration classes specify how a given task should be performed. The ONNX Runtime package supports two tasks:

1. Optimization: Performed by the [ORTOptimizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/optimization#optimum.onnxruntime.ORTOptimizer), this task can be tweaked using an [OptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.OptimizationConfig).

2. Quantization: Performed by the [ORTQuantizer](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/quantization#optimum.onnxruntime.ORTQuantizer), quantization can be set using a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig). A calibration step is required in some cases (post training static quantization), which can be specified using a [CalibrationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.CalibrationConfig).

## OptimizationConfig[[optimum.onnxruntime.OptimizationConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.OptimizationConfig</name><anchor>optimum.onnxruntime.OptimizationConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L709</source><parameters>[{"name": "optimization_level", "val": ": int = 1"}, {"name": "enable_transformers_specific_optimizations", "val": ": bool = True"}, {"name": "optimize_for_gpu", "val": ": bool = False"}, {"name": "fp16", "val": ": bool = False"}, {"name": "disable_gelu_fusion", "val": ": bool = False"}, {"name": "disable_attention_fusion", "val": ": bool = False"}, {"name": "disable_bias_gelu_fusion", "val": ": bool = False"}, {"name": "disable_layer_norm_fusion", "val": ": bool = False"}, {"name": "disable_rotary_embeddings", "val": ": bool = False"}, {"name": "disable_skip_layer_norm_fusion", "val": ": bool = False"}, {"name": "disable_bias_skip_layer_norm_fusion", "val": ": bool = False"}, {"name": "disable_skip_group_norm_fusion", "val": ": bool = False"}, {"name": "disable_bias_splitgelu_fusion", "val": ": bool = False"}, {"name": "disable_bias_add_fusion", "val": ": bool = False"}, {"name": "disable_group_norm_fusion", "val": ": bool = True"}, {"name": "disable_embed_layer_norm_fusion", "val": ": bool = True"}, {"name": "enable_gemm_fast_gelu_fusion", "val": ": bool = False"}, {"name": "use_mask_index", "val": ": bool = False"}, {"name": "disable_packed_kv", "val": ": bool = True"}, {"name": "no_attention_mask", "val": ": bool = False"}, {"name": "use_raw_attention_mask", "val": ": bool = False"}, {"name": "disable_shape_inference", "val": ": bool = False"}, {"name": "use_multi_head_attention", "val": ": bool = False"}, {"name": "enable_gelu_approximation", "val": ": bool = False"}, {"name": "use_group_norm_channels_first", "val": ": bool = False"}, {"name": "disable_packed_qkv", "val": ": bool = False"}, {"name": "disable_nhwc_conv", "val": ": bool = False"}]</parameters><paramsdesc>- **optimization_level** (`int`, defaults to 1) 
--
  Optimization level performed by ONNX Runtime on the loaded graph.
  Supported optimization levels are 0, 1, 2 and 99.
  - 0: disables all optimizations
  - 1: enables basic optimizations
  - 2: enables basic and extended optimizations, including complex node fusions applied to the nodes
  assigned to the CPU or CUDA execution provider, making the resulting optimized graph hardware dependent
  - 99: enables all available optimizations, including layout optimizations
- **optimize_for_gpu** (`bool`, defaults to `False`) --
  Whether to optimize the model for GPU inference.
  When `optimization_level` > 1, the optimized graph might contain operators that run only on GPU or only on CPU.
- **fp16** (`bool`, defaults to `False`) --
  Whether all weights and nodes should be converted from float32 to float16.
- **enable_transformers_specific_optimizations** (`bool`, defaults to `True`) --
  Whether to apply `transformers`-specific optimizations on top of ONNX Runtime's general optimizations.
- **disable_gelu_fusion** (`bool`, defaults to `False`) --
  Whether to disable the Gelu fusion.
- **disable_layer_norm_fusion** (`bool`, defaults to `False`) --
  Whether to disable Layer Normalization fusion.
- **disable_attention_fusion** (`bool`, defaults to `False`) --
  Whether to disable Attention fusion.
- **disable_skip_layer_norm_fusion** (`bool`, defaults to `False`) --
  Whether to disable SkipLayerNormalization fusion.
- **disable_bias_skip_layer_norm_fusion** (`bool`, defaults to `False`) --
  Whether to disable Add Bias and SkipLayerNormalization fusion.
- **disable_bias_gelu_fusion** (`bool`, defaults to `False`) --
  Whether to disable Add Bias and Gelu / FastGelu fusion.
- **disable_embed_layer_norm_fusion** (`bool`, defaults to `True`) --
  Whether to disable EmbedLayerNormalization fusion.
  The default value is set to `True` since this fusion is incompatible with ONNX Runtime quantization.
- **enable_gelu_approximation** (`bool`, defaults to `False`) --
  Whether to enable Gelu / BiasGelu to FastGelu conversion.
  The default value is set to `False` since this approximation might slightly impact the model's accuracy.
- **use_mask_index** (`bool`, defaults to `False`) --
  Whether to use mask index instead of raw attention mask in the attention operator.
- **no_attention_mask** (`bool`, defaults to `False`) --
  Whether to avoid using attention masks. Only works for the bert model type.
- **disable_shape_inference** (`bool`, defaults to `False`) --
  Whether to disable symbolic shape inference.
  The default value is set to `False` but symbolic shape inference might cause issues sometimes.
- **use_multi_head_attention** (`bool`, defaults to `False`) --
  Experimental argument. Uses MultiHeadAttention instead of the Attention operator, which has merged weights for the Q/K/V projections
  and might be faster in some cases since the three MatMuls are merged into one.
  Note that MultiHeadAttention might be slower than Attention when Q/K/V are not packed.
- **enable_gemm_fast_gelu_fusion** (`bool`, defaults to `False`) --
  Whether to enable GemmFastGelu fusion.
- **use_raw_attention_mask** (`bool`, defaults to `False`) --
  Whether to use the raw attention mask. Use this option if your inputs are not right-side padded. This might deactivate fused attention and degrade performance.
- **disable_group_norm_fusion** (`bool`, defaults to `True`) --
  Do not fuse GroupNorm. Only works for model_type=unet.
- **disable_packed_kv** (`bool`, defaults to `True`) --
  Do not use packed kv in cross attention. Only works for model_type=unet.
- **disable_rotary_embeddings** (`bool`, defaults to `False`) --
  Whether to disable Rotary Embedding fusion.</paramsdesc><paramgroups>0</paramgroups></docstring>
OptimizationConfig is the configuration class handling all the ONNX Runtime optimization parameters.
There are two stacks of optimizations:
1. The ONNX Runtime general-purpose optimization tool: it can work on any ONNX model.
2. The ONNX Runtime transformers optimization tool: it can only work on a subset of transformers models.
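Both stacks are driven from this single configuration object. As a minimal, hedged sketch (assuming `optimum[onnxruntime]` is installed; `model_dir` is a hypothetical path containing an already-exported ONNX model):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

# Level 2 enables basic and extended general-purpose optimizations;
# enable_transformers_specific_optimizations additionally applies the
# transformers-only fusion passes on top of them.
optimization_config = OptimizationConfig(
    optimization_level=2,
    enable_transformers_specific_optimizations=True,
    optimize_for_gpu=False,
)

model = ORTModelForSequenceClassification.from_pretrained("model_dir")  # hypothetical local export
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(save_dir="optimized_model", optimization_config=optimization_config)
```

This is a configuration sketch, not a definitive recipe; the exact set of applied fusions depends on the model architecture and the execution provider.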




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.AutoOptimizationConfig</name><anchor>optimum.onnxruntime.AutoOptimizationConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L823</source><parameters>[]</parameters></docstring>
Factory to create common `OptimizationConfig` instances.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>O1</name><anchor>optimum.onnxruntime.AutoOptimizationConfig.O1</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L880</source><parameters>[{"name": "for_gpu", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **for_gpu** (`bool`, defaults to `False`) --
  Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model
  will run on. Only relevant when `optimization_level` > 1.
- **kwargs** (`Dict[str, Any]`) --
  Arguments to provide to the `~OptimizationConfig` constructor.</paramsdesc><paramgroups>0</paramgroups><rettype>`OptimizationConfig`</rettype><retdesc>The `OptimizationConfig` corresponding to the O1 optimization level.</retdesc></docstring>
Creates an O1 `~OptimizationConfig`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>O2</name><anchor>optimum.onnxruntime.AutoOptimizationConfig.O2</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L896</source><parameters>[{"name": "for_gpu", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **for_gpu** (`bool`, defaults to `False`) --
  Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model
  will run on. Only relevant when `optimization_level` > 1.
- **kwargs** (`Dict[str, Any]`) --
  Arguments to provide to the `~OptimizationConfig` constructor.</paramsdesc><paramgroups>0</paramgroups><rettype>`OptimizationConfig`</rettype><retdesc>The `OptimizationConfig` corresponding to the O2 optimization level.</retdesc></docstring>
Creates an O2 `~OptimizationConfig`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>O3</name><anchor>optimum.onnxruntime.AutoOptimizationConfig.O3</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L912</source><parameters>[{"name": "for_gpu", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **for_gpu** (`bool`, defaults to `False`) --
  Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model
  will run on. Only relevant when `optimization_level` > 1.
- **kwargs** (`Dict[str, Any]`) --
  Arguments to provide to the `~OptimizationConfig` constructor.</paramsdesc><paramgroups>0</paramgroups><rettype>`OptimizationConfig`</rettype><retdesc>The `OptimizationConfig` corresponding to the O3 optimization level.</retdesc></docstring>
Creates an O3 `~OptimizationConfig`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>O4</name><anchor>optimum.onnxruntime.AutoOptimizationConfig.O4</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L928</source><parameters>[{"name": "for_gpu", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **for_gpu** (`bool`, defaults to `True`) --
  Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model
  will run on. Only relevant when `optimization_level` > 1.
- **kwargs** (`Dict[str, Any]`) --
  Arguments to provide to the `~OptimizationConfig` constructor.</paramsdesc><paramgroups>0</paramgroups><rettype>`OptimizationConfig`</rettype><retdesc>The `OptimizationConfig` corresponding to the O4 optimization level.</retdesc></docstring>
Creates an O4 `~OptimizationConfig`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>with_optimization_level</name><anchor>optimum.onnxruntime.AutoOptimizationConfig.with_optimization_level</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L848</source><parameters>[{"name": "optimization_level", "val": ": str"}, {"name": "for_gpu", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **optimization_level** (`str`) --
  The optimization level; the following values are allowed:
  - O1: Basic general optimizations.
  - O2: Basic and extended general optimizations, plus transformers-specific fusions.
  - O3: Same as O2 with Fast Gelu approximation.
  - O4: Same as O3 with mixed precision (fp16).
- **for_gpu** (`bool`, defaults to `False`) --
  Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model
  will run on. Only relevant when `optimization_level` > 1.
- **kwargs** (`Dict[str, Any]`) --
  Arguments to provide to the `~OptimizationConfig` constructor.</paramsdesc><paramgroups>0</paramgroups><rettype>`OptimizationConfig`</rettype><retdesc>The `OptimizationConfig` corresponding to the requested optimization level.</retdesc></docstring>
Creates an `~OptimizationConfig` with pre-defined arguments according to an optimization level.
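For instance, the same configuration can be requested by level name or via the dedicated factory method; a short configuration fragment (assuming `optimum[onnxruntime]` is installed):

```python
from optimum.onnxruntime import AutoOptimizationConfig

# These two calls are equivalent ways to obtain an O2 configuration.
config_by_name = AutoOptimizationConfig.with_optimization_level("O2", for_gpu=False)
config_direct = AutoOptimizationConfig.O2(for_gpu=False)
```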








</div></div>

## QuantizationConfig[[optimum.onnxruntime.QuantizationConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.QuantizationConfig</name><anchor>optimum.onnxruntime.QuantizationConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L225</source><parameters>[{"name": "is_static", "val": ": bool"}, {"name": "format", "val": ": QuantFormat"}, {"name": "mode", "val": ": QuantizationMode = <QuantizationMode.QLinearOps: 1>"}, {"name": "activations_dtype", "val": ": QuantType = <QuantType.QUInt8: 1>"}, {"name": "activations_symmetric", "val": ": bool = False"}, {"name": "weights_dtype", "val": ": QuantType = <QuantType.QInt8: 0>"}, {"name": "weights_symmetric", "val": ": bool = True"}, {"name": "per_channel", "val": ": bool = False"}, {"name": "reduce_range", "val": ": bool = False"}, {"name": "nodes_to_quantize", "val": ": list[str] = <factory>"}, {"name": "nodes_to_exclude", "val": ": list[str] = <factory>"}, {"name": "operators_to_quantize", "val": ": list[str] = <factory>"}, {"name": "qdq_add_pair_to_weight", "val": ": bool = False"}, {"name": "qdq_dedicated_pair", "val": ": bool = False"}, {"name": "qdq_op_type_per_channel_support_to_axis", "val": ": dict[str, int] = <factory>"}]</parameters><paramsdesc>- **is_static** (`bool`) --
  Whether to apply static quantization or dynamic quantization.
- **format** (`QuantFormat`) --
  Targeted ONNX Runtime quantization representation format.
  For the Operator Oriented (QOperator) format, all the quantized operators have their own ONNX definitions.
  For the Tensor Oriented (QDQ) format, the model is quantized by inserting QuantizeLinear / DeQuantizeLinear
  operators.
- **mode** (`QuantizationMode`, defaults to `QuantizationMode.QLinearOps`) --
  Targeted ONNX Runtime quantization mode, default is QLinearOps to match QDQ format.
  When targeting dynamic quantization mode, the default value is `QuantizationMode.IntegerOps` whereas the
  default value for static quantization mode is `QuantizationMode.QLinearOps`.
- **activations_dtype** (`QuantType`, defaults to `QuantType.QUInt8`) --
  The quantization data types to use for the activations.
- **activations_symmetric** (`bool`, defaults to `False`) --
  Whether to apply symmetric quantization on the activations.
- **weights_dtype** (`QuantType`, defaults to `QuantType.QInt8`) --
  The quantization data types to use for the weights.
- **weights_symmetric** (`bool`, defaults to `True`) --
  Whether to apply symmetric quantization on the weights.
- **per_channel** (`bool`, defaults to `False`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can increase overall
  accuracy while making the quantized model heavier.
- **reduce_range** (`bool`, defaults to `False`) --
  Whether to use reduce-range 7-bit integers instead of 8-bit integers.
- **nodes_to_quantize** (`List[str]`, defaults to `[]`) --
  List of the node names to quantize. If empty (the default), all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`List[str]`, defaults to `[]`) --
  List of the node names to exclude when applying quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`List[str]`) --
  List of the operator types to quantize. Defaults to all quantizable operators for the given quantization mode and format. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.
- **qdq_add_pair_to_weight** (`bool`, defaults to `False`) --
  By default, floating-point weights are quantized and fed only to an inserted DeQuantizeLinear node.
  If set to `True`, the floating-point weights remain and both QuantizeLinear / DeQuantizeLinear nodes
  are inserted.
- **qdq_dedicated_pair** (`bool`, defaults to `False`) --
  When inserting QDQ pairs, multiple nodes can share a single QDQ pair as their input. If `True`, an
  identical, dedicated QDQ pair is created for each node.
- **qdq_op_type_per_channel_support_to_axis** (`Dict[str, int]`) --
  Set the channel axis for a specific operator type. Effective only when per channel quantization is
  supported and `per_channel` is set to True.</paramsdesc><paramgroups>0</paramgroups></docstring>
QuantizationConfig is the configuration class handling all the ONNX Runtime quantization parameters.
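To make the dtype and symmetry knobs concrete, here is a small self-contained sketch (plain Python, independent of ONNX Runtime) of the affine quantization scheme these parameters describe: unsigned asymmetric quantization, as the `activations_dtype` / `activations_symmetric` defaults select, versus signed symmetric quantization, as the `weights_dtype` / `weights_symmetric` defaults select:

```python
def quantize_asymmetric_uint8(values):
    """Asymmetric quantization to uint8: scale and zero-point map [min, max] onto [0, 255]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0
    zero_point = round(-lo / scale)
    return [max(0, min(255, round(v / scale) + zero_point)) for v in values], scale, zero_point

def quantize_symmetric_int8(values):
    """Symmetric quantization to int8: zero-point fixed at 0, range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    return [max(-127, min(127, round(v / scale))) for v in values], scale

acts = [0.0, 0.5, 1.0, 2.0]  # activations: often non-negative, so asymmetric uint8 fits well
q_acts, a_scale, a_zp = quantize_asymmetric_uint8(acts)

weights = [-0.8, -0.1, 0.3, 0.8]  # weights: roughly zero-centered, so symmetric int8 fits well
q_w, w_scale = quantize_symmetric_int8(weights)

# Dequantized values approximate the originals within half a quantization step.
deq_acts = [(q - a_zp) * a_scale for q in q_acts]
assert all(abs(d - v) <= a_scale / 2 + 1e-9 for d, v in zip(deq_acts, acts))
```

This is only an illustration of the scheme; ONNX Runtime computes the actual scales and zero-points itself (from weights, or from calibration data for static activation quantization).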




</div>

## AutoQuantizationConfig[[optimum.onnxruntime.AutoQuantizationConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.AutoQuantizationConfig</name><anchor>optimum.onnxruntime.AutoQuantizationConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L388</source><parameters>[]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>arm64</name><anchor>optimum.onnxruntime.AutoQuantizationConfig.arm64</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L389</source><parameters>[{"name": "is_static", "val": ": bool"}, {"name": "use_symmetric_activations", "val": ": bool = False"}, {"name": "use_symmetric_weights", "val": ": bool = True"}, {"name": "per_channel", "val": ": bool = True"}, {"name": "nodes_to_quantize", "val": ": list[str] | None = None"}, {"name": "nodes_to_exclude", "val": ": list[str] | None = None"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **is_static** (`bool`) --
  Boolean flag to indicate whether we target static or dynamic quantization.
- **use_symmetric_activations** (`bool`, defaults to `False`) --
  Whether to use symmetric quantization for activations.
- **use_symmetric_weights** (`bool`, defaults to `True`) --
  Whether to use symmetric quantization for weights.
- **per_channel** (`bool`, defaults to `True`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can
  increase overall accuracy while making the quantized model heavier.
- **nodes_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to quantize. If `None`, all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to exclude from quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Type of nodes to perform quantization on. By default, all the quantizable operators will be quantized. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig) fit for ARM64.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>avx2</name><anchor>optimum.onnxruntime.AutoQuantizationConfig.avx2</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L439</source><parameters>[{"name": "is_static", "val": ": bool"}, {"name": "use_symmetric_activations", "val": ": bool = False"}, {"name": "use_symmetric_weights", "val": ": bool = True"}, {"name": "per_channel", "val": ": bool = True"}, {"name": "reduce_range", "val": ": bool = False"}, {"name": "nodes_to_quantize", "val": ": list[str] | None = None"}, {"name": "nodes_to_exclude", "val": ": list[str] | None = None"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **is_static** (`bool`) --
  Boolean flag to indicate whether we target static or dynamic quantization.
- **use_symmetric_activations** (`bool`, defaults to `False`) --
  Whether to use symmetric quantization for activations.
- **use_symmetric_weights** (`bool`, defaults to `True`) --
  Whether to use symmetric quantization for weights.
- **per_channel** (`bool`, defaults to `True`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can
  increase overall accuracy while making the quantized model heavier.
- **reduce_range** (`bool`, defaults to `False`) --
  Whether to use 8-bit integers (`False`) or reduce-range 7-bit integers (`True`).
  As a baseline, it is recommended to first test with the full range (`reduce_range=False`) and then,
  if the accuracy drop is significant, to try the reduced range (`reduce_range=True`).
  Intel CPUs using AVX512 (without VNNI) can suffer from saturation issues when invoking
  the VPMADDUBSW instruction; using 7-bit rather than 8-bit integers avoids this.
- **nodes_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to quantize. If `None`, all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to exclude from quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Type of nodes to perform quantization on. By default, all the quantizable operators will be quantized. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig) fit for CPU with AVX2 instruction set.
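The saturation issue motivating `reduce_range` can be demonstrated without any SIMD code; here is a plain-Python sketch of what one VPMADDUBSW lane does (two u8 × s8 products summed into a *saturating* int16 accumulator):

```python
def vpmaddubsw_pair(u8_a, s8_b, u8_c, s8_d):
    """Emulate one VPMADDUBSW lane: a*b + c*d with saturating int16 arithmetic."""
    total = u8_a * s8_b + u8_c * s8_d
    return max(-32768, min(32767, total))  # saturate to the int16 range

# Full 8-bit range: the true pairwise sum exceeds int16 and saturates, corrupting the result.
true_sum = 255 * 127 + 255 * 127  # 64770, larger than the int16 maximum of 32767
assert vpmaddubsw_pair(255, 127, 255, 127) == 32767
assert true_sum != 32767

# Reduce-range (7-bit) operands can never overflow: worst case is 127*63 + 127*63 = 16002.
assert vpmaddubsw_pair(127, 63, 127, 63) == 127 * 63 * 2
```

With 7-bit operands the worst-case pairwise sum (16002) stays well inside int16, which is why `reduce_range=True` sidesteps the saturation at the cost of one bit of precision.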




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>avx512</name><anchor>optimum.onnxruntime.AutoQuantizationConfig.avx512</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L494</source><parameters>[{"name": "is_static", "val": ": bool"}, {"name": "use_symmetric_activations", "val": ": bool = False"}, {"name": "use_symmetric_weights", "val": ": bool = True"}, {"name": "per_channel", "val": ": bool = True"}, {"name": "reduce_range", "val": ": bool = False"}, {"name": "nodes_to_quantize", "val": ": list[str] | None = None"}, {"name": "nodes_to_exclude", "val": ": list[str] | None = None"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **is_static** (`bool`) --
  Boolean flag to indicate whether we target static or dynamic quantization.
- **use_symmetric_activations** (`bool`, defaults to `False`) --
  Whether to use symmetric quantization for activations.
- **use_symmetric_weights** (`bool`, defaults to `True`) --
  Whether to use symmetric quantization for weights.
- **per_channel** (`bool`, defaults to `True`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can
  increase overall accuracy while making the quantized model heavier.
- **reduce_range** (`bool`, defaults to `False`) --
  Whether to use 8-bit integers (`False`) or reduce-range 7-bit integers (`True`).
  As a baseline, it is recommended to first test with the full range (`reduce_range=False`) and then,
  if the accuracy drop is significant, to try the reduced range (`reduce_range=True`).
  Intel CPUs using AVX512 (without VNNI) can suffer from saturation issues when invoking
  the VPMADDUBSW instruction; using 7-bit rather than 8-bit integers avoids this.
- **nodes_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to quantize. If `None`, all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to exclude from quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Type of nodes to perform quantization on. By default, all the quantizable operators will be quantized. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig) fit for CPU with AVX512 instruction set.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>avx512_vnni</name><anchor>optimum.onnxruntime.AutoQuantizationConfig.avx512_vnni</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L549</source><parameters>[{"name": "is_static", "val": ": bool"}, {"name": "use_symmetric_activations", "val": ": bool = False"}, {"name": "use_symmetric_weights", "val": ": bool = True"}, {"name": "per_channel", "val": ": bool = True"}, {"name": "nodes_to_quantize", "val": ": list[str] | None = None"}, {"name": "nodes_to_exclude", "val": ": list[str] | None = None"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **is_static** (`bool`) --
  Boolean flag to indicate whether we target static or dynamic quantization.
- **use_symmetric_activations** (`bool`, defaults to `False`) --
  Whether to use symmetric quantization for activations.
- **use_symmetric_weights** (`bool`, defaults to `True`) --
  Whether to use symmetric quantization for weights.
- **per_channel** (`bool`, defaults to `True`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can
  increase overall accuracy while making the quantized model heavier.
- **nodes_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to quantize. If `None`, all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to exclude from quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Type of nodes to perform quantization on. By default, all the quantizable operators will be quantized. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig) fit for CPU with AVX512-VNNI instruction set.

When targeting an Intel AVX512-VNNI CPU, the underlying execution engine leverages the VPDPBUSD
instruction to compute `i32 += i8(w) * u8(x)` within a single instruction.

AVX512-VNNI (AVX512 Vector Neural Network Instructions) is an x86 instruction set extension
and part of the AVX-512 ISA.

AVX512-VNNI is designed to accelerate convolutional neural networks for INT8 inference.
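The `i32 += i8(w) * u8(x)` accumulation can be emulated in plain Python; here is a sketch of one VPDPBUSD lane, which multiplies four u8/i8 pairs and accumulates them into a single 32-bit result:

```python
def vpdpbusd_lane(acc_i32, x_u8, w_i8):
    """Emulate one VPDPBUSD lane: acc += sum of four u8 * i8 products, in int32."""
    assert len(x_u8) == len(w_i8) == 4
    acc_i32 += sum(x * w for x, w in zip(x_u8, w_i8))
    # Wrap to int32 like the hardware accumulator would.
    return (acc_i32 + 2**31) % 2**32 - 2**31

# Quantized activations (u8) against quantized weights (i8), one lane of a dot product:
acc = vpdpbusd_lane(0, [10, 20, 30, 40], [1, -2, 3, -4])
```

Fusing four multiplies and the accumulation into one instruction is what makes unsigned-activation / signed-weight INT8 inference (the `activations_dtype` / `weights_dtype` defaults above) fast on this hardware.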




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>ppc64le</name><anchor>optimum.onnxruntime.AutoQuantizationConfig.ppc64le</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L605</source><parameters>[{"name": "is_static", "val": ": bool"}, {"name": "use_symmetric_activations", "val": ": bool = False"}, {"name": "use_symmetric_weights", "val": ": bool = True"}, {"name": "per_channel", "val": ": bool = True"}, {"name": "nodes_to_quantize", "val": ": list[str] | None = None"}, {"name": "nodes_to_exclude", "val": ": list[str] | None = None"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **is_static** (`bool`) --
  Boolean flag to indicate whether we target static or dynamic quantization.
- **use_symmetric_activations** (`bool`, defaults to `False`) --
  Whether to use symmetric quantization for activations.
- **use_symmetric_weights** (`bool`, defaults to `True`) --
  Whether to use symmetric quantization for weights.
- **per_channel** (`bool`, defaults to `True`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can
  increase overall accuracy while making the quantized model heavier.
- **nodes_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to quantize. If `None`, all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to exclude from quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Type of nodes to perform quantization on. By default, all the quantizable operators will be quantized. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig) fit for ppc64le.

When targeting IBM POWER10 ppc64le, the underlying execution engine leverages 8-bit outer-product instructions
(e.g., xvi8ger4pp and signed/unsigned variants) to compute fused byte dot-products and accumulate into 32-bit results, i.e.,
i32 += i8(w) * u8(x) at 4-way granularity per output element within a single instruction using a 512-bit MMA accumulator.

MMA (Matrix-Multiply Assist) is a POWER10 extension of the Power ISA and is part of the Power ISA v3.1 specification,
exposed via VSX-backed 512-bit accumulators and compiler intrinsics.

POWER10 MMA 8-bit outer-product instructions are designed to accelerate INT8 inference on ppc64le by fusing
multiply-accumulate data paths and minimizing instruction count.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tensorrt</name><anchor>optimum.onnxruntime.AutoQuantizationConfig.tensorrt</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L663</source><parameters>[{"name": "per_channel", "val": ": bool = True"}, {"name": "nodes_to_quantize", "val": ": list[str] | None = None"}, {"name": "nodes_to_exclude", "val": ": list[str] | None = None"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **per_channel** (`bool`, defaults to `True`) --
  Whether we should quantize per-channel (also known as "per-row"). Enabling this can
  increase overall accuracy while making the quantized model heavier.
- **nodes_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to quantize. If `None`, all nodes whose operator type is in `operators_to_quantize` will be quantized.
- **nodes_to_exclude** (`Optional[List[str]]`, defaults to `None`) --
  Specific nodes to exclude from quantization. The list of nodes in a model can be found by loading the ONNX model with `onnx.load`, or through visual inspection with [netron](https://github.com/lutzroeder/netron).
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  Type of nodes to perform quantization on. By default, all the quantizable operators will be quantized. Quantizable operators can be found at https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/registry.py.</paramsdesc><paramgroups>0</paramgroups></docstring>
Creates a [QuantizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.QuantizationConfig) fit for TensorRT static quantization, targeting NVIDIA GPUs.




</div></div>

### CalibrationConfig[[optimum.onnxruntime.CalibrationConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.CalibrationConfig</name><anchor>optimum.onnxruntime.CalibrationConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L52</source><parameters>[{"name": "dataset_name", "val": ": str"}, {"name": "dataset_config_name", "val": ": str"}, {"name": "dataset_split", "val": ": str"}, {"name": "dataset_num_samples", "val": ": int"}, {"name": "method", "val": ": CalibrationMethod"}, {"name": "num_bins", "val": ": int | None = None"}, {"name": "num_quantized_bins", "val": ": int | None = None"}, {"name": "percentile", "val": ": float | None = None"}, {"name": "moving_average", "val": ": bool | None = None"}, {"name": "averaging_constant", "val": ": float | None = None"}]</parameters><paramsdesc>- **dataset_name** (`str`) --
  The name of the calibration dataset.
- **dataset_config_name** (`str`) --
  The name of the calibration dataset configuration.
- **dataset_split** (`str`) --
  Which split of the dataset is used to perform the calibration step.
- **dataset_num_samples** (`int`) --
  The number of samples composing the calibration dataset.
- **method** (`CalibrationMethod`) --
  The method chosen to calculate the activations quantization parameters using the calibration dataset.
- **num_bins** (`Optional[int]`, defaults to `None`) --
  The number of bins to use when creating the histogram when performing the calibration step using the
  Percentile or Entropy method.
- **num_quantized_bins** (`Optional[int]`, defaults to `None`) --
  The number of quantized bins to use when performing the calibration step using the Entropy method.
- **percentile** (`Optional[float]`, defaults to `None`) --
  The percentile to use when computing the activations quantization ranges when performing the calibration
  step using the Percentile method.
- **moving_average** (`Optional[bool]`, defaults to `None`) --
  Whether to compute the moving average of the minimum and maximum values when performing the calibration step
  using the MinMax method.
- **averaging_constant** (`Optional[float]`, defaults to `None`) --
  The constant smoothing factor to use when computing the moving average of the minimum and maximum values.
  Effective only when the MinMax calibration method is selected and `moving_average` is set to True.</paramsdesc><paramgroups>0</paramgroups></docstring>
CalibrationConfig is the configuration class handling all the ONNX Runtime parameters related to the calibration
step of static quantization.




</div>
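For intuition about `moving_average` and `averaging_constant`: with the MinMax method, the calibrator tracks exponentially smoothed running extrema of the observed activations, using `averaging_constant` as the smoothing factor. A standalone sketch of that update rule (illustrative only; the real computation lives inside ONNX Runtime's calibrators):

```python
def minmax_moving_average(batches, averaging_constant=0.01):
    """Track smoothed (min, max) activation ranges across calibration batches.

    Sketch of the rule implied by `moving_average=True`: the first batch
    initializes the range, later batches nudge it by `averaging_constant`.
    """
    running_min = running_max = None
    for batch in batches:
        lo, hi = min(batch), max(batch)
        if running_min is None:
            running_min, running_max = lo, hi
        else:
            running_min += averaging_constant * (lo - running_min)
            running_max += averaging_constant * (hi - running_max)
    return running_min, running_max

lo, hi = minmax_moving_average([[0.0, 1.0], [-1.0, 2.0]], averaging_constant=0.5)
print(lo, hi)  # -0.5 1.5
```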

## ORTConfig[[optimum.onnxruntime.ORTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTConfig</name><anchor>optimum.onnxruntime.ORTConfig</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/configuration.py#L945</source><parameters>[{"name": "opset", "val": ": int | None = None"}, {"name": "use_external_data_format", "val": ": bool = False"}, {"name": "one_external_file", "val": ": bool = True"}, {"name": "optimization", "val": ": OptimizationConfig | None = None"}, {"name": "quantization", "val": ": QuantizationConfig | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **opset** (`Optional[int]`, defaults to `None`) --
  ONNX opset version to export the model with.
- **use_external_data_format** (`bool`, defaults to `False`) --
  Allow exporting models larger than 2 GB.
- **one_external_file** (`bool`, defaults to `True`) --
  When `use_external_data_format=True`, whether to save all tensors to a single external file.
  If `False`, each tensor is saved to a separate file named after the tensor.
  (Cannot be set to `False` for quantization.)
- **optimization** (`Optional[OptimizationConfig]`, defaults to `None`) --
  Specify a configuration to optimize the ONNX Runtime model.
- **quantization** (`Optional[QuantizationConfig]`, defaults to `None`) --
  Specify a configuration to quantize the ONNX Runtime model.</paramsdesc><paramgroups>0</paramgroups></docstring>
ORTConfig is the configuration class handling all the ONNX Runtime parameters related to ONNX IR model export,
optimization, and quantization.




</div>
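Putting the pieces together, an `ORTConfig` can bundle export, optimization, and quantization settings in one object. A sketch, assuming the `OptimizationConfig` and `AutoQuantizationConfig` helpers exported by `optimum.onnxruntime` (check your installed version for exact names and arguments):

```python
from optimum.onnxruntime import ORTConfig
from optimum.onnxruntime.configuration import AutoQuantizationConfig, OptimizationConfig

# Dynamic quantization targeting AVX-512 CPUs, plus basic graph optimizations
# (helper names assumed from optimum.onnxruntime.configuration).
qconfig = AutoQuantizationConfig.avx512(is_static=False, per_channel=False)
oconfig = OptimizationConfig(optimization_level=1)

ort_config = ORTConfig(opset=13, optimization=oconfig, quantization=qconfig)
ort_config.save_pretrained("ort_config_dir")  # serializes the config to disk
```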

### ONNX Runtime Models
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/modeling.md

# ONNX Runtime Models

## Generic model classes

The following ORT classes are available for instantiating a base model class without a specific head.

### ORTModel[[optimum.onnxruntime.ORTModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModel</name><anchor>optimum.onnxruntime.ORTModel</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L156</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters><paramsdesc>- **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.57.0/en/main_classes/configuration#transformers.PretrainedConfig)) --
  The configuration of the model.
- **session** (`~onnxruntime.InferenceSession`) --
  The ONNX Runtime InferenceSession that is running the model.
- **use_io_binding** (`bool`, *optional*, defaults to `True`) --
  Whether to use I/O binding with ONNX Runtime. With the CUDAExecutionProvider, this can significantly speed up inference depending on the task.
- **model_save_dir** (`Path`) --
  The directory where the model exported to ONNX is saved. By default, if the loaded model is local, the directory of the original model is used; otherwise, the cache directory is used.</paramsdesc><paramgroups>0</paramgroups></docstring>
Base class for implementing models using ONNX Runtime.

The ORTModel implements generic methods for interacting with the Hugging Face Hub as well as exporting vanilla
transformers models to ONNX using the `optimum.exporters.onnx` toolchain.

Class attributes:
- model_type (`str`, *optional*, defaults to `"onnx_model"`) -- The name of the model type to use when
registering the ORTModel classes.
- auto_model_class (`Type`, *optional*, defaults to `AutoModel`) -- The "AutoModel" class represented by the
current ORTModel class.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>can_generate</name><anchor>optimum.onnxruntime.ORTModel.can_generate</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L594</source><parameters>[]</parameters></docstring>
Returns whether this model can generate sequences with `.generate()`.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>optimum.onnxruntime.ORTModel.from_pretrained</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L468</source><parameters>[{"name": "model_id", "val": ": str | Path"}, {"name": "config", "val": ": PretrainedConfig | None = None"}, {"name": "export", "val": ": bool = False"}, {"name": "subfolder", "val": ": str = ''"}, {"name": "revision", "val": ": str = 'main'"}, {"name": "force_download", "val": ": bool = False"}, {"name": "local_files_only", "val": ": bool = False"}, {"name": "trust_remote_code", "val": ": bool = False"}, {"name": "cache_dir", "val": ": str = '/home/runner/.cache/huggingface/hub'"}, {"name": "token", "val": ": bool | str | None = None"}, {"name": "provider", "val": ": str = 'CPUExecutionProvider'"}, {"name": "providers", "val": ": Sequence[str] | None = None"}, {"name": "provider_options", "val": ": Sequence[dict[str, Any]] | dict[str, Any] | None = None"}, {"name": "session_options", "val": ": SessionOptions | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_id** (`Union[str, Path]`) --
  Can be either:
  - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
    Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
    user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a *directory* containing a model saved using `~OptimizedModel.save_pretrained`,
    e.g., `./my_model_directory/`.
- **export** (`bool`, defaults to `False`) --
  Defines whether the provided `model_id` needs to be exported to the targeted format.
- **force_download** (`bool`, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).
- **cache_dir** (`Optional[str]`, defaults to `None`) --
  Path to a directory in which a downloaded pretrained model configuration should be cached if the
  standard cache should not be used.
- **subfolder** (`str`, defaults to `""`) --
  In case the relevant files are located inside a subfolder of the model repo either locally or on huggingface.co, you can
  specify the folder name here.
- **config** (`Optional[transformers.PretrainedConfig]`, defaults to `None`) --
  The model configuration.
- **local_files_only** (`Optional[bool]`, defaults to `False`) --
  Whether or not to only look at local files (i.e., do not try to download the model).
- **trust_remote_code** (`bool`, defaults to `False`) --
  Whether or not to allow for custom code defined on the Hub in their own modeling. This option should only be set
  to `True` for repositories you trust and in which you have read the code, as it will execute code present on
  the Hub on your local machine.
- **revision** (`str`, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
  git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
  identifier allowed by git.</paramsdesc><paramgroups>0</paramgroups><rettype>`ORTModel`</rettype><retdesc>The loaded ORTModel model.</retdesc></docstring>

Instantiate a pretrained model from a pre-trained model configuration.



provider (`str`, defaults to `"CPUExecutionProvider"`):
  ONNX Runtime provider to use for loading the model.
  See https://onnxruntime.ai/docs/execution-providers/ for possible providers.
providers (`Optional[Sequence[str]]`, defaults to `None`):
  List of execution providers to use for loading the model.
  This argument takes precedence over the `provider` argument.
provider_options (`Optional[Dict[str, Any]]`, defaults to `None`):
  Provider option dictionaries corresponding to the providers used. See the available options
  for each provider: https://onnxruntime.ai/docs/api/c/group___global.html .
session_options (`Optional[onnxruntime.SessionOptions]`, defaults to `None`):
  ONNX Runtime session options to use for loading the model.
use_io_binding (`Optional[bool]`, defaults to `None`):
  Whether to use I/O binding during inference to avoid memory copies between the host and device, or between
  numpy/torch tensors and ONNX Runtime `OrtValue`s. Defaults to `True` if the execution provider is
  CUDAExecutionProvider; for `ORTModelForCausalLM`, it also defaults to `True` on CPUExecutionProvider;
  in all other cases it defaults to `False`.
kwargs (`Dict[str, Any]`):
  Will be passed to the underlying model loading methods.

> Parameters for decoder models (ORTModelForCausalLM, ORTModelForSeq2SeqLM, ORTModelForSpeechSeq2Seq, ORTModelForVision2Seq)

use_cache (`Optional[bool]`, defaults to `True`):
  Whether or not the past key/values cache should be used.

use_merged (`Optional[bool]`, defaults to `None`):
  Whether or not to use a single ONNX file that handles decoding both with and without past key values reuse.
  This option defaults to `True` if loading from a local repository and a merged decoder is found. When exporting
  with `export=True`, it defaults to `False`. This option should be set to `True` to minimize memory usage.






</div></div>
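The `use_io_binding` defaulting described above can be summarized as a small resolution rule. A sketch with a hypothetical helper (`resolve_use_io_binding` is illustrative, not part of the optimum API):

```python
def resolve_use_io_binding(use_io_binding, provider, is_causal_lm=False):
    """Mirror the documented defaulting rule for `use_io_binding`.

    Illustrative only: an explicit value always wins; otherwise I/O binding
    is enabled on CUDAExecutionProvider, and additionally on
    CPUExecutionProvider for causal-LM models.
    """
    if use_io_binding is not None:
        return use_io_binding
    if provider == "CUDAExecutionProvider":
        return True
    if is_causal_lm and provider == "CPUExecutionProvider":
        return True
    return False

print(resolve_use_io_binding(None, "CUDAExecutionProvider"))       # True
print(resolve_use_io_binding(None, "CPUExecutionProvider"))        # False
print(resolve_use_io_binding(None, "CPUExecutionProvider", True))  # True
print(resolve_use_io_binding(False, "CUDAExecutionProvider"))      # False
```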

## Natural Language Processing

The following ORT classes are available for natural language processing tasks.

### ORTModelForCausalLM[[optimum.onnxruntime.ORTModelForCausalLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForCausalLM</name><anchor>optimum.onnxruntime.ORTModelForCausalLM</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_decoder.py#L123</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "generation_config", "val": ": GenerationConfig | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX model with a causal language modeling head for ONNX Runtime inference. This class officially supports bloom, codegen, falcon, gpt2, gpt-bigcode, gpt_neo, gpt_neox, gptj, llama.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForCausalLM.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_decoder.py#L249</source><parameters>[{"name": "input_ids", "val": ": torch.LongTensor"}, {"name": "attention_mask", "val": ": torch.LongTensor | None = None"}, {"name": "past_key_values", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "position_ids", "val": ": torch.LongTensor | None = None"}, {"name": "use_cache", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.LongTensor`) --
  Indices of decoder input sequence tokens in the vocabulary of shape `(batch_size, sequence_length)`.
- **attention_mask** (`torch.LongTensor`) --
  Mask to avoid performing attention on padding token indices, of shape
  `(batch_size, sequence_length)`. Mask values selected in `[0, 1]`.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, defaults to `None`) --
  Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding.
  The tuple is of length `config.n_layers` with each tuple having 2 tensors of shape
  `(batch_size, num_heads, sequence_length, embed_size_per_head)`.</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForCausalLM` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForCausalLM.forward.example">

Example of text generation:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2")
>>> model = ORTModelForCausalLM.from_pretrained("optimum/gpt2")

>>> inputs = tokenizer("My name is Arthur and I live in", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20)
>>> tokenizer.batch_decode(gen_tokens)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForCausalLM.forward.example-2">

Example using `transformers.pipelines`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2")
>>> model = ORTModelForCausalLM.from_pretrained("optimum/gpt2")
>>> onnx_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

>>> text = "My name is Arthur and I live in"
>>> gen = onnx_gen(text)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForMaskedLM[[optimum.onnxruntime.ORTModelForMaskedLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForMaskedLM</name><anchor>optimum.onnxruntime.ORTModelForMaskedLM</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L774</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a MaskedLMOutput for masked language modeling tasks. This class officially supports albert, bert, camembert, convbert, data2vec-text, deberta, deberta_v2, distilbert, electra, flaubert, ibert, mobilebert, roberta, roformer, squeezebert, xlm, xlm_roberta.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForMaskedLM.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L780</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForMaskedLM` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForMaskedLM.forward.example">

Example of feature extraction:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased-for-fill-mask")
>>> model = ORTModelForMaskedLM.from_pretrained("optimum/bert-base-uncased-for-fill-mask")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 8, 28996]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForMaskedLM.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased-for-fill-mask")
>>> model = ORTModelForMaskedLM.from_pretrained("optimum/bert-base-uncased-for-fill-mask")
>>> fill_masker = pipeline("fill-mask", model=model, tokenizer=tokenizer)

>>> text = "The capital of France is [MASK]."
>>> pred = fill_masker(text)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForSeq2SeqLM[[optimum.onnxruntime.ORTModelForSeq2SeqLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForSeq2SeqLM</name><anchor>optimum.onnxruntime.ORTModelForSeq2SeqLM</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1218</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "encoder_session", "val": ": InferenceSession = None"}, {"name": "decoder_session", "val": ": InferenceSession = None"}, {"name": "decoder_with_past_session", "val": ": InferenceSession | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "generation_config", "val": ": GenerationConfig | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
Sequence-to-sequence model with a language modeling head for ONNX Runtime inference. This class officially supports bart, blenderbot, blenderbot-small, longt5, m2m_100, marian, mbart, mt5, pegasus, t5.
This model inherits from `~onnxruntime.modeling.ORTModelForConditionalGeneration`, check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the `onnxruntime.modeling.ORTModelForConditionalGeneration.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForSeq2SeqLM.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1226</source><parameters>[{"name": "input_ids", "val": ": torch.LongTensor = None"}, {"name": "attention_mask", "val": ": torch.FloatTensor | None = None"}, {"name": "decoder_input_ids", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_attention_mask", "val": ": torch.LongTensor | None = None"}, {"name": "encoder_outputs", "val": ": BaseModelOutput | list[torch.FloatTensor] | None = None"}, {"name": "past_key_values", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "token_type_ids", "val": ": torch.LongTensor | None = None"}, {"name": "cache_position", "val": ": torch.Tensor | None = None"}, {"name": "use_cache", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`torch.LongTensor`) --
  Indices of input sequence tokens in the vocabulary of shape `(batch_size, encoder_sequence_length)`.
- **attention_mask** (`torch.LongTensor`) --
  Mask to avoid performing attention on padding token indices, of shape
  `(batch_size, encoder_sequence_length)`. Mask values selected in `[0, 1]`.
- **decoder_input_ids** (`torch.LongTensor`) --
  Indices of decoder input sequence tokens in the vocabulary of shape `(batch_size, decoder_sequence_length)`.
- **encoder_outputs** (`torch.FloatTensor`) --
  The encoder `last_hidden_state` of shape `(batch_size, encoder_sequence_length, hidden_size)`.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, defaults to `None`) --
  Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding.
  The tuple is of length `config.n_layers` with each tuple having 2 tensors of shape
  `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)` and 2 additional tensors of shape
  `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForSeq2SeqLM` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSeq2SeqLM.forward.example">

Example of text generation:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
>>> model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")

>>> inputs = tokenizer("My name is Eustache and I like to", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs)
>>> outputs = tokenizer.batch_decode(gen_tokens)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSeq2SeqLM.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
>>> model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")
>>> onnx_translation = pipeline("translation_en_to_de", model=model, tokenizer=tokenizer)

>>> text = "My name is Eustache."
>>> pred = onnx_translation(text)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForSequenceClassification[[optimum.onnxruntime.ORTModelForSequenceClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForSequenceClassification</name><anchor>optimum.onnxruntime.ORTModelForSequenceClassification</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L988</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks. This class officially supports albert, bart, bert, camembert, convbert, data2vec-text, deberta, deberta_v2, distilbert, electra, flaubert, ibert, mbart, mobilebert, nystromformer, roberta, roformer, squeezebert, xlm, xlm_roberta.

This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForSequenceClassification.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L996</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForSequenceClassification` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSequenceClassification.forward.example">

Example of single-label classification:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 2]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSequenceClassification.forward.example-2">

Example using `transformers.pipelines`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

>>> text = "Hello, my dog is cute"
>>> pred = onnx_classifier(text)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSequenceClassification.forward.example-3">

Example using zero-shot-classification `transformers.pipelines`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-mnli")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-mnli")
>>> onnx_z0 = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)

>>> sequence_to_classify = "Who are you voting for in 2020?"
>>> candidate_labels = ["Europe", "public health", "politics", "elections"]
>>> pred = onnx_z0(sequence_to_classify, candidate_labels, multi_label=True)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForTokenClassification[[optimum.onnxruntime.ORTModelForTokenClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForTokenClassification</name><anchor>optimum.onnxruntime.ORTModelForTokenClassification</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1089</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks. This class officially supports albert, bert, bloom, camembert, convbert, data2vec-text, deberta, deberta_v2, distilbert, electra, flaubert, gpt2, ibert, mobilebert, roberta, roformer, squeezebert, xlm, xlm_roberta.


This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForTokenClassification.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1098</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 for tokens that are **sentence A**,
  - 1 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForTokenClassification` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForTokenClassification.forward.example">

Example of token classification:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
>>> model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 12, 9]
```

</ExampleCodeBlock>
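The `(1, 12, 9)` logits above hold one score per token per NER label; an argmax over the last axis yields one label id per token. A minimal sketch with random stand-in logits (not produced by the model):

```python
import numpy as np

# Random stand-in for the (1, 12, 9) logits above: 1 sequence, 12 tokens, 9 NER labels
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 12, 9))

predicted_ids = logits.argmax(axis=-1)  # one label id per token, shape (1, 12)
# map each id through model.config.id2label to get label strings
```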

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForTokenClassification.forward.example-2">

Example using `transformers.pipelines`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
>>> model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")
>>> onnx_ner = pipeline("token-classification", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> pred = onnx_ner(text)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForMultipleChoice[[optimum.onnxruntime.ORTModelForMultipleChoice]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForMultipleChoice</name><anchor>optimum.onnxruntime.ORTModelForMultipleChoice</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1185</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks. This class officially supports albert, bert, camembert, convbert, data2vec-text, deberta_v2, distilbert, electra, flaubert, ibert, mobilebert, nystromformer, roberta, roformer, squeezebert, xlm, xlm_roberta.

This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForMultipleChoice.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1193</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 for tokens that are **sentence A**,
  - 1 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForMultipleChoice` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForMultipleChoice.forward.example">

Example of multiple choice:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/bert-base-uncased_SWAG")
>>> model = ORTModelForMultipleChoice.from_pretrained("ehdwns1516/bert-base-uncased_SWAG", export=True)

>>> num_choices = 4
>>> first_sentence = ["Members of the procession walk down the street holding small horn brass instruments."] * num_choices
>>> second_sentence = [
...     "A drum line passes by walking down the street playing their instruments.",
...     "A drum line has heard approaching them.",
...     "A drum line arrives and they're outside dancing and asleep.",
...     "A drum line turns the lead singer watches the performance."
... ]
>>> inputs = tokenizer(first_sentence, second_sentence, truncation=True, padding=True)

# Unflatten the input values, expanding them to the shape [batch_size, num_choices, seq_length]
>>> for k, v in inputs.items():
...     inputs[k] = [v[i: i + num_choices] for i in range(0, len(v), num_choices)]
>>> inputs = dict(inputs.convert_to_tensors(tensor_type="pt"))
>>> outputs = model(**inputs)
>>> logits = outputs.logits
```

</ExampleCodeBlock>
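The unflattening step in the example groups the tokenizer's flat output (one entry per example-choice pair) into `[batch_size, num_choices, ...]`. The slicing logic on its own, with placeholder strings standing in for the encoded sequences:

```python
num_choices = 4

# Flat list as produced by the tokenizer: batch_size * num_choices entries
# (here 8 = 2 examples x 4 choices; "e0c1" stands for example 0, choice 1)
flat = ["e0c0", "e0c1", "e0c2", "e0c3", "e1c0", "e1c1", "e1c2", "e1c3"]

grouped = [flat[i : i + num_choices] for i in range(0, len(flat), num_choices)]
# grouped == [["e0c0", "e0c1", "e0c2", "e0c3"], ["e1c0", "e1c1", "e1c2", "e1c3"]]
```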


</div></div>

### ORTModelForQuestionAnswering[[optimum.onnxruntime.ORTModelForQuestionAnswering]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForQuestionAnswering</name><anchor>optimum.onnxruntime.ORTModelForQuestionAnswering</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L873</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start and end logits). This class officially supports albert, bart, bert, camembert, convbert, data2vec-text, deberta, deberta_v2, distilbert, electra, flaubert, gptj, ibert, mbart, mobilebert, nystromformer, roberta, roformer, squeezebert, xlm, xlm_roberta.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForQuestionAnswering.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L879</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 0 for tokens that are **sentence A**,
  - 1 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForQuestionAnswering` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForQuestionAnswering.forward.example">

Example of question answering:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="np")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
```

</ExampleCodeBlock>
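The start and end logits above can be decoded into an answer span by taking the argmax of each. A minimal sketch of that post-processing with hypothetical logits (not produced by the model above; real pipelines also mask invalid spans and non-context tokens):

```python
import numpy as np

# Hypothetical start/end logits for a 6-token sequence
start_logits = np.array([0.1, 2.5, 0.3, 0.2, 0.1, 0.0])
end_logits = np.array([0.0, 0.1, 0.2, 3.0, 0.1, 0.0])

start = int(start_logits.argmax())  # most likely start token
end = int(end_logits.argmax())      # most likely end token
answer_token_indices = list(range(start, end + 1))
# answer_token_indices == [1, 2, 3]; decode these positions back to text with the tokenizer
```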
<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForQuestionAnswering.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")
>>> onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> pred = onnx_qa(question, text)
```

</ExampleCodeBlock>


</div></div>

## Computer vision

The following ORT classes are available for computer vision tasks.

### ORTModelForImageClassification[[optimum.onnxruntime.ORTModelForImageClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForImageClassification</name><anchor>optimum.onnxruntime.ORTModelForImageClassification</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1290</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model for image-classification tasks. This class officially supports beit, convnext, convnextv2, data2vec-vision, deit, dinov2, levit, mobilenet_v1, mobilenet_v2, mobilevit, poolformer, resnet, segformer, swin, swinv2, vit.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForImageClassification.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1295</source><parameters>[{"name": "pixel_values", "val": ": torch.Tensor | np.ndarray"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForImageClassification` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForImageClassification.forward.example">

Example of image classification:

```python
>>> import requests
>>> from PIL import Image
>>> from optimum.onnxruntime import ORTModelForImageClassification
>>> from transformers import AutoFeatureExtractor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
>>> model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")

>>> inputs = preprocessor(images=image, return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForImageClassification.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForImageClassification

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
>>> model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")
>>> onnx_image_classifier = pipeline("image-classification", model=model, feature_extractor=preprocessor)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> pred = onnx_image_classifier(url)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForZeroShotImageClassification[[optimum.onnxruntime.ORTModelForZeroShotImageClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForZeroShotImageClassification</name><anchor>optimum.onnxruntime.ORTModelForZeroShotImageClassification</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1346</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model for zero-shot-image-classification tasks. This class officially supports clip, metaclip-2.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForZeroShotImageClassification.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1351</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray"}, {"name": "pixel_values", "val": ": torch.Tensor | np.ndarray"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **pixel_values** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForZeroShotImageClassification` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>

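No example is generated for this class. A minimal sketch, assuming a CLIP checkpoint such as `openai/clip-vit-base-patch32` can be exported on the fly with `export=True`, and that the output follows the `transformers` CLIP convention of exposing `logits_per_image` (both are assumptions, not confirmed by this reference):

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor
>>> from optimum.onnxruntime import ORTModelForZeroShotImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> model = ORTModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32", export=True)

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # image-text similarity scores, one per candidate label
```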







</div></div>

### ORTModelForSemanticSegmentation[[optimum.onnxruntime.ORTModelForSemanticSegmentation]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForSemanticSegmentation</name><anchor>optimum.onnxruntime.ORTModelForSemanticSegmentation</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1446</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
ONNX Model for semantic-segmentation, with an all-MLP decode head on top e.g. for ADE20k, CityScapes. This class officially supports maskformer, segformer.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForSemanticSegmentation.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1460</source><parameters>[{"name": "pixel_values", "val": ": torch.Tensor | np.ndarray"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, num_channels, height, width)`, defaults to `None`) --
  Pixel values corresponding to the images in the current batch.
  Pixel values can be obtained from encoded images using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForSemanticSegmentation` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSemanticSegmentation.forward.example">

Example of semantic segmentation:

```python
>>> import requests
>>> from PIL import Image
>>> from optimum.onnxruntime import ORTModelForSemanticSegmentation
>>> from transformers import AutoFeatureExtractor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
>>> model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")

>>> inputs = preprocessor(images=image, return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
```

</ExampleCodeBlock>
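The segmentation logits have one channel per class; an argmax over the channel axis gives a class id per pixel. A minimal sketch with random stand-in logits (150 classes is an assumption matching ADE20k; real post-processing first upsamples the logits to the original image size):

```python
import numpy as np

# Random stand-in for segmentation logits: (batch, num_labels, height, width)
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 150, 128, 128))

segmentation_map = logits.argmax(axis=1)  # one class id per pixel, shape (1, 128, 128)
```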

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSemanticSegmentation.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForSemanticSegmentation

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
>>> model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
>>> onnx_image_segmenter = pipeline("image-segmentation", model=model, feature_extractor=preprocessor)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> pred = onnx_image_segmenter(url)
```

</ExampleCodeBlock>


</div></div>

## Audio

The following ORT classes are available for audio tasks.

### ORTModelForAudioClassification[[optimum.onnxruntime.ORTModelForAudioClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForAudioClassification</name><anchor>optimum.onnxruntime.ORTModelForAudioClassification</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1565</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model for audio-classification, with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting. This class officially supports audio_spectrogram_transformer, data2vec-audio, hubert, sew, sew-d, unispeech, unispeech_sat, wavlm, wav2vec2, wav2vec2-conformer.

This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForAudioClassification.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1573</source><parameters>[{"name": "input_values", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "input_features", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForAudioClassification` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForAudioClassification.forward.example">

Example of audio classification:

```python
>>> from transformers import AutoFeatureExtractor
>>> from optimum.onnxruntime import ORTModelForAudioClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/hubert-base-superb-ks")
>>> model = ORTModelForAudioClassification.from_pretrained("optimum/hubert-base-superb-ks")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
```

</ExampleCodeBlock>
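To rank all labels instead of taking a single argmax, the logits can be turned into probabilities with a softmax. A minimal, framework-free sketch (the label names below are placeholders, not the SUPERB label set):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(logits, id2label, k=3):
    """Return the k most probable (label, probability) pairs."""
    probs = softmax(logits)
    ranked = sorted(zip(probs, range(len(probs))), reverse=True)
    return [(id2label[i], p) for p, i in ranked[:k]]

# Dummy logits and labels for illustration:
id2label = {0: "yes", 1: "no", 2: "up"}
print(top_k([2.0, 0.5, -1.0], id2label, k=2))
```

With a real model, pass `logits[0].tolist()` and `model.config.id2label` instead of the dummy values.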
<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForAudioClassification.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForAudioClassification
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/hubert-base-superb-ks")
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")

>>> model = ORTModelForAudioClassification.from_pretrained("optimum/hubert-base-superb-ks")
>>> onnx_ac = pipeline("audio-classification", model=model, feature_extractor=feature_extractor)

>>> pred = onnx_ac(dataset[0]["audio"]["array"])
```

</ExampleCodeBlock>


</div></div>

### ORTModelForAudioFrameClassification[[optimum.onnxruntime.ORTModelForAudioFrameClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForAudioFrameClassification</name><anchor>optimum.onnxruntime.ORTModelForAudioFrameClassification</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1854</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a frame classification head on top for tasks like Speaker Diarization. This class officially supports data2vec-audio, unispeech_sat, wavlm, wav2vec2, wav2vec2-conformer.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel); check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForAudioFrameClassification.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1860</source><parameters>[{"name": "input_values", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForAudioFrameClassification` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForAudioFrameClassification.forward.example">

Example of audio frame classification:

```python
>>> from transformers import AutoFeatureExtractor
>>> from optimum.onnxruntime import ORTModelForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/wav2vec2-base-superb-sd")
>>> model = ORTModelForAudioFrameClassification.from_pretrained("optimum/wav2vec2-base-superb-sd")

>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> probabilities = torch.sigmoid(logits[0])
>>> labels = (probabilities > 0.5).long()
>>> labels[0].tolist()
```

</ExampleCodeBlock>
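The per-frame labels can be turned into time-stamped activity segments. A generic post-processing sketch — `frame_stride_s` is an assumption here (wav2vec2-style encoders emit roughly one frame every 20 ms), not a value read from the model config:

```python
def frames_to_segments(frame_labels, frame_stride_s=0.02):
    """Group consecutive active frames (label == 1) into (start_s, end_s) segments."""
    segments = []
    start = None
    for i, label in enumerate(frame_labels):
        if label == 1 and start is None:
            start = i  # a segment opens
        elif label == 0 and start is not None:
            segments.append((start * frame_stride_s, i * frame_stride_s))
            start = None
    if start is not None:  # close a segment still open at the end
        segments.append((start * frame_stride_s, len(frame_labels) * frame_stride_s))
    return segments

# 0 0 1 1 1 0 1  ->  two segments
print(frames_to_segments([0, 0, 1, 1, 1, 0, 1]))
```

With the example above, feed it one speaker's column, e.g. `frames_to_segments(labels[:, 0].tolist())`.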


</div></div>

### ORTModelForCTC[[optimum.onnxruntime.ORTModelForCTC]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForCTC</name><anchor>optimum.onnxruntime.ORTModelForCTC</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1667</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with a language modeling head on top for Connectionist Temporal Classification (CTC). This class officially supports data2vec-audio, hubert, sew, sew-d, unispeech, unispeech_sat, wavlm, wav2vec2, wav2vec2-conformer.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel); check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForCTC.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1673</source><parameters>[{"name": "input_values", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "input_features", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForCTC` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForCTC.forward.example">

Example of CTC:

```python
>>> from transformers import AutoProcessor
>>> from optimum.onnxruntime import ORTModelForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("optimum/hubert-large-ls960-ft")
>>> model = ORTModelForCTC.from_pretrained("optimum/hubert-large-ls960-ft")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> transcription = processor.batch_decode(predicted_ids)
```

</ExampleCodeBlock>
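Internally, `processor.batch_decode` collapses the frame-level argmax ids following the CTC rule: merge runs of repeated ids, then drop blank tokens. A framework-free sketch of that rule (`blank_id=0` is an assumption matching the usual transformers CTC convention; the ids below are arbitrary illustration values, not a real vocabulary):

```python
def ctc_collapse(ids, blank_id=0):
    """Apply greedy CTC decoding to per-frame token ids:
    merge consecutive duplicates, then remove blanks."""
    collapsed = []
    prev = None
    for i in ids:
        if i != prev:  # merge runs of the same id
            collapsed.append(i)
        prev = i
    return [i for i in collapsed if i != blank_id]  # drop blanks

# frames: h h _ e _ l l _ l o  ->  h e l l o
print(ctc_collapse([8, 8, 0, 5, 0, 12, 12, 0, 12, 15]))
```

Note how the blank between the two `12`s keeps the double letter, while the repeated `8`s merge into one.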


</div></div>

### ORTModelForSpeechSeq2Seq[[optimum.onnxruntime.ORTModelForSpeechSeq2Seq]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForSpeechSeq2Seq</name><anchor>optimum.onnxruntime.ORTModelForSpeechSeq2Seq</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1283</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Speech sequence-to-sequence model with a language modeling head for ONNX Runtime inference. This class officially supports whisper, speech_to_text.
This model inherits from `~onnxruntime.modeling.ORTModelForConditionalGeneration`; check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the `onnxruntime.modeling.ORTModelForConditionalGeneration.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForSpeechSeq2Seq.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1308</source><parameters>[{"name": "input_features", "val": ": torch.FloatTensor | None = None"}, {"name": "attention_mask", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_input_ids", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_attention_mask", "val": ": torch.LongTensor | None = None"}, {"name": "encoder_outputs", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "past_key_values", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "cache_position", "val": ": torch.Tensor | None = None"}, {"name": "use_cache", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_features** (`torch.FloatTensor`) --
  Mel features extracted from the raw speech waveform, of shape
  `(batch_size, feature_size, encoder_sequence_length)`.
- **decoder_input_ids** (`torch.LongTensor`) --
  Indices of decoder input sequence tokens in the vocabulary of shape `(batch_size, decoder_sequence_length)`.
- **encoder_outputs** (`torch.FloatTensor`) --
  The encoder `last_hidden_state` of shape `(batch_size, encoder_sequence_length, hidden_size)`.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, defaults to `None`) --
  Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding.
  The tuple is of length `config.n_layers` with each tuple having 2 tensors of shape
  `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)` and 2 additional tensors of shape
  `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForSpeechSeq2Seq` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSpeechSeq2Seq.forward.example">

Example of text generation:

```python
>>> from transformers import AutoProcessor
>>> from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("optimum/whisper-tiny.en")
>>> model = ORTModelForSpeechSeq2Seq.from_pretrained("optimum/whisper-tiny.en")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor.feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")

>>> gen_tokens = model.generate(inputs=inputs.input_features)
>>> outputs = processor.tokenizer.batch_decode(gen_tokens)
```

</ExampleCodeBlock>
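Whisper's encoder consumes fixed 30-second windows, so audio longer than that must be split before feature extraction. A minimal chunking sketch with no overlap (production pipelines typically use strided, overlapping chunks, which `transformers.pipeline` handles via its `chunk_length_s` argument):

```python
def chunk_audio(samples, sampling_rate=16000, chunk_length_s=30.0):
    """Split a 1-D sequence of samples into consecutive fixed-length chunks."""
    chunk_size = int(sampling_rate * chunk_length_s)
    return [samples[i : i + chunk_size] for i in range(0, len(samples), chunk_size)]

# 75 s of silence at 16 kHz -> chunks of 30 s, 30 s, 15 s
chunks = chunk_audio([0.0] * (75 * 16000))
print([len(c) / 16000 for c in chunks])
```

Each chunk can then be passed through `processor.feature_extractor` and `model.generate` independently, and the transcriptions concatenated.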

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForSpeechSeq2Seq.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoProcessor, pipeline
>>> from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("optimum/whisper-tiny.en")
>>> model = ORTModelForSpeechSeq2Seq.from_pretrained("optimum/whisper-tiny.en")
>>> speech_recognition = pipeline("automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor)

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> pred = speech_recognition(ds[0]["audio"]["array"])
```

</ExampleCodeBlock>


</div></div>

### ORTModelForAudioXVector[[optimum.onnxruntime.ORTModelForAudioXVector]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForAudioXVector</name><anchor>optimum.onnxruntime.ORTModelForAudioXVector</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1768</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model with an XVector feature extraction head on top for tasks like Speaker Verification. This class officially supports data2vec-audio, unispeech_sat, wavlm, wav2vec2, wav2vec2-conformer.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel); check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForAudioXVector.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L1774</source><parameters>[{"name": "input_values", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`) --
  Float values of the input raw speech waveform.
  Input values can be obtained from an audio file loaded into an array using [`AutoFeatureExtractor`](https://huggingface.co/docs/transformers/autoclass_tutorial#autofeatureextractor).</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForAudioXVector` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForAudioXVector.forward.example">

Example of Audio XVector:

```python
>>> from transformers import AutoFeatureExtractor
>>> from optimum.onnxruntime import ORTModelForAudioXVector
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/wav2vec2-base-superb-sv")
>>> model = ORTModelForAudioXVector.from_pretrained("optimum/wav2vec2-base-superb-sv")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings

>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7
>>> if similarity < threshold:
...     print("Speakers are not the same!")
>>> round(similarity.item(), 2)
```

</ExampleCodeBlock>
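The verification decision above reduces to a cosine similarity between two embedding vectors. The same computation without torch, to make the decision rule explicit (the 0.7 threshold is the illustrative value from the example above, not a calibrated one):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_speaker(emb_a, emb_b, threshold=0.7):
    """Decide whether two embeddings belong to the same speaker."""
    return cosine_similarity(emb_a, emb_b) >= threshold

print(same_speaker([1.0, 0.0], [1.0, 0.1]))  # near-identical directions
print(same_speaker([1.0, 0.0], [0.0, 1.0]))  # orthogonal embeddings
```

In practice the threshold is tuned on a held-out verification set for the chosen checkpoint.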


</div></div>

## Multimodal

The following ORT classes are available for multimodal tasks.

### ORTModelForVision2Seq[[optimum.onnxruntime.ORTModelForVision2Seq]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForVision2Seq</name><anchor>optimum.onnxruntime.ORTModelForVision2Seq</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1455</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "encoder_session", "val": ": InferenceSession = None"}, {"name": "decoder_session", "val": ": InferenceSession = None"}, {"name": "decoder_with_past_session", "val": ": InferenceSession | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "generation_config", "val": ": GenerationConfig | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
Vision sequence-to-sequence model with a language modeling head for ONNX Runtime inference. This class officially supports vision encoder-decoder and pix2struct.
This model inherits from `~onnxruntime.modeling.ORTModelForConditionalGeneration`; check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the `onnxruntime.modeling.ORTModelForConditionalGeneration.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForVision2Seq.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1470</source><parameters>[{"name": "pixel_values", "val": ": torch.FloatTensor | None = None"}, {"name": "attention_mask", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_input_ids", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_attention_mask", "val": ": torch.BoolTensor | None = None"}, {"name": "encoder_outputs", "val": ": BaseModelOutput | list[torch.FloatTensor] | None = None"}, {"name": "past_key_values", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "cache_position", "val": ": torch.Tensor | None = None"}, {"name": "use_cache", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pixel_values** (`torch.FloatTensor`) --
  Features extracted from an image. This tensor should be of shape
  `(batch_size, num_channels, height, width)`.
- **decoder_input_ids** (`torch.LongTensor`) --
  Indices of decoder input sequence tokens in the vocabulary of shape `(batch_size, decoder_sequence_length)`.
- **encoder_outputs** (`torch.FloatTensor`) --
  The encoder `last_hidden_state` of shape `(batch_size, encoder_sequence_length, hidden_size)`.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, defaults to `None`) --
  Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding.
  The tuple is of length `config.n_layers` with each tuple having 2 tensors of shape
  `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)` and 2 additional tensors of shape
  `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForVision2Seq` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForVision2Seq.forward.example">

Example of text generation:

```python
>>> from transformers import AutoImageProcessor, AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForVision2Seq
>>> from PIL import Image
>>> import requests


>>> processor = AutoImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> model = ORTModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", export=True)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(image, return_tensors="pt")

>>> gen_tokens = model.generate(**inputs)
>>> outputs = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForVision2Seq.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoImageProcessor, AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForVision2Seq
>>> from PIL import Image
>>> import requests


>>> processor = AutoImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> model = ORTModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", export=True)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_to_text = pipeline("image-to-text", model=model, tokenizer=tokenizer, image_processor=processor)
>>> pred = image_to_text(image)
```

</ExampleCodeBlock>


</div></div>

### ORTModelForPix2Struct[[optimum.onnxruntime.ORTModelForPix2Struct]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForPix2Struct</name><anchor>optimum.onnxruntime.ORTModelForPix2Struct</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1527</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "encoder_session", "val": ": InferenceSession = None"}, {"name": "decoder_session", "val": ": InferenceSession = None"}, {"name": "decoder_with_past_session", "val": ": InferenceSession | None = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "generation_config", "val": ": GenerationConfig | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
Pix2Struct model with a language modeling head for ONNX Runtime inference. This class officially supports pix2struct.
This model inherits from `~onnxruntime.modeling.ORTModelForConditionalGeneration`; check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the `onnxruntime.modeling.ORTModelForConditionalGeneration.from_pretrained` method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForPix2Struct.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling_seq2seq.py#L1540</source><parameters>[{"name": "flattened_patches", "val": ": torch.FloatTensor | None = None"}, {"name": "attention_mask", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_input_ids", "val": ": torch.LongTensor | None = None"}, {"name": "decoder_attention_mask", "val": ": torch.BoolTensor | None = None"}, {"name": "encoder_outputs", "val": ": BaseModelOutput | list[torch.FloatTensor] | None = None"}, {"name": "past_key_values", "val": ": tuple[tuple[torch.Tensor]] | None = None"}, {"name": "use_cache", "val": ": bool | None = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **flattened_patches** (`torch.FloatTensor` of shape `(batch_size, seq_length, hidden_size)`) --
  Flattened pixel patches. The `hidden_size` is obtained by the following formula: `hidden_size` =
  `num_channels` * `patch_size` * `patch_size`.
  The flattening of the pixel patches is done by `Pix2StructProcessor`.
- **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) --
  Mask to avoid performing attention on padding token indices.
- **decoder_input_ids** (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*) --
  Indices of decoder input sequence tokens in the vocabulary.
  Pix2StructText uses the `pad_token_id` as the starting token for `decoder_input_ids` generation. If
  `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
  `past_key_values`).
- **decoder_attention_mask** (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*) --
  Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
  be used by default.
- **encoder_outputs** (`tuple(tuple(torch.FloatTensor))`, *optional*) --
  Tuple consists of (`last_hidden_state`, `optional`: *hidden_states*, `optional`: *attentions*)
  `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)` is a sequence of hidden states at
  the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- **past_key_values** (`tuple(tuple(torch.FloatTensor))`, *optional*, defaults to `None`) --
  Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding.
  The tuple is of length `config.n_layers` with each tuple having 2 tensors of shape
  `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)` and 2 additional tensors of shape
  `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForPix2Struct` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForPix2Struct.forward.example">

Example of pix2struct:

```python
>>> from transformers import AutoProcessor
>>> from optimum.onnxruntime import ORTModelForPix2Struct
>>> from PIL import Image
>>> import requests

>>> processor = AutoProcessor.from_pretrained("google/pix2struct-ai2d-base")
>>> model = ORTModelForPix2Struct.from_pretrained("google/pix2struct-ai2d-base", export=True, use_io_binding=True)

>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
>>> inputs = processor(images=image, text=question, return_tensors="pt")

>>> gen_tokens = model.generate(**inputs)
>>> outputs = processor.batch_decode(gen_tokens, skip_special_tokens=True)
```

</ExampleCodeBlock>


</div></div>

## Custom Tasks

The following ORT classes are available for custom tasks.

### ORTModelForCustomTasks[[optimum.onnxruntime.ORTModelForCustomTasks]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForCustomTasks</name><anchor>optimum.onnxruntime.ORTModelForCustomTasks</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L2019</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model for any custom tasks. It can be used to leverage the inference acceleration for any single-file ONNX model, that may use custom inputs and outputs.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel); check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForCustomTasks.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L2022</source><parameters>[{"name": "**model_inputs", "val": ": torch.Tensor | np.ndarray"}]</parameters></docstring>
The `ORTModelForCustomTasks` forward method, overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.

</Tip>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForCustomTasks.forward.example">

Example of a custom task (e.g. a sentence transformers model taking `pooler_output` as output):

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCustomTasks

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
>>> model = ORTModelForCustomTasks.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")

>>> inputs = tokenizer("I love burritos!", return_tensors="np")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooler_output = outputs.pooler_output
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForCustomTasks.forward.example-2">

Example using `transformers.pipeline` (only if the task is supported):

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForCustomTasks

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
>>> model = ORTModelForCustomTasks.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
>>> onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)

>>> text = "I love burritos!"
>>> pred = onnx_extractor(text)
```

</ExampleCodeBlock>


</div></div>

#### ORTModelForFeatureExtraction[[optimum.onnxruntime.ORTModelForFeatureExtraction]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTModelForFeatureExtraction</name><anchor>optimum.onnxruntime.ORTModelForFeatureExtraction</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L648</source><parameters>[{"name": "config", "val": ": PretrainedConfig = None"}, {"name": "session", "val": ": InferenceSession = None"}, {"name": "use_io_binding", "val": ": bool | None = None"}, {"name": "model_save_dir", "val": ": str | Path | TemporaryDirectory | None = None"}]</parameters></docstring>
ONNX Model for the feature-extraction task.
This model inherits from [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel), check its documentation for the generic methods the
library implements for all its models (such as downloading or saving).

This class should be initialized using the [onnxruntime.modeling_ort.ORTModel.from_pretrained()](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel.from_pretrained) method.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>optimum.onnxruntime.ORTModelForFeatureExtraction.forward</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/modeling.py#L654</source><parameters>[{"name": "input_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "position_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "pixel_values", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "visual_embeds", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "visual_attention_mask", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "visual_token_type_ids", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "input_features", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "input_values", "val": ": torch.Tensor | np.ndarray | None = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **input_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Indices of input sequence tokens in the vocabulary.
  Indices can be obtained using [`AutoTokenizer`](https://huggingface.co/docs/transformers/autoclass_tutorial#autotokenizer).
  See [`PreTrainedTokenizer.encode`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.encode) and
  [`PreTrainedTokenizer.__call__`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizerBase.__call__) for details.
  [What are input IDs?](https://huggingface.co/docs/transformers/glossary#input-ids)
- **attention_mask** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
  [What are attention masks?](https://huggingface.co/docs/transformers/glossary#attention-mask)
- **token_type_ids** (`Union[torch.Tensor, np.ndarray, None]` of shape `(batch_size, sequence_length)`, defaults to `None`) --
  Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`:
  - 1 for tokens that are **sentence A**,
  - 0 for tokens that are **sentence B**.
  [What are token type IDs?](https://huggingface.co/docs/transformers/glossary#token-type-ids)</paramsdesc><paramgroups>0</paramgroups></docstring>
The `ORTModelForFeatureExtraction` forward method overrides the `__call__` special method.

<Tip>

Although the recipe for the forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.

</Tip>



<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForFeatureExtraction.forward.example">

Example of feature extraction:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForFeatureExtraction
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="np")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
[1, 12, 384]
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="optimum.onnxruntime.ORTModelForFeatureExtraction.forward.example-2">

Example using `transformers.pipeline`:

```python
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForFeatureExtraction

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> pred = onnx_extractor(text)
```

</ExampleCodeBlock>


</div></div>

### Quantization
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/quantization.md

# Quantization

## ORTQuantizer[[optimum.onnxruntime.ORTQuantizer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTQuantizer</name><anchor>optimum.onnxruntime.ORTQuantizer</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L85</source><parameters>[{"name": "onnx_model_path", "val": ": Path"}, {"name": "config", "val": ": PretrainedConfig | None = None"}]</parameters></docstring>
Handles the ONNX Runtime quantization process for models shared on huggingface.co/models.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compute_ranges</name><anchor>optimum.onnxruntime.ORTQuantizer.compute_ranges</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L261</source><parameters>[]</parameters><retdesc>The dictionary mapping node names to their quantization ranges.</retdesc></docstring>
Computes the quantization ranges.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fit</name><anchor>optimum.onnxruntime.ORTQuantizer.fit</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L159</source><parameters>[{"name": "dataset", "val": ": Dataset"}, {"name": "calibration_config", "val": ": CalibrationConfig"}, {"name": "onnx_augmented_model_name", "val": ": str | Path = 'augmented_model.onnx'"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}, {"name": "batch_size", "val": ": int = 1"}, {"name": "use_external_data_format", "val": ": bool = False"}, {"name": "use_gpu", "val": ": bool = False"}, {"name": "force_symmetric_range", "val": ": bool = False"}]</parameters><paramsdesc>- **dataset** (`Dataset`) --
  The dataset to use when performing the calibration step.
- **calibration_config** (`~CalibrationConfig`) --
  The configuration containing the parameters related to the calibration step.
- **onnx_augmented_model_name** (`Union[str, Path]`, defaults to `"augmented_model.onnx"`) --
  The path used to save the augmented model used to collect the quantization ranges.
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  List of the operators types to quantize.
- **batch_size** (`int`, defaults to 1) --
  The batch size to use when collecting the quantization ranges values.
- **use_external_data_format** (`bool`, defaults to `False`) --
  Whether to use the external data format to store models whose size is >= 2GB.
- **use_gpu** (`bool`, defaults to `False`) --
  Whether to use the GPU when collecting the quantization ranges values.
- **force_symmetric_range** (`bool`, defaults to `False`) --
  Whether to make the quantization ranges symmetric.</paramsdesc><paramgroups>0</paramgroups><retdesc>The dictionary mapping node names to their quantization ranges.</retdesc></docstring>
Performs the calibration step and computes the quantization ranges.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>optimum.onnxruntime.ORTQuantizer.from_pretrained</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L111</source><parameters>[{"name": "model_or_path", "val": ": ORTModel | str | Path"}, {"name": "file_name", "val": ": str | None = None"}]</parameters><paramsdesc>- **model_or_path** (`Union[ORTModel, str, Path]`) --
  Can be either:
  - A path to a saved exported ONNX Intermediate Representation (IR) model, e.g., `./my_model_directory/`.
  - Or an `ORTModelForXX` class, e.g., `ORTModelForQuestionAnswering`.
- **file_name** (`Optional[str]`, defaults to `None`) --
  Overwrites the default model file name from `"model.onnx"` to `file_name`.
  This allows you to load different model files from the same repository or directory.</paramsdesc><paramgroups>0</paramgroups><retdesc>An instance of `ORTQuantizer`.</retdesc></docstring>
Instantiates an `ORTQuantizer` from an ONNX model file or an `ORTModel`.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_calibration_dataset</name><anchor>optimum.onnxruntime.ORTQuantizer.get_calibration_dataset</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L416</source><parameters>[{"name": "dataset_name", "val": ": str"}, {"name": "num_samples", "val": ": int = 100"}, {"name": "dataset_config_name", "val": ": str | None = None"}, {"name": "dataset_split", "val": ": str | None = None"}, {"name": "preprocess_function", "val": ": Callable | None = None"}, {"name": "preprocess_batch", "val": ": bool = True"}, {"name": "seed", "val": ": int = 2016"}, {"name": "token", "val": ": bool | str | None = None"}]</parameters><paramsdesc>- **dataset_name** (`str`) --
  The dataset repository name on the Hugging Face Hub or path to a local directory containing data files
  to load to use for the calibration step.
- **num_samples** (`int`, defaults to 100) --
  The maximum number of samples composing the calibration dataset.
- **dataset_config_name** (`Optional[str]`, defaults to `None`) --
  The name of the dataset configuration.
- **dataset_split** (`Optional[str]`, defaults to `None`) --
  Which split of the dataset to use to perform the calibration step.
- **preprocess_function** (`Optional[Callable]`, defaults to `None`) --
  Processing function to apply to each example after loading dataset.
- **preprocess_batch** (`bool`, defaults to `True`) --
  Whether the `preprocess_function` should be batched.
- **seed** (`int`, defaults to 2016) --
  The random seed to use when shuffling the calibration dataset.
- **token** (`Optional[Union[bool,str]]`, defaults to `None`) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).</paramsdesc><paramgroups>0</paramgroups><retdesc>The calibration `datasets.Dataset` to use for the post-training static quantization calibration
step.</retdesc></docstring>
Creates the calibration `datasets.Dataset` to use for the post-training static quantization calibration step.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>partial_fit</name><anchor>optimum.onnxruntime.ORTQuantizer.partial_fit</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L212</source><parameters>[{"name": "dataset", "val": ": Dataset"}, {"name": "calibration_config", "val": ": CalibrationConfig"}, {"name": "onnx_augmented_model_name", "val": ": str | Path = 'augmented_model.onnx'"}, {"name": "operators_to_quantize", "val": ": list[str] | None = None"}, {"name": "batch_size", "val": ": int = 1"}, {"name": "use_external_data_format", "val": ": bool = False"}, {"name": "use_gpu", "val": ": bool = False"}, {"name": "force_symmetric_range", "val": ": bool = False"}]</parameters><paramsdesc>- **dataset** (`Dataset`) --
  The dataset to use when performing the calibration step.
- **calibration_config** (`CalibrationConfig`) --
  The configuration containing the parameters related to the calibration step.
- **onnx_augmented_model_name** (`Union[str, Path]`, defaults to `"augmented_model.onnx"`) --
  The path used to save the augmented model used to collect the quantization ranges.
- **operators_to_quantize** (`Optional[List[str]]`, defaults to `None`) --
  List of the operators types to quantize.
- **batch_size** (`int`, defaults to 1) --
  The batch size to use when collecting the quantization ranges values.
- **use_external_data_format** (`bool`, defaults to `False`) --
  Whether to use the external data format to store models whose size is >= 2GB.
- **use_gpu** (`bool`, defaults to `False`) --
  Whether to use the GPU when collecting the quantization ranges values.
- **force_symmetric_range** (`bool`, defaults to `False`) --
  Whether to make the quantization ranges symmetric.</paramsdesc><paramgroups>0</paramgroups></docstring>
Performs the calibration step and collects the quantization ranges without computing them.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>quantize</name><anchor>optimum.onnxruntime.ORTQuantizer.quantize</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/quantization.py#L279</source><parameters>[{"name": "quantization_config", "val": ": QuantizationConfig"}, {"name": "save_dir", "val": ": str | Path"}, {"name": "file_suffix", "val": ": str | None = 'quantized'"}, {"name": "calibration_tensors_range", "val": ": dict[str, tuple[float, float]] | None = None"}, {"name": "use_external_data_format", "val": ": bool = False"}, {"name": "preprocessor", "val": ": QuantizationPreprocessor | None = None"}]</parameters><paramsdesc>- **quantization_config** (`QuantizationConfig`) --
  The configuration containing the parameters related to quantization.
- **save_dir** (`Union[str, Path]`) --
  The directory where the quantized model should be saved.
- **file_suffix** (`Optional[str]`, defaults to `"quantized"`) --
  The file_suffix used to save the quantized model.
- **calibration_tensors_range** (`Optional[Dict[str, Tuple[float, float]]]`, defaults to `None`) --
  The dictionary mapping node names to their quantization ranges; used and required only when applying static quantization.
- **use_external_data_format** (`bool`, defaults to `False`) --
  Whether to use the external data format to store models whose size is >= 2GB.
- **preprocessor** (`Optional[QuantizationPreprocessor]`, defaults to `None`) --
  The preprocessor to use to collect the nodes to include or exclude from quantization.</paramsdesc><paramgroups>0</paramgroups><retdesc>The path of the resulting quantized model.</retdesc></docstring>
Quantizes a model given the quantization specifications defined in `quantization_config`.






</div></div>

### ONNX Runtime Pipelines[[optimum.onnxruntime.pipeline]]
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/pipelines.md

# ONNX Runtime Pipelines[[optimum.onnxruntime.pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimum.onnxruntime.pipeline</name><anchor>optimum.onnxruntime.pipeline</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/pipelines.py#L150</source><parameters>[{"name": "task", "val": ": str | None = None"}, {"name": "model", "val": ": str | ORTModel | None = None"}, {"name": "config", "val": ": str | PretrainedConfig | None = None"}, {"name": "tokenizer", "val": ": str | PreTrainedTokenizer | PreTrainedTokenizerFast | None = None"}, {"name": "feature_extractor", "val": ": str | FeatureExtractionMixin | None = None"}, {"name": "image_processor", "val": ": str | BaseImageProcessor | None = None"}, {"name": "processor", "val": ": str | ProcessorMixin | None = None"}, {"name": "revision", "val": ": str | None = None"}, {"name": "use_fast", "val": ": bool = True"}, {"name": "token", "val": ": str | bool | None = None"}, {"name": "device", "val": ": int | str | torch.device | None = None"}, {"name": "trust_remote_code", "val": ": bool | None = None"}, {"name": "model_kwargs", "val": ": dict[str, Any] | None = None"}, {"name": "pipeline_class", "val": ": Any | None = None"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **task** (`str`) --
  The task defining which pipeline will be returned. Currently accepted tasks are:

  - `"audio-classification"`: will return a `AudioClassificationPipeline`.
  - `"automatic-speech-recognition"`: will return a `AutomaticSpeechRecognitionPipeline`.
  - `"depth-estimation"`: will return a `DepthEstimationPipeline`.
  - `"document-question-answering"`: will return a `DocumentQuestionAnsweringPipeline`.
  - `"feature-extraction"`: will return a `FeatureExtractionPipeline`.
  - `"fill-mask"`: will return a `FillMaskPipeline`:.
  - `"image-classification"`: will return a `ImageClassificationPipeline`.
  - `"image-feature-extraction"`: will return an `ImageFeatureExtractionPipeline`.
  - `"image-segmentation"`: will return a `ImageSegmentationPipeline`.
  - `"image-text-to-text"`: will return a `ImageTextToTextPipeline`.
  - `"image-to-image"`: will return a `ImageToImagePipeline`.
  - `"image-to-text"`: will return a `ImageToTextPipeline`.
  - `"mask-generation"`: will return a `MaskGenerationPipeline`.
  - `"object-detection"`: will return a `ObjectDetectionPipeline`.
  - `"question-answering"`: will return a `QuestionAnsweringPipeline`.
  - `"summarization"`: will return a `SummarizationPipeline`.
  - `"table-question-answering"`: will return a `TableQuestionAnsweringPipeline`.
  - `"text2text-generation"`: will return a `Text2TextGenerationPipeline`.
  - `"text-classification"` (alias `"sentiment-analysis"` available): will return a
    `TextClassificationPipeline`.
  - `"text-generation"`: will return a `TextGenerationPipeline`:.
  - `"text-to-audio"` (alias `"text-to-speech"` available): will return a `TextToAudioPipeline`:.
  - `"token-classification"` (alias `"ner"` available): will return a `TokenClassificationPipeline`.
  - `"translation"`: will return a `TranslationPipeline`.
  - `"translation_xx_to_yy"`: will return a `TranslationPipeline`.
  - `"video-classification"`: will return a `VideoClassificationPipeline`.
  - `"visual-question-answering"`: will return a `VisualQuestionAnsweringPipeline`.
  - `"zero-shot-classification"`: will return a `ZeroShotClassificationPipeline`.
  - `"zero-shot-image-classification"`: will return a `ZeroShotImageClassificationPipeline`.
  - `"zero-shot-audio-classification"`: will return a `ZeroShotAudioClassificationPipeline`.
  - `"zero-shot-object-detection"`: will return a `ZeroShotObjectDetectionPipeline`.

- **model** (`str` or `ORTModel`, *optional*) --
  The model that will be used by the pipeline to make predictions. This can be a model identifier or an
  actual instance of a ONNX Runtime model inheriting from `ORTModel`.

  If not provided, the default for the `task` will be loaded.
- **config** (`str` or `PretrainedConfig`, *optional*) --
  The configuration that will be used by the pipeline to instantiate the model. This can be a model
  identifier or an actual pretrained model configuration inheriting from `PretrainedConfig`.

  If not provided, the default configuration file for the requested model will be used. That means that if
  `model` is given, its default configuration will be used. However, if `model` is not supplied, this
  `task`'s default model's config is used instead.
- **tokenizer** (`str` or `PreTrainedTokenizer`, *optional*) --
  The tokenizer that will be used by the pipeline to encode data for the model. This can be a model
  identifier or an actual pretrained tokenizer inheriting from `PreTrainedTokenizer`.

  If not provided, the default tokenizer for the given `model` will be loaded (if it is a string). If `model`
  is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string).
  However, if `config` is also not given or not a string, then the default tokenizer for the given `task`
  will be loaded.
- **feature_extractor** (`str` or `PreTrainedFeatureExtractor`, *optional*) --
  The feature extractor that will be used by the pipeline to encode data for the model. This can be a model
  identifier or an actual pretrained feature extractor inheriting from `PreTrainedFeatureExtractor`.

  Feature extractors are used for non-NLP models, such as Speech or Vision models as well as multi-modal
  models. Multi-modal models will also require a tokenizer to be passed.

  If not provided, the default feature extractor for the given `model` will be loaded (if it is a string). If
  `model` is not specified or not a string, then the default feature extractor for `config` is loaded (if it
  is a string). However, if `config` is also not given or not a string, then the default feature extractor
  for the given `task` will be loaded.
- **image_processor** (`str` or `BaseImageProcessor`, *optional*) --
  The image processor that will be used by the pipeline to preprocess images for the model. This can be a
  model identifier or an actual image processor inheriting from `BaseImageProcessor`.

  Image processors are used for Vision models and multi-modal models that require image inputs. Multi-modal
  models will also require a tokenizer to be passed.

  If not provided, the default image processor for the given `model` will be loaded (if it is a string). If
  `model` is not specified or not a string, then the default image processor for `config` is loaded (if it is
  a string).
- **processor** (`str` or `ProcessorMixin`, *optional*) --
  The processor that will be used by the pipeline to preprocess data for the model. This can be a model
  identifier or an actual processor inheriting from `ProcessorMixin`.

  Processors are used for multi-modal models that require multi-modal inputs, for example, a model that
  requires both text and image inputs.

  If not provided, the default processor for the given `model` will be loaded (if it is a string). If `model`
  is not specified or not a string, then the default processor for `config` is loaded (if it is a string).
- **revision** (`str`, *optional*, defaults to `"main"`) --
  When passing a task name or a string model identifier: The specific model version to use. It can be a
  branch name, a tag name, or a commit id, since we use a git-based system for storing models and other
  artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- **use_fast** (`bool`, *optional*, defaults to `True`) --
  Whether or not to use a Fast tokenizer if possible (a `PreTrainedTokenizerFast`).
- **token** (`str` or `bool`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
  when running `hf auth login` (stored in `~/.huggingface`).
- **device** (`int` or `str` or `torch.device`) --
  Defines the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank like `1`) on which this
  pipeline will be allocated.
- **device_map** (`str` or `dict[str, Union[int, str, torch.device]]`, *optional*) --
  Sent directly as `model_kwargs` (just a simpler shortcut). When `accelerate` library is present, set
  `device_map="auto"` to compute the most optimized `device_map` automatically (see
  [here](https://huggingface.co/docs/accelerate/main/en/package_reference/big_modeling#accelerate.cpu_offload)
  for more information).

  <Tip warning={true}>

  Do not use `device_map` and `device` at the same time, as they will conflict.

  </Tip>

- **torch_dtype** (`str` or `torch.dtype`, *optional*) --
  Sent directly as `model_kwargs` (just a simpler shortcut) to use the available precision for this model
  (`torch.float16`, `torch.bfloat16`, ... or `"auto"`).
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom code defined on the Hub in their own modeling, configuration,
  tokenization or even pipeline files. This option should only be set to `True` for repositories you trust
  and in which you have read the code, as it will execute code present on the Hub on your local machine.
- **model_kwargs** (`dict[str, Any]`, *optional*) --
  Additional dictionary of keyword arguments passed along to the model's `from_pretrained(...,
  **model_kwargs)` function.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional keyword arguments passed along to the specific pipeline init (see the documentation for the
  corresponding pipeline class for possible values).</paramsdesc><paramgroups>0</paramgroups><rettype>`Pipeline`</rettype><retdesc>A suitable pipeline for the task.</retdesc></docstring>
Utility factory method to build a `Pipeline` with an ONNX Runtime model, similar to `transformers.pipeline`.

A pipeline consists of:

- One or more components for pre-processing model inputs, such as a [tokenizer](tokenizer),
[image_processor](image_processor), [feature_extractor](feature_extractor), or [processor](processors).
- A [model](model) that generates predictions from the inputs.
- Optional post-processing steps to refine the model's output, which can also be handled by processors.

<Tip>
While `tokenizer`, `feature_extractor`, `image_processor`, and `processor` are all optional arguments, they
shouldn't all be specified at once. If these components are not provided, `pipeline` will try to load the
required ones automatically. If you want to provide them explicitly, refer to the documentation of the
specific pipeline to see which components it requires.
</Tip>







<ExampleCodeBlock anchor="optimum.onnxruntime.pipeline.example">

Examples:
```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForTokenClassification, pipeline

>>> # Sentiment analysis pipeline
>>> analyzer = pipeline("sentiment-analysis")

>>> # Question answering pipeline, specifying the checkpoint identifier
>>> oracle = pipeline(
...     "question-answering", model="distilbert/distilbert-base-cased-distilled-squad", tokenizer="google-bert/bert-base-cased"
... )

>>> # Named entity recognition pipeline, passing in a specific model and tokenizer
>>> model = ORTModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> recognizer = pipeline("ner", model=model, tokenizer=tokenizer)
```

</ExampleCodeBlock>


</div>

### Optimization
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/package_reference/optimization.md

# Optimization

## ORTOptimizer[[optimum.onnxruntime.ORTOptimizer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class optimum.onnxruntime.ORTOptimizer</name><anchor>optimum.onnxruntime.ORTOptimizer</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/optimization.py#L49</source><parameters>[{"name": "onnx_model_path", "val": ": list[os.PathLike]"}, {"name": "config", "val": ": PretrainedConfig"}, {"name": "from_ortmodel", "val": ": bool = False"}]</parameters></docstring>
Handles the ONNX Runtime optimization process for models shared on huggingface.co/models.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>optimum.onnxruntime.ORTOptimizer.from_pretrained</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/optimization.py#L80</source><parameters>[{"name": "model_or_path", "val": ": str | os.PathLike | ORTModel"}, {"name": "file_names", "val": ": list[str] | None = None"}]</parameters><paramsdesc>- **model_or_path** (`Union[str, os.PathLike, ORTModel]`) --
  The path to a local directory hosting the model to optimize, or an instance of an `ORTModel` to optimize.
  Can be either:
  - A path to a local *directory* containing the model to optimize.
  - An instance of [ORTModel](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/modeling#optimum.onnxruntime.ORTModel).
- **file_names** (`Optional[List[str]]`, defaults to `None`) --
  The list of file names of the models to optimize.</paramsdesc><paramgroups>0</paramgroups></docstring>
Initializes the `ORTOptimizer` from a local directory or an `ORTModel`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_fused_operators</name><anchor>optimum.onnxruntime.ORTOptimizer.get_fused_operators</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/optimization.py#L247</source><parameters>[{"name": "onnx_model_path", "val": ": str | os.PathLike"}]</parameters><paramsdesc>- **onnx_model_path** (`Union[str, os.PathLike]`) --
  Path of the ONNX model.</paramsdesc><paramgroups>0</paramgroups><retdesc>The dictionary mapping the name of the fused operators to the number of times they appear in the model.</retdesc></docstring>
Computes the dictionary mapping the name of the fused operators to the number of times they appear in the model.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_nodes_number_difference</name><anchor>optimum.onnxruntime.ORTOptimizer.get_nodes_number_difference</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/optimization.py#L265</source><parameters>[{"name": "onnx_model_path", "val": ": str | os.PathLike"}, {"name": "onnx_optimized_model_path", "val": ": str | os.PathLike"}]</parameters><paramsdesc>- **onnx_model_path** (`Union[str, os.PathLike]`) --
  Path of the ONNX model.
- **onnx_optimized_model_path** (`Union[str, os.PathLike]`) --
  Path of the optimized ONNX model.</paramsdesc><paramgroups>0</paramgroups><retdesc>The difference in the number of nodes between the original and the optimized model.</retdesc></docstring>
Computes the difference in the number of nodes between the original and the optimized model.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_operators_difference</name><anchor>optimum.onnxruntime.ORTOptimizer.get_operators_difference</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/optimization.py#L293</source><parameters>[{"name": "onnx_model_path", "val": ": str | os.PathLike"}, {"name": "onnx_optimized_model_path", "val": ": str | os.PathLike"}]</parameters><paramsdesc>- **onnx_model_path** (`Union[str, os.PathLike]`) --
  Path of the ONNX model.
- **onnx_optimized_model_path** (`Union[str, os.PathLike]`) --
  Path of the optimized ONNX model.</paramsdesc><paramgroups>0</paramgroups><retdesc>The dictionary mapping each operator name to the difference in the number of corresponding nodes between the
original and the optimized model.</retdesc></docstring>
Computes the dictionary mapping each operator name to the difference in the number of corresponding nodes between
the original and the optimized model.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>optimize</name><anchor>optimum.onnxruntime.ORTOptimizer.optimize</anchor><source>https://github.com/huggingface/optimum-onnx/blob/v0.0.1/optimum/onnxruntime/optimization.py#L128</source><parameters>[{"name": "optimization_config", "val": ": OptimizationConfig"}, {"name": "save_dir", "val": ": str | os.PathLike"}, {"name": "file_suffix", "val": ": str | None = 'optimized'"}, {"name": "one_external_file", "val": ": bool = True"}]</parameters><paramsdesc>- **optimization_config** ([OptimizationConfig](/docs/optimum/v0.0.1/en/onnxruntime/package_reference/configuration#optimum.onnxruntime.OptimizationConfig)) --
  The configuration containing the parameters related to optimization.
- **save_dir** (`Union[str, os.PathLike]`) --
  The path used to save the optimized model.
- **file_suffix** (`str`, defaults to `"optimized"`) --
  The file suffix used to save the optimized model.
- **one_external_file** (`bool`, defaults to `True`) --
  When `use_external_data_format=True`, whether to save all tensors to one external file.
  If False, save each tensor to a file named with the tensor name.</paramsdesc><paramgroups>0</paramgroups></docstring>
Optimizes a model given the optimization specifications defined in `optimization_config`.




</div></div>

### ONNX 🤝 ONNX Runtime
https://huggingface.co/docs/optimum/v0.0.1/onnxruntime/concept_guides/onnx.md

# ONNX 🤝 ONNX Runtime

ONNX is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an _intermediate representation_) that represents the flow of data through the neural network.

<Tip>

You can use [Netron](https://netron.app/) to visualize any ONNX file on the Hugging Face Hub. Simply append the file's URL to `http://netron.app?url=`, as in [this example](https://netron.app/?url=https://huggingface.co/cmarkea/distilcamembert-base-ner/blob/main/model.onnx).

</Tip>

By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).

Where ONNX really shines is when it is coupled with a dedicated accelerator like ONNX Runtime, or ORT for short. ORT provides tools to optimize the ONNX graph through techniques like operator fusion and constant folding, and defines an interface to execution providers that allow you to run the model on different types of hardware.
