# PEFT

## Docs

- [PEFT](https://huggingface.co/docs/peft/v0.18.0.rc0/index.md)
- [Quicktour](https://huggingface.co/docs/peft/v0.18.0.rc0/quicktour.md)
- [Installation](https://huggingface.co/docs/peft/v0.18.0.rc0/install.md)
- [IA3](https://huggingface.co/docs/peft/v0.18.0.rc0/task_guides/ia3.md)
- [LoRA methods](https://huggingface.co/docs/peft/v0.18.0.rc0/task_guides/lora_based_methods.md)
- [Prompt-based methods](https://huggingface.co/docs/peft/v0.18.0.rc0/task_guides/prompt_based_methods.md)
- [RandLora: Full-rank parameter-efficient fine-tuning of large models](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/randlora.md)
- [LoKr](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/lokr.md)
- [Tuners](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/tuners.md)
- [IA3](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/ia3.md)
- [PEFT types](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/peft_types.md)
- [P-tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/p_tuning.md)
- [FourierFT: Discrete Fourier Transformation Fine-Tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/fourierft.md)
- [C3A: Parameter-Efficient Fine-Tuning via Circular Convolution](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/c3a.md)
- [Model merge](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/merge_utils.md)
- [Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation (HRA)](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/hra.md)
- [VeRA: Vector-based Random Matrix Adaptation](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/vera.md)
- [Functions for PEFT integration](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/functional.md)
- [Hotswapping adapters](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/hotswap.md)
- [BOFT](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/boft.md)
- [Multitask prompt tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/multitask_prompt_tuning.md)
- [Helper methods](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/helpers.md)
- [MiSS](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/miss.md)
- [Prompt tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/prompt_tuning.md)
- [LoHa](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/loha.md)
- [WaveFT: Wavelet Fine-Tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/waveft.md)
- [AutoPeftModels](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/auto_class.md)
- [Prefix tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/prefix_tuning.md)
- [DeLoRA: Decoupled Low-rank Adaptation](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/delora.md)
- [RoAd](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/road.md)
- [OFT](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/oft.md)
- [Trainable Tokens](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/trainable_tokens.md)
- [Bone](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/bone.md)
- [X-LoRA](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/xlora.md)
- [Llama-Adapter](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/llama_adapter.md)
- [Models](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/peft_model.md)
- [LoRA](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/lora.md)
- [LyCORIS](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/adapter_utils.md)
- [AdaLoRA](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/adalora.md)
- [Configuration](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/config.md)
- [Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/cpt.md)
- [LayerNorm Tuning](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/layernorm_tuning.md)
- [Polytropon](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/poly.md)
- [OSF (Orthogonal Subspace Fine-tuning)](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/osf.md)
- [Sparse High Rank Adapters](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/shira.md)
- [VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks](https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/vblora.md)
- [IA3](https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/ia3.md)
- [Soft prompts](https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/prompting.md)
- [Orthogonal Finetuning (OFT and BOFT)](https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/oft.md)
- [Adapters](https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/adapter.md)
- [Adapter injection](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/low_level_api.md)
- [Custom models](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/custom_models.md)
- [Troubleshooting](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/troubleshooting.md)
- [Mixed adapter types](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/mixed_models.md)
- [Contribute to PEFT](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/contributing.md)
- [PEFT checkpoint format](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/checkpoint.md)
- [Quantization](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/quantization.md)
- [Model merging](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/model_merging.md)
- [LoRA](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/lora.md)
- [torch.compile](https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/torch_compile.md)
- [Fully Sharded Data Parallel](https://huggingface.co/docs/peft/v0.18.0.rc0/accelerate/fsdp.md)
- [DeepSpeed](https://huggingface.co/docs/peft/v0.18.0.rc0/accelerate/deepspeed.md)
- [PEFT configurations and models](https://huggingface.co/docs/peft/v0.18.0.rc0/tutorial/peft_model_config.md)
- [PEFT integrations](https://huggingface.co/docs/peft/v0.18.0.rc0/tutorial/peft_integrations.md)

### PEFT
https://huggingface.co/docs/peft/v0.18.0.rc0/index.md

# PEFT

🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting large pretrained models to various downstream applications without fine-tuning all of a model's parameters, which is prohibitively costly. PEFT methods only fine-tune a small number of (extra) model parameters - significantly decreasing computational and storage costs - while yielding performance comparable to a fully fine-tuned model. This makes it more accessible to train and store large language models (LLMs) on consumer hardware.

PEFT is integrated with the Transformers, Diffusers, and Accelerate libraries to provide a faster and easier way to load, train, and use large models for inference.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="quicktour"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Quicktour</div>
      <p class="text-gray-700">Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./task_guides/prompt_based_methods"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/adapter"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.</p>
   </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/config"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how 🤗 PEFT classes and methods work.</p>
    </a>
  </div>
</div>

An interactive overview of PEFT methods: https://stevhliu-peft-methods.hf.space



### Quicktour
https://huggingface.co/docs/peft/v0.18.0.rc0/quicktour.md

# Quicktour

PEFT offers parameter-efficient methods for finetuning large pretrained models. The traditional paradigm is to finetune all of a model's parameters for each downstream task, but this is becoming exceedingly costly and impractical because of the enormous number of parameters in models today. Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA) to reduce the number of trainable parameters.

This quicktour will show you PEFT's main features and how you can train or run inference on large models that would typically be inaccessible on consumer devices.

## Train

Each PEFT method is defined by a [PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig) class that stores all the important parameters for building a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel). For example, to train with LoRA, load and create a [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) class and specify the following parameters:

- `task_type`: the task to train for (sequence-to-sequence language modeling in this case)
- `inference_mode`: whether you're using the model for inference or not
- `r`: the dimension of the low-rank matrices
- `lora_alpha`: the scaling factor for the low-rank matrices
- `lora_dropout`: the dropout probability of the LoRA layers

```python
from peft import LoraConfig, TaskType

peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)
```

> [!TIP]
> See the [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) reference for more details about other parameters you can adjust, such as the modules to target or the bias type.

Once the [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) is set up, create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) with the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function. It takes a base model - which you can load from the Transformers library - and the [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) containing the parameters for how to configure a model for training with LoRA.

Load the base model you want to finetune.

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
```

Wrap the base model and `peft_config` with the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel). To get a sense of the number of trainable parameters in your model, use the `print_trainable_parameters` method.

```python
from peft import get_peft_model

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"
```

Out of [bigscience/mt0-large's](https://huggingface.co/bigscience/mt0-large) 1.2B parameters, you're only training 0.19% of them!

That is it 🎉! Now you can train the model with the Transformers [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer), Accelerate, or any custom PyTorch training loop.

For example, to train with the [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer) class, set up a [TrainingArguments](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.TrainingArguments) class with some training hyperparameters.

```py
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="your-name/bigscience/mt0-large-lora",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    weight_decay=0.01,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
```

Pass the model, training arguments, dataset, tokenizer, and any other necessary component to the [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer), and call [train](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer.train) to start training.

```py
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    processing_class=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()
```
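
If you prefer a custom PyTorch training loop instead, the [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) behaves like any other `torch.nn.Module`. Here is a minimal sketch (it assumes a `train_dataloader` of tokenized batches, which this quicktour doesn't build):

```py
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

model.train()
for batch in train_dataloader:  # assumed DataLoader of tokenized batches
    outputs = model(**batch)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```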

### Save model

After your model is finished training, you can save your model to a directory using the [save_pretrained](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) function.

```py
model.save_pretrained("output_dir")
```

You can also save your model to the Hub (make sure you're logged in to your Hugging Face account first) with the [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) function.

```python
from huggingface_hub import notebook_login

notebook_login()
model.push_to_hub("your-name/bigscience/mt0-large-lora")
```

Both methods only save the extra PEFT weights that were trained, meaning it is super efficient to store, transfer, and load. For example, this [facebook/opt-350m](https://huggingface.co/ybelkada/opt-350m-lora) model trained with LoRA only contains two files: `adapter_config.json` and `adapter_model.safetensors`. The `adapter_model.safetensors` file is just 6.3MB!
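
As a quick check, you can list what `save_pretrained` actually wrote (a sketch assuming the `"output_dir"` used above):

```py
import os

# only the adapter files are written, not the full base model
for f in os.listdir("output_dir"):
    size_mb = os.path.getsize(os.path.join("output_dir", f)) / 1e6
    print(f"{f}: {size_mb:.1f} MB")
```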

<div class="flex flex-col justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
  <figcaption class="text-center">The adapter weights for a opt-350m model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.</figcaption>
</div>

## Inference

> [!TIP]
> Take a look at the [AutoPeftModel](package_reference/auto_class) API reference for a complete list of available `AutoPeftModel` classes.

Easily load any PEFT-trained model for inference with the [AutoPeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/auto_class#peft.AutoPeftModel) class and the [from_pretrained](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method:

```py
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

model = model.to("cuda")
model.eval()
inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")

outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=50)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])

"Preheat the oven to 350 degrees and place the cookie dough in the center of the oven. In a large bowl, combine the flour, baking powder, baking soda, salt, and cinnamon. In a separate bowl, combine the egg yolks, sugar, and vanilla."
```

For other tasks that aren't explicitly supported with an `AutoPeftModelFor` class - such as automatic speech recognition - you can still use the base [AutoPeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/auto_class#peft.AutoPeftModel) class to load a model for the task.

```py
from peft import AutoPeftModel

model = AutoPeftModel.from_pretrained("smangrul/openai-whisper-large-v2-LORA-colab")
```

## Next steps

Now that you've seen how to train a model with one of the PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in the quicktour:

1. prepare a [PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig) for a PEFT method
2. use the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) method to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) from the configuration and base model

Then you can train it however you like! To load a PEFT model for inference, you can use the [AutoPeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/auto_class#peft.AutoPeftModel) class.
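
For example, a minimal prompt tuning setup follows those same two steps (a sketch; [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) is just an illustrative base model):

```py
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# 1. prepare a PeftConfig for the method (prompt tuning here)
peft_config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)

# 2. wrap a base model to get a trainable PeftModel
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```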

Feel free to also take a look at the task guides if you're interested in training a model with another PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, token classification, and more.



### Installation
https://huggingface.co/docs/peft/v0.18.0.rc0/install.md

# Installation

Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 PEFT. 🤗 PEFT is tested on **Python 3.9+**.

🤗 PEFT is available on PyPI, as well as GitHub:

## PyPI

To install 🤗 PEFT from PyPI:

```bash
pip install peft
```
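
To verify the installation, print the library version:

```py
import peft

print(peft.__version__)
```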

## Source

New features that haven't been released yet are added every day, which also means there may be some bugs. To try them out, install from the GitHub repository:

```bash
pip install git+https://github.com/huggingface/peft
```

If you're working on contributing to the library or wish to play with the source code and see live results as you run the code, an editable version can be installed from a locally cloned version of the repository:

```bash
git clone https://github.com/huggingface/peft
cd peft
pip install -e .[test]
```



### IA3
https://huggingface.co/docs/peft/v0.18.0.rc0/task_guides/ia3.md

# IA3

[IA3](../conceptual_guides/ia3) multiplies the model's activations (the keys and values in the self-attention and encoder-decoder attention blocks, and the intermediate activation of the position-wise feedforward network) by three learned vectors. This PEFT method introduces an even smaller number of trainable parameters than LoRA, which introduces weight matrices instead of vectors. The original model's parameters are kept frozen and only these vectors are updated. As a result, it is faster, cheaper, and more efficient to finetune for a new downstream task.
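
To illustrate the idea (this is a conceptual sketch, not the library implementation), the learned vector simply rescales a frozen layer's activations element-wise:

```py
import torch

batch, seq_len, d_k = 8, 10, 64
keys = torch.randn(batch, seq_len, d_k)    # activations from a frozen attention block
l_k = torch.ones(d_k, requires_grad=True)  # the only trainable parameters for these activations
rescaled_keys = keys * l_k                 # element-wise rescaling, broadcast over batch and sequence
```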

This guide will show you how to train a sequence-to-sequence model with IA3 to *generate a sentiment* given some financial news.

> [!TIP]
> Some familiarity with the general process of training a sequence-to-sequence model would be really helpful and allow you to focus on how to apply IA3. If you're new, we recommend taking a look at the [Translation](https://huggingface.co/docs/transformers/tasks/translation) and [Summarization](https://huggingface.co/docs/transformers/tasks/summarization) guides first from the Transformers documentation. When you're ready, come back and see how easy it is to drop PEFT into your training!

## Dataset

You'll use the `sentences_allagree` subset of the [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset. This subset contains financial news with 100% annotator agreement on the sentiment label. Take a look at the [dataset viewer](https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree) for a better idea of the data and sentences you'll be working with.

Load the dataset with the [load_dataset](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/loading_methods#datasets.load_dataset) function. This subset of the dataset only contains a train split, so use the `train_test_split` function to create a train and validation split. Create a new `text_label` column so it is easier to understand what the `label` values `0`, `1`, and `2` mean.

```py
from datasets import load_dataset

ds = load_dataset("financial_phrasebank", "sentences_allagree")
ds = ds["train"].train_test_split(test_size=0.1)
ds["validation"] = ds["test"]
del ds["test"]

classes = ds["train"].features["label"].names
ds = ds.map(
    lambda x: {"text_label": [classes[label] for label in x["label"]]},
    batched=True,
    num_proc=1,
)

ds["train"][0]
{'sentence': 'It will be operated by Nokia , and supported by its Nokia NetAct network and service management system .',
 'label': 1,
 'text_label': 'neutral'}
```

Load a tokenizer and create a preprocessing function that:

1. tokenizes the inputs, and pads and truncates each sequence to the `max_length`
2. applies the same tokenizer to the labels, but with a shorter `max_length` that corresponds to the label
3. masks the padding tokens

```py
from transformers import AutoTokenizer

text_column = "sentence"
label_column = "text_label"
max_length = 128

tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")

def preprocess_function(examples):
    inputs = examples[text_column]
    targets = examples[label_column]
    model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
    labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt")
    labels = labels["input_ids"]
    labels[labels == tokenizer.pad_token_id] = -100
    model_inputs["labels"] = labels
    return model_inputs
```

Use the [map](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/main_classes#datasets.Dataset.map) function to apply the preprocessing function to the entire dataset.

```py
processed_ds = ds.map(
    preprocess_function,
    batched=True,
    num_proc=1,
    remove_columns=ds["train"].column_names,
    load_from_cache_file=False,
    desc="Running tokenizer on dataset",
)
```

Create a training and evaluation [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), and set `pin_memory=True` to speed up data transfer to the accelerator during training if your dataset samples are on a CPU.

```py
from torch.utils.data import DataLoader
from transformers import default_data_collator

train_ds = processed_ds["train"]
eval_ds = processed_ds["validation"]

batch_size = 8

train_dataloader = DataLoader(
    train_ds, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
eval_dataloader = DataLoader(eval_ds, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
```

## Model

Now you can load a pretrained model to use as the base model for IA3. This guide uses the [bigscience/mt0-large](https://huggingface.co/bigscience/mt0-large) model, but you can use any sequence-to-sequence model you like.

```py
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
```

### PEFT configuration and model

All PEFT methods need a configuration that specifies all the parameters for how the PEFT method should be applied. Create an [IA3Config](/docs/peft/v0.18.0.rc0/en/package_reference/ia3#peft.IA3Config) with the task type and set the inference mode to `False`. You can find additional parameters for this configuration in the [API reference](../package_reference/ia3#ia3config).

> [!TIP]
> Call the [print_trainable_parameters()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.print_trainable_parameters) method to compare the number of trainable parameters of [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) versus the number of parameters in the base model!

Once the configuration is set up, pass it to the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function along with the base model to create a trainable [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel).

```py
from peft import IA3Config, get_peft_model

peft_config = IA3Config(task_type="SEQ_2_SEQ_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 282,624 || all params: 1,229,863,936 || trainable%: 0.022980103060766553"
```

### Training

Set up an optimizer and learning rate scheduler.

```py
import torch
from transformers import get_linear_schedule_with_warmup

lr = 8e-3
num_epochs = 3

optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=(len(train_dataloader) * num_epochs),
)
```

Move the model to the accelerator and create a training loop that reports the loss and perplexity for each epoch.

```py
from tqdm import tqdm

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model = model.to(device)

for epoch in range(num_epochs):
    model.train()
    total_loss = 0
    for step, batch in enumerate(tqdm(train_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        total_loss += loss.detach().float()
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

    model.eval()
    eval_loss = 0
    eval_preds = []
    for step, batch in enumerate(tqdm(eval_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**batch)
        loss = outputs.loss
        eval_loss += loss.detach().float()
        eval_preds.extend(
            tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
        )

    eval_epoch_loss = eval_loss / len(eval_dataloader)
    eval_ppl = torch.exp(eval_epoch_loss)
    train_epoch_loss = total_loss / len(train_dataloader)
    train_ppl = torch.exp(train_epoch_loss)
    print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
```

## Share your model

After training is complete, you can upload your model to the Hub with the [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) method. You'll need to login to your Hugging Face account first and enter your token when prompted.

```py
from huggingface_hub import notebook_login

notebook_login()

account = "<your-hf-account-name>"  # replace with your Hugging Face account name
peft_model_id = f"{account}/mt0-large-ia3"
model.push_to_hub(peft_model_id)
```

## Inference

To load the model for inference, use the [from_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/auto_class#peft.AutoPeftModel.from_pretrained) method. Let's also load a sentence of financial news from the dataset to generate a sentiment for.

```py
import torch

from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"

model = AutoPeftModelForSeq2SeqLM.from_pretrained("<your-hf-account-name>/mt0-large-ia3").to(device)
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")

i = 15
inputs = tokenizer(ds["validation"][text_column][i], return_tensors="pt")
print(ds["validation"][text_column][i])
"The robust growth was the result of the inclusion of clothing chain Lindex in the Group in December 2007 ."
```

Call the [generate](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to generate the predicted sentiment label.

```py
with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
['positive']
```



### LoRA methods
https://huggingface.co/docs/peft/v0.18.0.rc0/task_guides/lora_based_methods.md

# LoRA methods

A popular way to efficiently train large models is to insert smaller trainable matrices (typically into the attention blocks) that are a low-rank decomposition of the delta weight matrix to be learnt during finetuning. The pretrained model's original weight matrix is frozen and only the smaller matrices are updated during training. This reduces the number of trainable parameters, cutting memory usage and training time, which can be very expensive for large models.
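
To get a feel for the savings, here is a back-of-the-envelope comparison with illustrative shapes (not tied to any particular model):

```py
d, k, r = 768, 768, 16     # weight matrix shape and a typical LoRA rank
full_update = d * k        # parameters needed to learn a dense delta weight matrix
lora_update = r * (d + k)  # parameters in the two low-rank factors (d x r and r x k)
print(full_update, lora_update, f"{lora_update / full_update:.1%}")
# 589824 24576 4.2%
```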

There are several different ways to express the weight matrix as a low-rank decomposition, but [Low-Rank Adaptation (LoRA)](../conceptual_guides/adapter#low-rank-adaptation-lora) is the most common method. The PEFT library supports several other LoRA variants, such as [Low-Rank Hadamard Product (LoHa)](../conceptual_guides/adapter#low-rank-hadamard-product-loha), [Low-Rank Kronecker Product (LoKr)](../conceptual_guides/adapter#low-rank-kronecker-product-lokr), and [Adaptive Low-Rank Adaptation (AdaLoRA)](../conceptual_guides/adapter#adaptive-low-rank-adaptation-adalora). You can learn more about how these methods work conceptually in the [Adapters](../conceptual_guides/adapter) guide. If you're interested in applying these methods to other tasks and use cases such as semantic segmentation or token classification, take a look at our [notebook collection](https://huggingface.co/collections/PEFT/notebooks-6573b28b33e5a4bf5b157fc1)!

Additionally, PEFT supports the [X-LoRA](../conceptual_guides/adapter#mixture-of-lora-experts-x-lora) Mixture of LoRA Experts method.

This guide will show you how to quickly train an image classification model - with a low-rank decomposition method - to identify the class of food shown in an image.

> [!TIP]
> Some familiarity with the general process of training an image classification model would be really helpful and allow you to focus on the low-rank decomposition methods. If you're new, we recommend taking a look at the [Image classification](https://huggingface.co/docs/transformers/tasks/image_classification) guide first from the Transformers documentation. When you're ready, come back and see how easy it is to drop PEFT into your training!

Before you begin, make sure you have all the necessary libraries installed.

```bash
pip install -q peft transformers datasets
```

## Dataset

In this guide, you'll use the [Food-101](https://huggingface.co/datasets/food101) dataset which contains images of 101 food classes (take a look at the [dataset viewer](https://huggingface.co/datasets/food101/viewer/default/train) to get a better idea of what the dataset looks like).

Load the dataset with the [load_dataset](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/loading_methods#datasets.load_dataset) function.

```py
from datasets import load_dataset

ds = load_dataset("food101")
```

Each food class is labeled with an integer, so to make it easier to understand what these integers represent, you'll create a `label2id` and `id2label` dictionary to map the integer to its class label.

```py
labels = ds["train"].features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
    label2id[label] = i
    id2label[i] = label

id2label[2]
"baklava"
```

Load an image processor to properly resize and normalize the pixel values of the training and evaluation images.

```py
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
```

You can also use the image processor to prepare some transformation functions for data augmentation and pixel scaling.

```py
from torchvision.transforms import (
    CenterCrop,
    Compose,
    Normalize,
    RandomHorizontalFlip,
    RandomResizedCrop,
    Resize,
    ToTensor,
)

normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
train_transforms = Compose(
    [
        RandomResizedCrop(image_processor.size["height"]),
        RandomHorizontalFlip(),
        ToTensor(),
        normalize,
    ]
)

val_transforms = Compose(
    [
        Resize(image_processor.size["height"]),
        CenterCrop(image_processor.size["height"]),
        ToTensor(),
        normalize,
    ]
)

def preprocess_train(example_batch):
    example_batch["pixel_values"] = [train_transforms(image.convert("RGB")) for image in example_batch["image"]]
    return example_batch

def preprocess_val(example_batch):
    example_batch["pixel_values"] = [val_transforms(image.convert("RGB")) for image in example_batch["image"]]
    return example_batch
```

Define the training and validation datasets, and use the [set_transform](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/main_classes#datasets.Dataset.set_transform) function to apply the transformations on-the-fly.

```py
train_ds = ds["train"]
val_ds = ds["validation"]

train_ds.set_transform(preprocess_train)
val_ds.set_transform(preprocess_val)
```

Finally, you'll need a data collator to create a batch of training and evaluation data and convert the labels to `torch.tensor` objects.

```py
import torch

def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    labels = torch.tensor([example["label"] for example in examples])
    return {"pixel_values": pixel_values, "labels": labels}
```

## Model

Now let's load a pretrained model to use as the base model. This guide uses the [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) model, but you can use any image classification model you want. Pass the `label2id` and `id2label` dictionaries to the model so it knows how to map the integer labels to their class labels, and you can optionally pass the `ignore_mismatched_sizes=True` parameter if you're finetuning a checkpoint that has already been finetuned.

```py
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,
)
```

### PEFT configuration and model

Every PEFT method requires a configuration that holds all the parameters specifying how the PEFT method should be applied. Once the configuration is set up, pass it to the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function along with the base model to create a trainable [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel).

> [!TIP]
> Call the [print_trainable_parameters()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.print_trainable_parameters) method to compare the number of parameters of [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) versus the number of parameters in the base model!

<hfoptions id="loras">
<hfoption id="LoRA">

[LoRA](../conceptual_guides/adapter#low-rank-adaptation-lora) decomposes the weight update matrix into *two* smaller matrices. The size of these low-rank matrices is determined by their *rank* or `r`. A higher rank means the model has more parameters to train, but it also means the model has more learning capacity. You'll also want to specify the `target_modules` which determine where the smaller matrices are inserted. For this guide, you'll target the *query* and *value* matrices of the attention blocks. Other important parameters to set are `lora_alpha` (scaling factor), `bias` (whether `none`, `all` or only the LoRA bias parameters should be trained), and `modules_to_save` (the modules apart from the LoRA layers to be trained and saved). All of these parameters - and more - are found in the [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig).

```py
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],
    lora_dropout=0.1,
    bias="none",
    modules_to_save=["classifier"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
"trainable params: 667,493 || all params: 86,543,818 || trainable%: 0.7712775047664294"
```

</hfoption>
<hfoption id="LoHa">

[LoHa](../conceptual_guides/adapter#low-rank-hadamard-product-loha) decomposes the weight update matrix into *four* smaller matrices and each pair of smaller matrices is combined with the Hadamard product. This allows the weight update matrix to keep the same number of trainable parameters as LoRA, but with a higher rank (`r^2` for LoHa compared to `2*r` for LoRA). The size of the smaller matrices is determined by their *rank* or `r`. You'll also want to specify the `target_modules` which determines where the smaller matrices are inserted. For this guide, you'll target the *query* and *value* matrices of the attention blocks. Other important parameters to set are `alpha` (scaling factor), and `modules_to_save` (the modules apart from the LoHa layers to be trained and saved). All of these parameters - and more - are found in the [LoHaConfig](/docs/peft/v0.18.0.rc0/en/package_reference/loha#peft.LoHaConfig).

```py
from peft import LoHaConfig, get_peft_model

config = LoHaConfig(
    r=16,
    alpha=16,
    target_modules=["query", "value"],
    module_dropout=0.1,
    modules_to_save=["classifier"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
"trainable params: 1,257,317 || all params: 87,133,642 || trainable%: 1.4429753779831676"
```

</hfoption>
<hfoption id="LoKr">

[LoKr](../conceptual_guides/adapter#low-rank-kronecker-product-lokr) expresses the weight update matrix as a decomposition of a Kronecker product, creating a block matrix that is able to preserve the rank of the original weight matrix. The size of the smaller matrices is determined by their *rank* or `r`. You'll also want to specify the `target_modules` which determines where the smaller matrices are inserted. For this guide, you'll target the *query* and *value* matrices of the attention blocks. Other important parameters to set are `alpha` (scaling factor), and `modules_to_save` (the modules apart from the LoKr layers to be trained and saved). All of these parameters - and more - are found in the [LoKrConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lokr#peft.LoKrConfig).

```py
from peft import LoKrConfig, get_peft_model

config = LoKrConfig(
    r=16,
    alpha=16,
    target_modules=["query", "value"],
    module_dropout=0.1,
    modules_to_save=["classifier"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
"trainable params: 116,069 || all params: 87,172,042 || trainable%: 0.13314934162033282"
```

</hfoption>
<hfoption id="AdaLoRA">

[AdaLoRA](../conceptual_guides/adapter#adaptive-low-rank-adaptation-adalora) efficiently manages the LoRA parameter budget by assigning important weight matrices more parameters and pruning less important ones. In contrast, LoRA evenly distributes parameters across all modules. You can control the average desired *rank* or `r` of the matrices, and which modules to apply AdaLoRA to with `target_modules`. Other important parameters to set are `lora_alpha` (scaling factor), and `modules_to_save` (the modules apart from the AdaLoRA layers to be trained and saved). All of these parameters - and more - are found in the [AdaLoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/adalora#peft.AdaLoraConfig).

```py
from peft import AdaLoraConfig, get_peft_model

config = AdaLoraConfig(
    r=8,
    init_r=12,
    tinit=200,
    tfinal=1000,
    deltaT=10,
    target_modules=["query", "value"],
    modules_to_save=["classifier"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
"trainable params: 520,325 || all params: 87,614,722 || trainable%: 0.5938785036606062"
```

</hfoption>
</hfoptions>

### Training

For training, let's use the [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer) class from Transformers. The `Trainer` contains a PyTorch training loop, and when you're ready, call [train](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer.train) to start training. To customize the training run, configure the training hyperparameters in the [TrainingArguments](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.TrainingArguments) class. With LoRA-like methods, you can afford to use a higher batch size and learning rate.

> [!WARNING]
> AdaLoRA has an [update_and_allocate()](/docs/peft/v0.18.0.rc0/en/package_reference/adalora#peft.AdaLoraModel.update_and_allocate) method that should be called at each training step to update the parameter budget and mask, otherwise the adaptation step is not performed. This requires writing a custom training loop or subclassing the [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer) to incorporate this method. As an example, take a look at this [custom training loop](https://github.com/huggingface/peft/blob/912ad41e96e03652cabf47522cd876076f7a0c4f/examples/conditional_generation/peft_adalora_seq2seq.py#L120).
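
For reference, here is a minimal sketch of that extra step in a custom loop (adapted from the linked example; `model`, `optimizer`, and `train_dataloader` are assumed from a setup like the one in this guide):

```py
global_step = 0
for batch in train_dataloader:
    outputs = model(**batch)
    outputs.loss.backward()
    optimizer.step()
    # AdaLoRA: update the parameter budget and mask after each optimizer step
    model.base_model.update_and_allocate(global_step)
    optimizer.zero_grad()
    global_step += 1
```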

```py
from transformers import TrainingArguments, Trainer

account = "stevhliu"
peft_model_id = f"{account}/google/vit-base-patch16-224-in21k-lora"
batch_size = 128

args = TrainingArguments(
    peft_model_id,
    remove_unused_columns=False,
    eval_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-3,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=batch_size,
    fp16=True,
    num_train_epochs=5,
    logging_steps=10,
    load_best_model_at_end=True,
    label_names=["labels"],
)
```

Begin training with [train](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer.train).

```py
trainer = Trainer(
    model,
    args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    processing_class=image_processor,
    data_collator=collate_fn,
)
trainer.train()
```

## Share your model

Once training is complete, you can upload your model to the Hub with the [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) method. You’ll need to login to your Hugging Face account first and enter your token when prompted.

```py
from huggingface_hub import notebook_login

notebook_login()
```

Call [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) to save your model to your repository.

```py
model.push_to_hub(peft_model_id)
```

## Inference

Let's load the model from the Hub and test it out on a food image.

```py
from peft import PeftConfig, PeftModel
from transformers import AutoImageProcessor
from PIL import Image
import requests

config = PeftConfig.from_pretrained("stevhliu/vit-base-patch16-224-in21k-lora")
model = AutoModelForImageClassification.from_pretrained(
    config.base_model_name_or_path,
    label2id=label2id,
    id2label=id2label,
    ignore_mismatched_sizes=True,
)
model = PeftModel.from_pretrained(model, "stevhliu/vit-base-patch16-224-in21k-lora")

url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg">
</div>

Convert the image to RGB and return the underlying PyTorch tensors.

```py
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
```

Now run the model and return the predicted class!

```py
with torch.no_grad():
    outputs = model(**encoding)
    logits = outputs.logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
"Predicted class: beignets"
```



### Prompt-based methods
https://huggingface.co/docs/peft/v0.18.0.rc0/task_guides/prompt_based_methods.md

# Prompt-based methods

A prompt can describe a task or provide an example of a task you want the model to learn. Instead of manually creating these prompts, soft prompting methods add learnable parameters to the input embeddings that can be optimized for a specific task while keeping the pretrained model's parameters frozen. This makes it both faster and easier to finetune large language models (LLMs) for new downstream tasks.
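
Conceptually, a soft prompt is just a block of trainable vectors concatenated to the input embeddings before they enter the frozen model. An illustrative sketch (not the library implementation):

```py
import torch

token_embeds = torch.randn(1, 12, 1024)                        # frozen embeddings for 12 input tokens
soft_prompt = torch.randn(1, 20, 1024, requires_grad=True)     # 20 learnable virtual tokens
inputs_embeds = torch.cat([soft_prompt, token_embeds], dim=1)  # (1, 32, 1024) fed to the frozen model
```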

The PEFT library supports several types of prompting methods (p-tuning, prefix tuning, prompt tuning) and you can learn more about how these methods work conceptually in the [Soft prompts](../conceptual_guides/prompting) guide. If you're interested in applying these methods to other tasks and use cases, take a look at our [notebook collection](https://huggingface.co/spaces/PEFT/soft-prompting)!

This guide will show you how to train a causal language model - with a soft prompting method - to *generate a classification* for whether a tweet is a complaint or not.

> [!TIP]
> Some familiarity with the general process of training a causal language model would be really helpful and allow you to focus on the soft prompting methods. If you're new, we recommend taking a look at the [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) guide first from the Transformers documentation. When you're ready, come back and see how easy it is to drop PEFT into your training!

Before you begin, make sure you have all the necessary libraries installed.

```bash
pip install -q peft transformers datasets
```

## Dataset

For this guide, you'll use the `twitter_complaints` subset of the [RAFT](https://huggingface.co/datasets/ought/raft) dataset. The `twitter_complaints` subset contains tweets labeled as `complaint` and `no complaint` and you can check out the [dataset viewer](https://huggingface.co/datasets/ought/raft/viewer/twitter_complaints) for a better idea of what the data looks like.

Use the [load_dataset](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/loading_methods#datasets.load_dataset) function to load the dataset and create a new `text_label` column so it is easier to understand what the `Label` values `1` and `2` mean.

```py
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files={
        "train": "hf://datasets/ought/raft@refs/convert/parquet/twitter_complaints/train/0000.parquet",
        "test": "hf://datasets/ought/raft@refs/convert/parquet/twitter_complaints/test/0000.parquet"
    }
)

classes = [k.replace("_", " ") for k in ds["train"].features["Label"].names]
ds = ds.map(
    lambda x: {"text_label": [classes[label] for label in x["Label"]]},
    batched=True,
    num_proc=1,
)
ds["train"][0]
{"Tweet text": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2, "text_label": "no complaint"}
```

Load a tokenizer, define the padding token to use, and determine the maximum length of the tokenized label.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id
target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes])
print(target_max_length)
```

Create a preprocessing function that tokenizes the tweet text and labels, pads the inputs and labels in each batch, creates an attention mask, and truncates sequences to the `max_length`. Then convert the `input_ids`, `attention_mask`, and `labels` to PyTorch tensors.

```py
import torch

max_length = 64

def preprocess_function(examples, text_column="Tweet text", label_column="text_label"):
    batch_size = len(examples[text_column])
    inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]]
    targets = [str(x) for x in examples[label_column]]
    model_inputs = tokenizer(inputs)
    labels = tokenizer(targets)
    classes = [k.replace("_", " ") for k in ds["train"].features["Label"].names]
    for i in range(batch_size):
        sample_input_ids = model_inputs["input_ids"][i]
        label_input_ids = labels["input_ids"][i]
        model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * (
            max_length - len(sample_input_ids)
        ) + sample_input_ids
        model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[
            "attention_mask"
        ][i]
        labels["input_ids"][i] = [-100] * (max_length - len(label_input_ids)) + label_input_ids
        model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length])
        model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length])
        labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length])
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```

Apply the preprocessing function to the entire dataset with the [map](https://huggingface.co/docs/datasets/v4.3.0/en/package_reference/main_classes#datasets.Dataset.map) function, and remove the unprocessed columns because the model won't need them.

```py
processed_ds = ds.map(
    preprocess_function,
    batched=True,
    num_proc=1,
    remove_columns=ds["train"].column_names,
    load_from_cache_file=False,
    desc="Running tokenizer on dataset",
)
```

Finally, create a training and evaluation [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). You can set `pin_memory=True` to speed up the data transfer to the GPU during training if the samples in your dataset are on a CPU.

```py
from torch.utils.data import DataLoader
from transformers import default_data_collator

train_ds = processed_ds["train"]
eval_ds = processed_ds["test"]

batch_size = 16

train_dataloader = DataLoader(train_ds, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
eval_dataloader = DataLoader(eval_ds, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
```

## Model

Now let's load a pretrained model to use as the base model for the soft prompt method. This guide uses the [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) model, but you can use any causal language model you want.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
```

### PEFT configuration and model

For any PEFT method, you'll need to create a configuration which contains all the parameters that specify how the PEFT method should be applied. Once the configuration is set up, pass it to the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function along with the base model to create a trainable [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel).

> [!TIP]
> Call the [print_trainable_parameters()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.print_trainable_parameters) method to compare the number of trainable parameters of [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) versus the number of parameters in the base model!

<hfoptions id="configurations">
<hfoption id="p-tuning">

[P-tuning](../conceptual_guides/prompting#p-tuning) adds a trainable embedding tensor where the prompt tokens can be added anywhere in the input sequence. Create a [PromptEncoderConfig](/docs/peft/v0.18.0.rc0/en/package_reference/p_tuning#peft.PromptEncoderConfig) with the task type, the number of virtual tokens to add and learn, and the hidden size of the encoder for learning the prompt parameters.

```py
from peft import PromptEncoderConfig, get_peft_model

peft_config = PromptEncoderConfig(task_type="CAUSAL_LM", num_virtual_tokens=20, encoder_hidden_size=128)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 300,288 || all params: 559,514,880 || trainable%: 0.05366935013417338"
```

</hfoption>
<hfoption id="prefix tuning">

[Prefix tuning](../conceptual_guides/prompting#prefix-tuning) adds task-specific parameters in all of the model layers, which are optimized by a separate feed-forward network. Create a [PrefixTuningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/prefix_tuning#peft.PrefixTuningConfig) with the task type and number of virtual tokens to add and learn.

```py
from peft import PrefixTuningConfig, get_peft_model

peft_config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=20)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 983,040 || all params: 560,197,632 || trainable%: 0.1754809274167014"
```

</hfoption>
<hfoption id="prompt tuning">

[Prompt tuning](../conceptual_guides/prompting#prompt-tuning) formulates all tasks as a *generation* task and it adds a task-specific prompt to the input which is updated independently. The `prompt_tuning_init_text` parameter specifies how to finetune the model (in this case, it is classifying whether tweets are complaints or not). For the best results, the `prompt_tuning_init_text` should have the same number of tokens that should be predicted. To do this, you can set `num_virtual_tokens` to the number of tokens of the `prompt_tuning_init_text`.

Create a [PromptTuningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/prompt_tuning#peft.PromptTuningConfig) with the task type, the initial prompt tuning text to train the model with, the number of virtual tokens to add and learn, and a tokenizer.

```py
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model

prompt_tuning_init_text = "Classify if the tweet is a complaint or no complaint.\n"
peft_config = PromptTuningConfig(
    task_type="CAUSAL_LM",
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=len(tokenizer(prompt_tuning_init_text)["input_ids"]),
    prompt_tuning_init_text=prompt_tuning_init_text,
    tokenizer_name_or_path="bigscience/bloomz-560m",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 8,192 || all params: 559,222,784 || trainable%: 0.0014648902430985358"
```

</hfoption>
</hfoptions>

### Training

Set up an optimizer and learning rate scheduler.

```py
from transformers import get_linear_schedule_with_warmup

lr = 3e-2
num_epochs = 50

optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=(len(train_dataloader) * num_epochs),
)
```

Move the model to the GPU and create a training loop that reports the loss and perplexity for each epoch.

```py
from tqdm import tqdm

device = "cuda"
model = model.to(device)

for epoch in range(num_epochs):
    model.train()
    total_loss = 0
    for step, batch in enumerate(tqdm(train_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        total_loss += loss.detach().float()
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

    model.eval()
    eval_loss = 0
    eval_preds = []
    for step, batch in enumerate(tqdm(eval_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**batch)
        loss = outputs.loss
        eval_loss += loss.detach().float()
        eval_preds.extend(
            tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
        )

    eval_epoch_loss = eval_loss / len(eval_dataloader)
    eval_ppl = torch.exp(eval_epoch_loss)
    train_epoch_loss = total_loss / len(train_dataloader)
    train_ppl = torch.exp(train_epoch_loss)
    print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
```

## Share your model

Once training is complete, you can upload your model to the Hub with the [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) method. You'll need to log in to your Hugging Face account first and enter your token when prompted.

```py
from huggingface_hub import notebook_login

notebook_login()

account = <your-hf-account-name>
peft_model_id = f"{account}/bloomz-560-m-peft-method"
model.push_to_hub(peft_model_id)
```

If you check the model file size in the repository, you’ll see that it is a lot smaller than a full-sized model!

<div class="flex flex-col justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
  <figcaption class="text-center">For example, the adapter weights for an opt-350m model stored on the Hub are only ~6MB compared to the full model size, which can be ~700MB.</figcaption>
</div>

## Inference

Let's load the model for inference and test it out on a tweet!

```py
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

i = 15
inputs = tokenizer(f'{text_column} : {ds["test"][i]["Tweet text"]} Label : ', return_tensors="pt")
print(ds["test"][i]["Tweet text"])
"@NYTsupport i have complained a dozen times &amp; yet my papers are still thrown FAR from my door. Why is this so hard to resolve?"
```

Call the [generate](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to generate the predicted classification label.

```py
with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
    print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
"['Tweet text : @NYTsupport i have complained a dozen times &amp; yet my papers are still thrown FAR from my door. Why is this so hard to resolve? Label : complaint']"
```


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/task_guides/prompt_based_methods.md" />

### RandLora: Full-rank parameter-efficient fine-tuning of large models
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/randlora.md

# RandLora: Full-rank parameter-efficient fine-tuning of large models 
[RandLora](https://huggingface.co/papers/2502.00987) is a parameter-efficient fine-tuning technique that is similar to [LoRA](https://huggingface.co/papers/2106.09685) and [VeRA](https://huggingface.co/papers/2310.11454) but performs full-rank updates to improve performance. RandLora can be particularly useful when adapting large models to hard tasks that require complex updates while preserving the parameter efficiency of LoRA. The full-rank update of RandLora is achieved by linearly scaling random bases. The random bases are a collection of multiple low-rank matrices such that the sum of their ranks is greater than or equal to the full rank of the parameter matrices. The trainable parameters of RandLora are two diagonal matrices (vectors) that get multiplied with the right-hand low-rank random bases, similarly to VeRA's update. To maintain low memory usage, RandLora uses a custom function that prevents storing unnecessary bases in memory for backpropagation.

RandLora presents the noteworthy difference that, contrary to other LoRA-like PEFT algorithms, increasing RandLora's random base rank decreases the amount of trainable parameters. Because the number of bases times the base rank is constant in RandLora, reducing the rank increases the number of random bases, and hence the number of base-specific trainable diagonal matrices.

Because reducing the rank of RandLora's random bases increases their number, RandLora can become slower to train than LoRA for very small ranks; typically, ranks below 4 result in a large increase in training time. This does not affect inference, though, as the RandLora adapters can be merged into the pretrained weight matrices.
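A minimal sketch of this rank/parameter trade-off, assuming `facebook/opt-125m` with its attention projections `q_proj` and `v_proj` as target modules (the exact counts depend on the model):

```py
from transformers import AutoModelForCausalLM
from peft import RandLoraConfig, get_peft_model

# Lower rank -> more random bases -> more base-specific trainable diagonal parameters
for rank in (32, 4):
    base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    config = RandLoraConfig(r=rank, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(base_model, config)
    print(f"r={rank}")
    model.print_trainable_parameters()
```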

RandLora additionally supports training with sparse, ternary random bases (only containing -1, 0 and 1). These bases are described in [Bingham et al.](https://cs-people.bu.edu/evimaria/cs565/kdd-rp.pdf) and [Ping et al.](https://hastie.su.domains/Papers/Ping/KDD06_rp.pdf) and could theoretically be used to reduce compute by performing aggregations instead of matrix multiplications to create the weight update, although this is not currently supported. While it does not currently reduce compute, using sparse random bases in RandLora can reduce overfitting in some cases. For users interested in using sparse ternary bases, the `sparse` option is recommended over the `very_sparse` one, which can reduce performance.

Similarly to VeRA, when saving RandLora's parameters, it's possible to eschew storing the low-rank matrices by setting `save_projection=False` on the `RandLoraConfig`. In that case, these matrices will be restored from the fixed random seed given by the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but we cannot guarantee reproducibility on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).
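For example, a minimal configuration sketch (the target modules are placeholders) that omits the random bases from the checkpoint and regenerates them from the PRNG key on load:

```py
from peft import RandLoraConfig

config = RandLoraConfig(
    r=32,
    target_modules=["q_proj", "v_proj"],
    projection_prng_key=0,   # seed used to regenerate basis_A / basis_B at load time
    save_projection=False,   # default is True, which guarantees reproducibility across systems
)
```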

As in VeRA, to handle different shapes of adapted layers, RandLora initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.
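The shape handling can be pictured with plain tensors; this is only an illustration of the slicing logic, not PEFT's actual implementation:

```py
import torch

rank = 32
# Shared bases sized for the largest dimensions over all adapted layers:
# layers of shape (100, 20) and (80, 50) -> basis_A: (rank, 50), basis_B: (100, rank)
basis_A = torch.randn(rank, 50)
basis_B = torch.randn(100, rank)

# Adapting the (80, 50) layer only uses the sub-slices that fit its shape
A_sub = basis_A[:, :50]  # (rank, 50)
B_sub = basis_B[:80, :]  # (80, rank)
print(B_sub.shape, A_sub.shape)
```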

RandLora currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

> Low-Rank Adaptation (LoRA) and its variants have shown impressive results in reducing the number of trainable parameters and memory requirements of large transformer networks while maintaining fine-tuning performance. The low-rank nature of the weight update inherently limits the representation power of fine-tuned models, however, thus potentially compromising performance on complex tasks. This raises a critical question: when a performance gap between LoRA and standard fine-tuning is observed, is it due to the reduced number of trainable parameters or the rank deficiency?
This paper aims to answer this question by introducing RandLora, a parameter-efficient method that performs full-rank updates using a learned linear combinations of low-rank, non-trainable random matrices. Our method limits the number of trainable parameters by restricting optimization to diagonal scaling matrices applied to the fixed random matrices. This allows us to effectively overcome the low-rank limitations while maintaining parameter and memory efficiency during training. Through extensive experimentation across vision, language, and vision-language benchmarks, we systematically evaluate the limitations of LoRA and existing random basis methods. Our findings reveal that full-rank updates are beneficial across vision and language tasks individually, and even more so for vision-language tasks, where RandLora significantly reduces---and sometimes eliminates---the performance gap between standard fine-tuning and LoRA, demonstrating its efficacy.

## RandLoraConfig[[peft.RandLoraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.RandLoraConfig</name><anchor>peft.RandLoraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/randlora/config.py#L24</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 32"}, {"name": "target_modules", "val": ": typing.Union[list[str], str, NoneType] = None"}, {"name": "projection_prng_key", "val": ": int = 0"}, {"name": "save_projection", "val": ": bool = True"}, {"name": "sparse", "val": ": bool = False"}, {"name": "very_sparse", "val": ": bool = False"}, {"name": "randlora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "randlora_alpha", "val": ": int = 640"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": typing.Optional[list[str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": typing.Union[list[int], int, NoneType] = None"}, {"name": "layers_pattern", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **r** (`int`, *optional*, defaults to `32`) --
  RandLora's random basis rank dimension. Contrary to Lora, this parameter is inversely proportional to the
  amount of trainable parameters as reducing it increases trainable parameters.
- **target_modules** (`Union[list[str], str]`) --
  The names of the modules to apply RandLora to. Only linear layers are supported.
- **projection_prng_key** (`int`) --
  RandLora PRNG init key. Used for initialising basis_A and basis_B for new models or when loading a
  checkpoint that did not include these projections. Defaults to `0`.
- **save_projection** (`bool`) --
  Whether to save the global basis_A / basis_B random basis in the state dict alongside per layer lambda /
  gamma diagonal matrices. This will increase the size of the checkpoint, but guarantee that we can reload
  the checkpoint on all system configurations. Defaults to `True`.
- **sparse** (`bool`) --
  Whether to use sparse random bases as described in the RandLora paper. The bases are ternary sparse bases
  (only containing -1, 0 and 1) where the attribution probability is 1/6 for -1 and 1 and 2/3 for 0. These
  sparse matrices aim to be used for matmul-free computation in the future, see
  https://huggingface.co/papers/2406.02528v1. The current implementation is, however, a proof of concept where
  the sparseness is not used to improve speed or memory usage. Using sparse matrices typically does not
  reduce performance and can even help reduce overfitting. Defaults to `False`.
- **very_sparse** (`bool`) --
  Whether to use highly sparse random bases as described in the RandLora paper. The very sparse bases are
  ternary sparse bases (only containing -1, 0 and 1); given a matrix with smallest dimension d, the
  attribution probability is 1/√d for -1 and 1, and 1 - 2/√d for 0. Using these sparse matrices can further
  reduce overfitting over the `sparse` alternative but will most likely decrease performance as a result.
  Use carefully. Defaults to `False`.
- **randlora_dropout** (`float`) --
  The dropout probability for RandLora layers.
- **randlora_alpha** (`float`) --
  The scaling coefficient for RandLora layers, this would typically be 20 times the rank. Because the
  `randlora_alpha` coefficient is large by default, it can lead to numerical instabilities especially when
  learning rates are high. If training is unstable, consider reducing the learning rate or the
  `randlora_alpha` coefficient.
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weights like (fan_in, fan_out). For example, gpt-2 uses
  `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
- **bias** (`str`) --
  Bias type. Can be 'none', 'all' or 'randlora_only'. If 'all' or 'randlora_only', the corresponding biases
  will be updated during training. Be aware that this means that, even when disabling the adapters, the model
  will not produce the same output as the base model would have without adaptation.
- **modules_to_save** (`list[str]`) --
  list of modules apart from RandLora layers to be set as trainable and saved in the final checkpoint.
- **init_weights** (`bool`) --
  Whether to initialize the weights of the RandLora layers with their default initialization. Don't change
  this setting, except if you know exactly what you're doing.
- **layers_to_transform** (`Union[list[int],int]`) --
  The layer indexes to transform, if this argument is specified, it will apply the RandLora transformations
  on the layer indexes that are specified in this list. If a single integer is passed, it will apply the
  RandLora transformations on the layer at this index.
- **layers_pattern** (`str`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None` and if the layer
  pattern is not in the common layers pattern.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [RandLoraModel](/docs/peft/v0.18.0.rc0/en/package_reference/randlora#peft.RandLoraModel).

Paper: https://huggingface.co/papers/2502.00987.




</div>

## RandLoraModel[[peft.RandLoraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.RandLoraModel</name><anchor>peft.RandLoraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/randlora/model.py#L67</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **config** ([RandLoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/randlora#peft.RandLoraConfig)) -- The configuration of the RandLora model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The RandLora model.</retdesc></docstring>

Creates a RandLoRA model from a pretrained transformers model.







<ExampleCodeBlock anchor="peft.RandLoraModel.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import RandLoraConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> config = RandLoraConfig(r=32)
>>> model = get_peft_model(base_model, config)
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([RandLoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/randlora#peft.RandLoraConfig)): The configuration of the RandLora model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/randlora.md" />

### LoKr
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/lokr.md

# LoKr

Low-Rank Kronecker Product ([LoKr](https://hf.co/papers/2309.14859)) is a LoRA-variant method that approximates a large weight matrix with two low-rank matrices and combines them with the Kronecker product. LoKr also provides an optional third low-rank matrix for better control during fine-tuning.
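The intuition behind the factorization can be sketched with `torch.kron`; this only illustrates why the Kronecker product is parameter-efficient, it is not LoKr's exact update rule:

```py
import torch

# Two small factor matrices
W1 = torch.randn(4, 4)
W2 = torch.randn(8, 16)

# Their Kronecker product has shape (4*8, 4*16) = (32, 64), so a large weight
# update can be parameterized by far fewer values than the full matrix
delta_W = torch.kron(W1, W2)
print(delta_W.shape)            # torch.Size([32, 64])
print(W1.numel() + W2.numel())  # 144 stored values instead of 2048
```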

## LoKrConfig[[peft.LoKrConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LoKrConfig</name><anchor>peft.LoKrConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lokr/config.py#L24</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "rank_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "alpha_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "r", "val": ": int = 8"}, {"name": "alpha", "val": ": int = 8"}, {"name": "rank_dropout", "val": ": float = 0.0"}, {"name": "module_dropout", "val": ": float = 0.0"}, {"name": "use_effective_conv2d", "val": ": bool = False"}, {"name": "decompose_both", "val": ": bool = False"}, {"name": "decompose_factor", "val": ": int = -1"}, {"name": "rank_dropout_scale", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": Union[bool, Literal['lycoris']] = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  LoKr rank.
- **alpha** (`int`) --
  The alpha parameter for LoKr scaling.
- **rank_dropout** (`float`) --
  The dropout probability for rank dimension during training.
- **module_dropout** (`float`) --
  The dropout probability for disabling LoKr modules during training.
- **use_effective_conv2d** (`bool`) --
  Use parameter effective decomposition for Conv2d (and Conv1d) with ksize > 1 ("Proposition 3" from FedPara
  paper).
- **decompose_both** (`bool`) --
  Perform rank decomposition of left kronecker product matrix.
- **decompose_factor** (`int`) --
  Kronecker product decomposition factor.
- **rank_dropout_scale** (`bool`) --
  Whether to scale the rank dropout while training, defaults to `False`.
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen,
  excluding the output layer. If this is not specified, modules will be chosen according to the model
  architecture. If the architecture is not known, an error will be raised -- in this case, you should specify
  the target modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **init_weights** (`bool`) --
  Whether to perform initialization of adapter weights. This defaults to `True`. Use "lycoris" to initialize
  weights in the style of the LYCORIS repository. Passing `False` is discouraged.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
- **rank_pattern** (`dict`) --
  The mapping from layer names or regexp expression to ranks which are different from the default rank
  specified by `r`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **alpha_pattern** (`dict`) --
  The mapping from layer names or regexp expression to alphas which are different from the default alpha
  specified by `alpha`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **modules_to_save** (`Optional[List[str]]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration class of [LoKrModel](/docs/peft/v0.18.0.rc0/en/package_reference/lokr#peft.LoKrModel).




</div>

## LoKrModel[[peft.LoKrModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LoKrModel</name><anchor>peft.LoKrModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lokr/model.py#L27</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to which the adapter tuner layers will be attached.
- **config** ([LoKrConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lokr#peft.LoKrConfig)) -- The configuration of the LoKr model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The LoKr model.</retdesc></docstring>

Creates a Low-Rank Kronecker Product model from a pretrained model. The original method is partially described in
https://huggingface.co/papers/2108.06098 and in https://huggingface.co/papers/2309.14859. The current implementation
heavily borrows from
https://github.com/KohakuBlueleaf/LyCORIS/blob/eb460098187f752a5d66406d3affade6f0a07ece/lycoris/modules/lokr.py







<ExampleCodeBlock anchor="peft.LoKrModel.example">

Example:
```py
>>> from diffusers import StableDiffusionPipeline
>>> from peft import LoKrModel, LoKrConfig

>>> config_te = LoKrConfig(
...     r=8,
...     alpha=32,
...     target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
...     rank_dropout=0.0,
...     module_dropout=0.0,
...     init_weights=True,
... )
>>> config_unet = LoKrConfig(
...     r=8,
...     alpha=32,
...     target_modules=[
...         "proj_in",
...         "proj_out",
...         "to_k",
...         "to_q",
...         "to_v",
...         "to_out.0",
...         "ff.net.0.proj",
...         "ff.net.2",
...     ],
...     rank_dropout=0.0,
...     module_dropout=0.0,
...     init_weights=True,
...     use_effective_conv2d=True,
... )

>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = LoKrModel(model.text_encoder, config_te, "default")
>>> model.unet = LoKrModel(model.unet, config_unet, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** (`~torch.nn.Module`) -- The model to be adapted.
- **peft_config** ([LoKrConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lokr#peft.LoKrConfig)): The configuration of the LoKr model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/lokr.md" />

### Tuners
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/tuners.md

# Tuners

A tuner (or adapter) is a module that can be plugged into a `torch.nn.Module`. `BaseTuner` is the base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. `BaseTunerLayer` is a base class for adapter layers. It offers methods and attributes for managing adapters, such as activating and disabling adapters.
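As a minimal sketch of how these pieces surface in practice (assuming a LoRA adapter on `facebook/opt-125m`; other tuners behave the same way), the tuner instance is reachable through `model.base_model`:

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = get_peft_model(base_model, LoraConfig(target_modules=["q_proj", "v_proj"]))

tuner = model.base_model                # a LoraModel, i.e. a BaseTuner subclass
print(tuner.targeted_module_names[:2])  # which modules were actually adapted
tuner.disable_adapter_layers()          # output now matches the base model
tuner.enable_adapter_layers()
```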

## BaseTuner[[peft.tuners.tuners_utils.BaseTuner]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.tuners.tuners_utils.BaseTuner</name><anchor>peft.tuners.tuners_utils.BaseTuner</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L212</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to which the adapter tuner layers will be attached.
- **forward** (`Callable`) --
  The forward method of the model.
- **peft_config** (`Union[PeftConfig, dict[str, PeftConfig]]`) --
  The adapter configuration object, it should be a dictionary of `str` to `PeftConfig` objects. One can also
  pass a PeftConfig object and a new adapter will be created with the default name `adapter` or create a new
  dictionary with a key `adapter_name` and a value of that peft config.
- **config** (`dict[str, Any]`) --
  The model configuration object, it should be a dictionary of `str` to `Any` objects.
- **targeted_module_names** (`list[str]`) --
  The list of module names that were actually adapted. Can be useful to inspect if you want to quickly
  double-check that the `config.target_modules` were specified correctly.
- **targeted_parameter_names** (`list[str]`) --
  The list of parameter names that were actually adapted. Can be useful to inspect if you want to quickly
  double-check that the `config.target_parameters` were specified correctly.
- **prefix** (`str`) --
  The PEFT-method specific unique prefix. E.g. `"lora_"` for LoRA.</paramsdesc><paramgroups>0</paramgroups></docstring>

A base tuner model that provides the common methods and attributes for all tuners that are injectable into a
torch.nn.Module

For adding a new Tuner class, one needs to overwrite the following methods:

- **_prepare_adapter_config**:
  A private method to eventually prepare the adapter config, for example in case the field `target_modules` is
  missing.
- **_create_and_replace**:
  A private method to create and replace the target module with the adapter module.
- **_check_target_module_exists**:
  A private helper method to check if the passed module's key name matches any of the target modules in the
  adapter_config.

The easiest way is to check what is done in the `peft.tuners.lora.LoraModel` class.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_adapter</name><anchor>peft.tuners.tuners_utils.BaseTuner.delete_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L472</source><parameters>[{"name": "adapter_name", "val": ": str"}]</parameters><paramsdesc>- **adapter_name** (str) -- Name of the adapter to be deleted.</paramsdesc><paramgroups>0</paramgroups></docstring>

Deletes an existing adapter.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_adapter_layers</name><anchor>peft.tuners.tuners_utils.BaseTuner.disable_adapter_layers</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L448</source><parameters>[]</parameters></docstring>

Disable all adapters in-place.

When disabling all adapters, the model output corresponds to the output of the base model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_adapter_layers</name><anchor>peft.tuners.tuners_utils.BaseTuner.enable_adapter_layers</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L465</source><parameters>[]</parameters></docstring>

Enable all adapters in-place


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_config</name><anchor>peft.tuners.tuners_utils.BaseTuner.get_model_config</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1142</source><parameters>[{"name": "model", "val": ": nn.Module"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  Model to get the config from.
- **default** (`dict|None`, *optional*) --
  What to return if model does not have a config attribute.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method gets the config from a model in dictionary form. If the model does not have a config attribute, this
method returns a default config.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>inject_adapter</name><anchor>peft.tuners.tuners_utils.BaseTuner.inject_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L664</source><parameters>[{"name": "model", "val": ": nn.Module"}, {"name": "adapter_name", "val": ": str"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model to be tuned.
- **adapter_name** (`str`) --
  The adapter name.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.
- **state_dict** (`dict`, *optional*, defaults to `None`) --
  If a state_dict is passed here, the adapters will be injected based on the entries of the state_dict.
  This can be useful when the exact `target_modules` of the PEFT method is unknown, for instance because
  the checkpoint was created without meta data. Note that the values from the state_dict are not used,
  only the keys are used to determine the correct layers that should be adapted.</paramsdesc><paramgroups>0</paramgroups></docstring>

Creates adapter layers and replaces the target modules with the adapter layers. This method is called under the
hood by `peft.mapping.get_peft_model` if a non-prompt tuning adapter class is passed.

The corresponding PEFT config is directly retrieved from the `peft_config` attribute of the BaseTuner class.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>merge_adapter</name><anchor>peft.tuners.tuners_utils.BaseTuner.merge_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1075</source><parameters>[{"name": "adapter_names", "val": ": Optional[list[str]] = None"}, {"name": "safe_merge", "val": ": bool = False"}]</parameters><paramsdesc>- **adapter_names** (`list[str]`, *optional*) --
  The list of adapter names that should be merged. If `None`, all active adapters will be merged.
  Defaults to `None`.
- **safe_merge** (`bool`, *optional*) --
  If `True`, the merge operation will be performed in a copy of the original weights and check for NaNs
  before merging the weights. This is useful if you want to check if the merge operation will produce
  NaNs. Defaults to `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method merges the adapter layers into the base model.

Merging adapters can lead to a speed up of the forward pass. A copy of the adapter weights is still kept in
memory, which is required to unmerge the adapters. In order to merge the adapter weights without keeping them
in memory, please call `merge_and_unload`.
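A minimal sketch of the merge/unmerge round trip, assuming a LoRA adapter on `facebook/opt-125m`:

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
    LoraConfig(target_modules=["q_proj", "v_proj"]),
)

tuner = model.base_model  # the underlying BaseTuner (here a LoraModel)
tuner.merge_adapter()     # fold the adapter weights into the base weights
# ... run inference with the merged weights ...
tuner.unmerge_adapter()   # restore the separate adapter weights
```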




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>merge_and_unload</name><anchor>peft.tuners.tuners_utils.BaseTuner.merge_and_unload</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L607</source><parameters>[{"name": "progressbar", "val": ": bool = False"}, {"name": "safe_merge", "val": ": bool = False"}, {"name": "adapter_names", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **progressbar** (`bool`) --
  whether to show a progressbar indicating the unload and merge process (default: False).
- **safe_merge** (`bool`) --
  whether to activate the safe merging check to check if there is any potential Nan in the adapter
  weights.
- **adapter_names** (`List[str]`, *optional*) --
  The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults
  to `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method merges the adapter layers into the base model.

This is needed if someone wants to use the base model as a standalone model. The returned model has the same
architecture as the original base model.

It is important to assign the returned model to a variable and use it, this is not an in-place operation!



<ExampleCodeBlock anchor="peft.tuners.tuners_utils.BaseTuner.merge_and_unload.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel

>>> model_id = ...
>>> base_model = AutoModelForCausalLM.from_pretrained(model_id)
>>> peft_model_id = ...
>>> model = PeftModel.from_pretrained(base_model, peft_model_id)
>>> merged_model = model.merge_and_unload()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapter</name><anchor>peft.tuners.tuners_utils.BaseTuner.set_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1128</source><parameters>[{"name": "adapter_name", "val": ": str | list[str]"}, {"name": "inference_mode", "val": ": bool = False"}]</parameters><paramsdesc>- **adapter_name** (str, list[str]) --
  The name(s) of the adapter(s) to set as active
- **inference_mode** (bool, optional) --
  Whether the activated adapter should be frozen (i.e. `requires_grad=False`). Default is False.</paramsdesc><paramgroups>0</paramgroups></docstring>
Set the active adapter(s).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_auxiliary_adapters</name><anchor>peft.tuners.tuners_utils.BaseTuner.set_auxiliary_adapters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1113</source><parameters>[{"name": "adapter_name", "val": ": str | list[str]"}, {"name": "inference_mode", "val": ": bool"}]</parameters><paramsdesc>- **adapter_name** (`str` or `list[str]`) --
  The name(s) of the adapter(s) to be set as active. The adapters must be loaded first.
- **inference_mode** (bool, optional) --
  Whether the activated adapter should be frozen (i.e. `requires_grad=False`). Default is False.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the active adapter(s) on auxiliary modules.

If the subclass (e.g. `LoraModel`) supports auxiliary modules like `modules_to_save`, it should call this
method in `set_adapter` to ensure that those auxiliary modules are being set correctly.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_requires_grad</name><anchor>peft.tuners.tuners_utils.BaseTuner.set_requires_grad</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L488</source><parameters>[{"name": "adapter_names", "val": ": str | Sequence[str]"}, {"name": "requires_grad", "val": ": bool = True"}]</parameters><paramsdesc>- **adapter_name** (`str` or `Sequence[str]`) --
  The name of the adapter(s) whose gradients should be enabled/disabled.
- **requires_grad** (`bool`, *optional*) --
  Whether to enable (`True`, default) or disable (`False`).</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable or disable gradients on the given adapter(s).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload</name><anchor>peft.tuners.tuners_utils.BaseTuner.unload</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L645</source><parameters>[]</parameters></docstring>

Return the base model by removing all the PEFT modules.

It is important to assign the returned model to a variable and use it, this is not an in-place operation!


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unmerge_adapter</name><anchor>peft.tuners.tuners_utils.BaseTuner.unmerge_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1104</source><parameters>[]</parameters></docstring>

This method unmerges all merged adapter layers from the base model.


</div></div>

## BaseTunerLayer[[peft.tuners.tuners_utils.BaseTunerLayer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.tuners.tuners_utils.BaseTunerLayer</name><anchor>peft.tuners.tuners_utils.BaseTunerLayer</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1263</source><parameters>[]</parameters><paramsdesc>- **is_pluggable** (`bool`, *optional*) --
  Whether the adapter layer can be plugged into any PyTorch module
- **active_adapters** (`Union[list[str], str]`, *optional*) --
  The name of the active adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>

A tuner layer mixin that provides the common methods and attributes for all tuners.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_adapter</name><anchor>peft.tuners.tuners_utils.BaseTunerLayer.delete_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1452</source><parameters>[{"name": "adapter_name", "val": ": str"}]</parameters><paramsdesc>- **adapter_name** (`str`) -- The name of the adapter to delete</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete an adapter from the layer

This should be called on all adapter layers, or else we will get an inconsistent state.

This method will also set a new active adapter if the deleted adapter was an active adapter. It is important
that the new adapter is chosen in a deterministic way, so that the same adapter is chosen on all layers.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_adapters</name><anchor>peft.tuners.tuners_utils.BaseTunerLayer.enable_adapters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1395</source><parameters>[{"name": "enabled", "val": ": bool"}]</parameters><paramsdesc>- **enabled** (bool) -- True to enable adapters, False to disable adapters</paramsdesc><paramgroups>0</paramgroups></docstring>
Toggle the enabling and disabling of adapters

Takes care of setting the requires_grad flag for the adapter weights.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_base_layer</name><anchor>peft.tuners.tuners_utils.BaseTunerLayer.get_base_layer</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1288</source><parameters>[]</parameters></docstring>

(Recursively) get the base_layer.

This is necessary for the case that the tuner layer wraps another tuner layer.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapter</name><anchor>peft.tuners.tuners_utils.BaseTunerLayer.set_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1413</source><parameters>[{"name": "adapter_names", "val": ": str | list[str]"}, {"name": "inference_mode", "val": ": bool = False"}]</parameters><paramsdesc>- **adapter_name** (`str` or `list[str]`) --
  The name(s) of the adapter(s) to set as active.
- **inference_mode** (bool, optional) --
  Whether the activated adapter should be frozen (i.e. `requires_grad=False`). Default is False.</paramsdesc><paramgroups>0</paramgroups></docstring>
Set the active adapter(s).

Additionally, this function will set the specified adapter to trainable (i.e., requires_grad=True) unless
inference_mode is True.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_requires_grad</name><anchor>peft.tuners.tuners_utils.BaseTunerLayer.set_requires_grad</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1489</source><parameters>[{"name": "adapter_names", "val": ": str | Sequence[str]"}, {"name": "requires_grad", "val": ": bool = True"}]</parameters><paramsdesc>- **adapter_name** (`str` or `Sequence[str]`) --
  The name of the adapter(s) whose gradients should be enabled/disabled.
- **requires_grad** (`bool`, *optional*) --
  Whether to enable (`True`, default) or disable (`False`).</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable or disable gradients on the given adapter(s).




</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/tuners.md" />

### IA3
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/ia3.md

# IA3

Infused Adapter by Inhibiting and Amplifying Inner Activations, or [IA3](https://hf.co/papers/2205.05638), is a method that adds three learned vectors to rescale the keys and values of the self-attention and encoder-decoder attention layers, and the intermediate activation of the position-wise feed-forward network.
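As a minimal sketch, assuming `facebook/opt-125m`, whose attention and feed-forward projections are named `k_proj`, `v_proj`, and `fc2`:

```py
from transformers import AutoModelForCausalLM
from peft import IA3Config, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "fc2"],
    feedforward_modules=["fc2"],  # must be a subset of target_modules
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```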

The abstract from the paper is:

*Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available*.

## IA3Config[[peft.IA3Config]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.IA3Config</name><anchor>peft.IA3Config</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/ia3/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "feedforward_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_ia3_weights", "val": ": bool = True"}]</parameters><paramsdesc>- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen,
  excluding the output layer. If this is not specified, modules will be chosen according to the model
  architecture. If the architecture is not known, an error will be raised -- in this case, you should specify
  the target modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **feedforward_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to be treated as feedforward modules, as in the original paper. These modules will
  have (IA)³ vectors multiplied to the input, instead of the output. `feedforward_modules` must be a name or
  a subset of names present in `target_modules`.
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weights like (fan_in, fan_out). For example, gpt-2 uses
  `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
- **modules_to_save** (`Optional[List[str]]`) --
  List of modules apart from (IA)³ layers to be set as trainable and saved in the final checkpoint.
- **init_ia3_weights** (`bool`) --
  Whether to initialize the vectors in the (IA)³ layers, defaults to `True`. Setting this to `False` is
  discouraged.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [IA3Model](/docs/peft/v0.18.0.rc0/en/package_reference/ia3#peft.IA3Model).




</div>

## IA3Model[[peft.IA3Model]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.IA3Model</name><anchor>peft.IA3Model</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/ia3/model.py#L36</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **config** ([IA3Config](/docs/peft/v0.18.0.rc0/en/package_reference/ia3#peft.IA3Config)) -- The configuration of the (IA)^3 model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The (IA)^3 model.</retdesc></docstring>

Creates an Infused Adapter by Inhibiting and Amplifying Inner Activations ((IA)^3) model from a pretrained
transformers model. The method is described in detail in https://huggingface.co/papers/2205.05638







<ExampleCodeBlock anchor="peft.IA3Model.example">

Example:

```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import IA3Model, IA3Config

>>> config = IA3Config(
...     peft_type="IA3",
...     task_type="SEQ_2_SEQ_LM",
...     target_modules=["k", "v", "w0"],
...     feedforward_modules=["w0"],
... )

>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> ia3_model = IA3Model(model, config, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** (`IA3Config`): The configuration of the (IA)^3 model.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_weighted_adapter</name><anchor>peft.IA3Model.add_weighted_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/ia3/model.py#L266</source><parameters>[{"name": "adapters", "val": ": list[str]"}, {"name": "weights", "val": ": list[float]"}, {"name": "adapter_name", "val": ": str"}]</parameters><paramsdesc>- **adapters** (`list`) --
  List of adapter names to be merged.
- **weights** (`list`) --
  List of weights for each adapter.
- **adapter_name** (`str`) --
  Name of the new adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method adds a new adapter by merging the given adapters with the given weights.




</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/ia3.md" />

### PEFT types
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/peft_types.md

# PEFT types

[PeftType](/docs/peft/v0.18.0.rc0/en/package_reference/peft_types#peft.PeftType) includes the supported adapters in PEFT, and [TaskType](/docs/peft/v0.18.0.rc0/en/package_reference/peft_types#peft.TaskType) includes PEFT-supported tasks.
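Both enums are string enums, so they can be used interchangeably with plain strings when building a config. A minimal sketch with LoRA:

```py
from peft import LoraConfig, PeftType, TaskType

config = LoraConfig(task_type=TaskType.CAUSAL_LM)
print(config.peft_type)  # PeftType.LORA
print(config.task_type)  # TaskType.CAUSAL_LM

# String enums compare equal to their string values
assert config.peft_type == PeftType.LORA == "LORA"
assert config.task_type == TaskType.CAUSAL_LM == "CAUSAL_LM"
```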

## PeftType[[peft.PeftType]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftType</name><anchor>peft.PeftType</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/peft_types.py#L19</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enum class for the different types of adapters in PEFT.

Supported PEFT types:
- PROMPT_TUNING
- MULTITASK_PROMPT_TUNING
- P_TUNING
- PREFIX_TUNING
- LORA
- ADALORA
- BOFT
- ADAPTION_PROMPT
- IA3
- LOHA
- LOKR
- OFT
- XLORA
- POLY
- LN_TUNING
- VERA
- FOURIERFT
- HRA
- BONE
- MISS
- RANDLORA
- SHIRA
- C3A
- ROAD
- WAVEFT
- OSF
- DELORA


</div>

## TaskType[[peft.TaskType]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.TaskType</name><anchor>peft.TaskType</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/peft_types.py#L85</source><parameters>[{"name": "value", "val": ""}, {"name": "names", "val": " = None"}, {"name": "module", "val": " = None"}, {"name": "qualname", "val": " = None"}, {"name": "type", "val": " = None"}, {"name": "start", "val": " = 1"}]</parameters></docstring>

Enum class for the different types of tasks supported by PEFT.

Overview of the supported task types:
- SEQ_CLS: Text classification.
- SEQ_2_SEQ_LM: Sequence-to-sequence language modeling.
- CAUSAL_LM: Causal language modeling.
- TOKEN_CLS: Token classification.
- QUESTION_ANS: Question answering.
- FEATURE_EXTRACTION: Feature extraction. Provides the hidden states which can be used as embeddings or features
  for downstream tasks.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/peft_types.md" />

### P-tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/p_tuning.md

# P-tuning

[P-tuning](https://hf.co/papers/2103.10385) adds trainable prompt embeddings to the input that are optimized by a prompt encoder to find a better prompt, eliminating the need to manually design prompts. The prompt tokens can be added anywhere in the input sequence, and p-tuning also introduces anchor tokens to improve performance.

The abstract from the paper is:

*While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning -- which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64\% (P@1) of world knowledge without any additional text provided during test time, which substantially improves the previous best by 20+ percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance to similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.*.

## PromptEncoderConfig[[peft.PromptEncoderConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PromptEncoderConfig</name><anchor>peft.PromptEncoderConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/p_tuning/config.py#L29</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "num_virtual_tokens", "val": ": int = None"}, {"name": "token_dim", "val": ": int = None"}, {"name": "num_transformer_submodules", "val": ": Optional[int] = None"}, {"name": "num_attention_heads", "val": ": Optional[int] = None"}, {"name": "num_layers", "val": ": Optional[int] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "encoder_reparameterization_type", "val": ": typing.Union[str, peft.tuners.p_tuning.config.PromptEncoderReparameterizationType] = <PromptEncoderReparameterizationType.MLP: 'MLP'>"}, {"name": "encoder_hidden_size", "val": ": int = None"}, {"name": "encoder_num_layers", "val": ": int = 2"}, {"name": "encoder_dropout", "val": ": float = 0.0"}]</parameters><paramsdesc>- **encoder_reparameterization_type** (Union[`PromptEncoderReparameterizationType`, `str`]) --
  The type of reparameterization to use.
- **encoder_hidden_size** (`int`) -- The hidden size of the prompt encoder.
- **encoder_num_layers** (`int`) -- The number of layers of the prompt encoder.
- **encoder_dropout** (`float`) -- The dropout probability of the prompt encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [PromptEncoder](/docs/peft/v0.18.0.rc0/en/package_reference/p_tuning#peft.PromptEncoder).




</div>

## PromptEncoder[[peft.PromptEncoder]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PromptEncoder</name><anchor>peft.PromptEncoder</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/p_tuning/model.py#L24</source><parameters>[{"name": "config", "val": ""}]</parameters><paramsdesc>- **config** ([PromptEncoderConfig](/docs/peft/v0.18.0.rc0/en/package_reference/p_tuning#peft.PromptEncoderConfig)) -- The configuration of the prompt encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.



<ExampleCodeBlock anchor="peft.PromptEncoder.example">

Example:

```py
>>> from peft import PromptEncoder, PromptEncoderConfig

>>> config = PromptEncoderConfig(
...     peft_type="P_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_reparameterization_type="MLP",
...     encoder_hidden_size=768,
... )

>>> prompt_encoder = PromptEncoder(config)
```

</ExampleCodeBlock>

**Attributes**:
- **embedding** (`torch.nn.Embedding`) -- The embedding layer of the prompt encoder.
- **mlp_head** (`torch.nn.Sequential`) -- The MLP head of the prompt encoder if `inference_mode=False`.
- **lstm_head** (`torch.nn.LSTM`) -- The LSTM head of the prompt encoder if `inference_mode=False` and
  `encoder_reparameterization_type="LSTM"`.
- **token_dim** (`int`) -- The hidden embedding dimension of the base transformer model.
- **input_size** (`int`) -- The input size of the prompt encoder.
- **output_size** (`int`) -- The output size of the prompt encoder.
- **hidden_size** (`int`) -- The hidden size of the prompt encoder.
- **total_virtual_tokens** (`int`) -- The total number of virtual tokens of the prompt encoder.
- **encoder_type** (Union[`PromptEncoderReparameterizationType`, `str`]) -- The encoder type of the prompt
  encoder.


Input shape: (`batch_size`, `total_virtual_tokens`)

Output shape: (`batch_size`, `total_virtual_tokens`, `token_dim`)


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/p_tuning.md" />

### FourierFT: Discrete Fourier Transformation Fine-Tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/fourierft.md

# FourierFT: Discrete Fourier Transformation Fine-Tuning

[FourierFT](https://huggingface.co/papers/2405.03003) is a parameter-efficient fine-tuning technique that leverages Discrete Fourier Transform to compress the model's tunable weights. This method outperforms LoRA on the GLUE benchmark and common ViT classification tasks while using far fewer parameters.

FourierFT currently has the following constraints:

- Only `nn.Linear` layers are supported.
- Quantized layers are not supported.

If these constraints don't work for your use case, consider other methods instead.

The abstract from the paper is:

> Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices A and B to represent the weight change, i.e., Delta W=BA. Despite LoRA's progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by enjoying the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats Delta W as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we implement the inverse discrete Fourier transform to recover Delta W. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification. For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA's 33.5M.

## FourierFTConfig[[peft.FourierFTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.FourierFTConfig</name><anchor>peft.FourierFTConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/fourierft/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "n_frequency", "val": ": int = 1000"}, {"name": "scaling", "val": ": float = 150.0"}, {"name": "random_loc_seed", "val": ": Optional[int] = 777"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "n_frequency_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "init_weights", "val": ": bool = False"}]</parameters><paramsdesc>- **n_frequency** (`int`) --
  Num of learnable frequencies for the Discrete Fourier Transform. 'n_frequency' is an integer that is
  greater than 0 and less than or equal to d^2 (assuming the weight W has dimensions of d by d).
  Additionally, it is the number of trainable parameters required to update each delta W weight.
  'n_frequency' affects the performance and efficiency of the method. Specifically, it has little impact on
  training speed, but higher values typically result in larger GPU memory costs and better accuracy. With the
  same `target_modules`, the number of parameters of LoRA is (2*d*r/n_frequency) times that of FourierFT. The
  following settings for 'n_frequency' can be used as a reference. For NLU tasks with the RoBERTa-large
  model, 'n_frequency': 1000 achieves results similar to 'r': 8 in LoRA, while LoRA uses about 16 times as
  many parameters as FourierFT. For image classification tasks with ViT-large models, 'n_frequency': 3000
  achieves results similar to 'r': 16 in LoRA, while LoRA uses about 11 times as many parameters as
  FourierFT.
- **scaling** (`float`) --
  The scaling value for the delta W matrix. This is an important hyperparameter used for scaling, similar to
  the 'lora_alpha' parameter in the LoRA method. 'scaling' can be determined during the hyperparameter search
  process. However, if users want to skip this process, one can refer to the settings in the following
  scenarios. This parameter can be set to 100.0 or 150.0 for both RoBERTa-base and RoBERTa-large models
  across all NLU (GLUE) tasks. This parameter can be set to 300.0 for both LLaMA family models for all
  instruction tuning. This parameter can be set to 300.0 for both ViT-base and ViT-large models across all
  image classification tasks.
- **random_loc_seed** (`int`) --
  Seed for the random location of the frequencies, i.e., the spectral entry matrix.
- **target_modules** (`Union[list[str],str]`) --
  List of module names or regex expression of the module names to replace with FourierFT. For example, ['q',
  'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$'. Only linear layers are supported.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weight like (fan_in, fan_out).
- **bias** (`str`) --
  Bias type for FourierFT. Can be 'none', 'all' or 'fourier_only'.
- **modules_to_save** (`list[str]`) --
  List of modules apart from FourierFT layers to be set as trainable and saved in the final checkpoint. For
  example, in Sequence Classification or Token Classification tasks, the final layer `classifier/score` are
  randomly initialized and as such need to be trainable and saved.
- **layers_to_transform** (`Union[list[int],int]`) --
  The layer indexes to transform. If this argument is specified, PEFT will transform only the layer indexes
  that are specified inside this list. If a single integer is passed, PEFT will transform only the layer at
  this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different to None and if the layer pattern is
  not in the common layers pattern. This should target the `nn.ModuleList` of the model, which is often
  called `'layers'` or `'h'`.
- **n_frequency_pattern** (`dict`) --
  The mapping from layer names or regexp expression to n_frequency which are different from the default
  specified. For example, `{"model.decoder.layers.0.encoder_attn.k_proj": 1000}`.
- **init_weights** (`bool`) --
  The initialization of the Fourier weights. Set this to False (the default) to initialize the spectrum from
  a standard normal distribution. Set this to True to initialize the spectrum to zeros.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [FourierFTModel](/docs/peft/v0.18.0.rc0/en/package_reference/fourierft#peft.FourierFTModel).




</div>

## FourierFTModel[[peft.FourierFTModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.FourierFTModel</name><anchor>peft.FourierFTModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/fourierft/model.py#L31</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([FourierFTConfig](/docs/peft/v0.18.0.rc0/en/package_reference/fourierft#peft.FourierFTConfig)) -- The configuration of the FourierFT model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The FourierFT model.</retdesc></docstring>

Creates FourierFT model from a pretrained transformers model.

The method is described in detail in https://huggingface.co/papers/2405.03003.
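
As a hedged illustration (the base checkpoint and hyperparameters below are placeholders drawn from the reference settings in `FourierFTConfig` above), the model is usually created via `get_peft_model` rather than by instantiating this class directly:

```py
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import FourierFTConfig, get_peft_model

>>> base_model = AutoModelForSequenceClassification.from_pretrained("roberta-large")
>>> config = FourierFTConfig(
...     task_type="SEQ_CLS",
...     n_frequency=1000,  # reference setting for NLU with RoBERTa-large
...     scaling=150.0,  # reference scaling for RoBERTa on GLUE tasks
...     target_modules=["query", "value"],
... )
>>> model = get_peft_model(base_model, config)
>>> model.print_trainable_parameters()
```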







**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([FourierFTConfig](/docs/peft/v0.18.0.rc0/en/package_reference/fourierft#peft.FourierFTConfig)): The configuration of the Fourier model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/fourierft.md" />

### C3A: Parameter-Efficient Fine-Tuning via Circular Convolution
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/c3a.md

# C3A: Parameter-Efficient Fine-Tuning via Circular Convolution

[C3A](https://huggingface.co/papers/2407.19342) is a parameter-efficient fine-tuning technique that leverages Circular Convolution to achieve high rank adaptation within reasonable resource limits.

Note that you should use a much larger learning rate (LR) for C3A than for other methods. For example, an LR of 1e-1 is a good starting point. In addition, a much smaller weight decay should be used. You can refer to the `method_comparison` folder for more details.

The `block_size` affects the number of tunable parameters and the performance. To start with, you can choose a common divisor of $d_1$ and $d_2$ (the input and output sizes of the target layer) near $\frac{\sqrt{d_1\times d_2}}{r}$, where $r$ is the LoRA rank you would use for this task; a sketch of this heuristic follows.
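
As a non-authoritative sketch of this heuristic (the layer shape $d_1=768$, $d_2=3072$ and reference rank $r=16$ are made-up values):

```py
>>> import math

>>> d1, d2, r = 768, 3072, 16  # hypothetical layer shape and reference LoRA rank
>>> target = math.sqrt(d1 * d2) / r  # 96.0
>>> # valid block sizes are the common divisors of d1 and d2
>>> divisors = [b for b in range(1, min(d1, d2) + 1) if d1 % b == 0 and d2 % b == 0]
>>> min(divisors, key=lambda b: abs(b - target))
96
```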

C3A currently has the following constraints:

- Only `nn.Linear` layers are supported.
- Quantized layers are not supported.
- The block size should be a common divisor of both the input and output sizes of target layers. 

If these constraints don't work for your use case, consider other methods instead.

The abstract from the paper is:

> Low-Rank Adaptation (LoRA) has gained popularity for fine-tuning large foundation models, leveraging low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ to represent weight changes (i.e., $\Delta \mathbf{W} = \mathbf{B} \mathbf{A}$). This method reduces trainable parameters and mitigates heavy memory consumption associated with full delta matrices by sequentially multiplying $\mathbf{A}$ and $\mathbf{B}$ with the activation. Despite its success, the intrinsic low-rank characteristic may limit its performance. Although several variants have been proposed to address this issue, they often overlook the crucial computational and memory efficiency brought by LoRA. In this paper, we propose Circular Convolution Adaptation (C3A), which not only achieves high-rank adaptation with enhanced performance but also excels in both computational power and memory utilization. Extensive experiments demonstrate that C3A consistently outperforms LoRA and its variants across various fine-tuning tasks. 

## C3AConfig[[peft.C3AConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.C3AConfig</name><anchor>peft.C3AConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/c3a/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "block_size", "val": ": int = 256"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "block_size_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "init_weights", "val": ": Optional[Union[bool, Literal['gaussian', 'kaiming_uniform', 'xavier_uniform']]] = 'xavier_uniform'"}]</parameters><paramsdesc>- **block_size** (`int`) --
  The block size for C3A; both the input size and the output size of the target layer must be divisible by
  it. If you have no idea what block_size you should use, set it to the greatest common divisor of all input
  & output sizes of your target layers. Increasing this results in fewer parameters.
- **target_modules** (`Union[list[str],str]`) -- The names of the modules to apply C3A to.
- **bias** (`str`) -- Bias type for C3A. Can be 'none', 'all' or 'c3a_only'. If 'all' or 'c3a_only', the
  corresponding biases will be updated during training. Be aware that this means that, even when disabling
  the adapters, the model will not produce the same output as the base model would have without adaptation.
- **modules_to_save** (`list[str]`) -- List of modules apart from C3A layers to be set as trainable
  and saved in the final checkpoint.
- **layers_to_transform** (`Union[list[int],int]`) --
  The layer indexes to transform, if this argument is specified, it will apply C3A on the layer indexes that
  are specified in this list. If a single integer is passed, it will apply C3A on the layer at this index.
- **layers_pattern** (`str`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None` and if the layer
  pattern is not in the common layers pattern.
- **block_size_pattern** (`dict`) --
  The mapping from layer names or regexp expression to block_size which are different from the default
  specified. For example, `{"model.decoder.layers.0.encoder_attn.k_proj": 1280}`.
- **init_weights** (`Union[bool, Literal["gaussian", "kaiming_uniform", "xavier_uniform"]]`) --
  Defaults to 'xavier_uniform'. Setting this to `False` also uses 'xavier_uniform'. To set the weights to
  zeros (thus making C3A a no-op), set the value to `True`.</paramsdesc><paramgroups>0</paramgroups></docstring>
This is the configuration class to store the configuration of a [C3AModel](/docs/peft/v0.18.0.rc0/en/package_reference/c3a#peft.C3AModel).




</div>

## C3AModel[[peft.C3AModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.C3AModel</name><anchor>peft.C3AModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/c3a/model.py#L29</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([C3AConfig](/docs/peft/v0.18.0.rc0/en/package_reference/c3a#peft.C3AConfig)) -- The configuration of the C3A model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The C3A model.</retdesc></docstring>

Creates C3A model from a pretrained transformers model.

The method is described in detail in https://huggingface.co/papers/2407.19342.
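
As a hedged illustration (the base checkpoint and target modules are placeholders), C3A is typically applied via `get_peft_model`; remember the larger learning rate recommended above when training:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import C3AConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> # the attention projections of opt-125m are 768x768, so 256 divides both sizes
>>> config = C3AConfig(block_size=256, target_modules=["q_proj", "v_proj"])
>>> model = get_peft_model(base_model, config)
>>> model.print_trainable_parameters()
```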







**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([C3AConfig](/docs/peft/v0.18.0.rc0/en/package_reference/c3a#peft.C3AConfig)): The configuration of the C3A model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/c3a.md" />

### Model merge[[peft.utils.merge_utils.prune]]
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/merge_utils.md

# Model merge[[peft.utils.merge_utils.prune]]

PEFT provides several internal utilities for [merging LoRA adapters](../developer_guides/model_merging) with the TIES and DARE methods.
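
As a hedged, minimal sketch of how these utilities compose (random tensors stand in for the task-specific delta weights of real adapters):

```py
>>> import torch
>>> from peft.utils.merge_utils import dare_linear, ties

>>> # three task tensors standing in for delta weights from three adapters
>>> task_tensors = [torch.randn(10, 10) for _ in range(3)]
>>> weights = torch.tensor([1.0, 1.0, 1.0])

>>> # TIES: prune to 50% density, build a majority sign mask, then disjoint-merge
>>> merged_ties = ties(task_tensors, weights, density=0.5, majority_sign_method="total")

>>> # DARE (linear): randomly drop values and rescale before the weighted merge
>>> merged_dare = dare_linear(task_tensors, weights, density=0.5)
>>> merged_ties.shape, merged_dare.shape
(torch.Size([10, 10]), torch.Size([10, 10]))
```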

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.prune</name><anchor>peft.utils.merge_utils.prune</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L75</source><parameters>[{"name": "tensor", "val": ": Tensor"}, {"name": "density", "val": ": float"}, {"name": "method", "val": ": typing.Literal['magnitude', 'random']"}, {"name": "rescale", "val": ": bool = False"}]</parameters><paramsdesc>- **tensor** (`torch.Tensor`) -- The tensor to prune.
- **density** (`float`) -- The fraction of values to preserve. Should be in [0,1].
- **method** (`str`) -- The method to use to prune. Should be one of ["magnitude", "random"].
- **rescale** (`bool`) -- Whether to rescale the result to preserve the expected value of the original tensor.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The pruned tensor.</retdesc></docstring>

Prune the values of task tensors based on the `method`.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.calculate_majority_sign_mask</name><anchor>peft.utils.merge_utils.calculate_majority_sign_mask</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L103</source><parameters>[{"name": "tensor", "val": ": Tensor"}, {"name": "method", "val": ": typing.Literal['total', 'frequency'] = 'total'"}]</parameters><paramsdesc>- **tensor** (`torch.Tensor`) -- The tensor to get the mask from.
- **method** (`str`) -- The method to use to get the mask. Should be one of ["total", "frequency"].</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The majority sign mask.</retdesc></docstring>

Get the mask of the majority sign across the task tensors. Task tensors are stacked on dimension 0.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.disjoint_merge</name><anchor>peft.utils.merge_utils.disjoint_merge</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L128</source><parameters>[{"name": "task_tensors", "val": ": Tensor"}, {"name": "majority_sign_mask", "val": ": Tensor"}]</parameters><paramsdesc>- **task_tensors** (`torch.Tensor`) -- The task tensors to merge.
- **majority_sign_mask** (`torch.Tensor`) -- The mask of the majority sign across the task tensors.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The merged tensor.</retdesc></docstring>

Merge the task tensors using disjoint merge.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.task_arithmetic</name><anchor>peft.utils.merge_utils.task_arithmetic</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L144</source><parameters>[{"name": "task_tensors", "val": ": list"}, {"name": "weights", "val": ": Tensor"}]</parameters><paramsdesc>- **task_tensors** (`List[torch.Tensor]`) -- The task tensors to merge.
- **weights** (`torch.Tensor`) -- The weights of the task tensors.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The merged tensor.</retdesc></docstring>

Merge the task tensors using `task arithmetic`.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.ties</name><anchor>peft.utils.merge_utils.ties</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L185</source><parameters>[{"name": "task_tensors", "val": ": list"}, {"name": "weights", "val": ": Tensor"}, {"name": "density", "val": ": float"}, {"name": "majority_sign_method", "val": ": typing.Literal['total', 'frequency'] = 'total'"}]</parameters><paramsdesc>- **task_tensors** (`List[torch.Tensor]`) -- The task tensors to merge.
- **weights** (`torch.Tensor`) -- The weights of the task tensors.
- **density** (`float`) -- The fraction of values to preserve. Should be in [0,1].
- **majority_sign_method** (`str`) --
  The method to use to get the majority sign mask. Should be one of ["total", "frequency"].</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The merged tensor.</retdesc></docstring>

Merge the task tensors using `ties`.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.dare_linear</name><anchor>peft.utils.merge_utils.dare_linear</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L217</source><parameters>[{"name": "task_tensors", "val": ": list"}, {"name": "weights", "val": ": Tensor"}, {"name": "density", "val": ": float"}]</parameters><paramsdesc>- **task_tensors** (`List[torch.Tensor]`) -- The task tensors to merge.
- **weights** (`torch.Tensor`) -- The weights of the task tensors.
- **density** (`float`) -- The fraction of values to preserve. Should be in [0,1].</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The merged tensor.</retdesc></docstring>

Merge the task tensors using `dare linear`.








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.merge_utils.dare_ties</name><anchor>peft.utils.merge_utils.dare_ties</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/merge_utils.py#L239</source><parameters>[{"name": "task_tensors", "val": ": list"}, {"name": "weights", "val": ": Tensor"}, {"name": "density", "val": ": float"}, {"name": "majority_sign_method", "val": ": typing.Literal['total', 'frequency'] = 'total'"}]</parameters><paramsdesc>- **task_tensors** (`List[torch.Tensor]`) -- The task tensors to merge.
- **weights** (`torch.Tensor`) -- The weights of the task tensors.
- **density** (`float`) -- The fraction of values to preserve. Should be in [0,1].
- **majority_sign_method** (`str`) --
  The method to use to get the majority sign mask. Should be one of ["total", "frequency"].</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The merged tensor.</retdesc></docstring>

Merge the task tensors using `dare ties`.








</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/merge_utils.md" />

### Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation (HRA)
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/hra.md

# Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation (HRA)

[HRA](https://huggingface.co/papers/2405.17484) is a simple but effective adapter-based fine-tuning method that leverages Householder reflections. It harnesses the advantages of both low-rank and orthogonal adaptation, reducing parameter and computation costs while penalizing the loss of pre-training knowledge. It consistently achieves better performance with fewer trainable parameters and outperforms state-of-the-art adapters across different models, including large language models (LLMs) and conditional image generators.


The abstract from the paper is:

> While following different technical routes, both low-rank and orthogonal adaptation techniques can efficiently adapt large-scale pre-training models in specific tasks or domains based on a small piece of trainable parameters. In this study, we bridge the gap between these two techniques, proposing a simple but effective adaptation method based on Householder reflections. Given a pre-trained model, our method fine-tunes its layers by multiplying each frozen weight matrix with an orthogonal matrix constructed by a chain of learnable Householder reflections (HRs). This HR-based orthogonal fine-tuning is equivalent to an adaptive low-rank adaptation. Moreover, we show that the orthogonality of the reflection planes corresponding to the HRs impacts the model capacity and regularity. The analysis motivates us to regularize the orthogonality of the HRs, leading to different implementations of the proposed Householder reflection adaptation (HRA) method. Compared with state-of-the-art methods, HRA achieves superior performance with fewer learnable parameters when adapting large language models and conditional image generators. The code is available at [peft](https://github.com/huggingface/peft/tree/main/src/peft/tuners/hra) and [HRA](https://github.com/DaShenZi721/HRA).

## HRAConfig[[peft.HRAConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.HRAConfig</name><anchor>peft.HRAConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/hra/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 8"}, {"name": "apply_GS", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  The rank of HRA across different layers. It is best to set 'r' to an even number; otherwise, the default
  initialization method will not work.
- **apply_GS** (`bool`) --
  Whether to apply Gram-Schmidt orthogonalization.
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear modules are chosen, excluding
  the output layer. If this is not specified, modules will be chosen according to the model architecture. If
  the architecture is not known, an error will be raised -- in this case, you should specify the target
  modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **init_weights** (`bool`) --
  Whether to perform initialization of HRA weights.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
- **modules_to_save** (`List[str]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [HRAModel](/docs/peft/v0.18.0.rc0/en/package_reference/hra#peft.HRAModel).




</div>

## HRAModel[[peft.HRAModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.HRAModel</name><anchor>peft.HRAModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/hra/model.py#L24</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to which the adapter tuner layers will be attached.
- **config** ([HRAConfig](/docs/peft/v0.18.0.rc0/en/package_reference/hra#peft.HRAConfig)) -- The configuration of the HRA model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The HRA model.</retdesc></docstring>

Creates Householder reflection adaptation (HRA) model from a pretrained model. The method is described in
https://huggingface.co/papers/2405.17484







<ExampleCodeBlock anchor="peft.HRAModel.example">

Example:
```py
>>> from diffusers import StableDiffusionPipeline
>>> from peft import HRAModel, HRAConfig

>>> config_te = HRAConfig(
...     r=8,
...     target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
...     init_weights=True,
... )
>>> config_unet = HRAConfig(
...     r=8,
...     target_modules=[
...         "proj_in",
...         "proj_out",
...         "to_k",
...         "to_q",
...         "to_v",
...         "to_out.0",
...         "ff.net.0.proj",
...         "ff.net.2",
...     ],
...     init_weights=True,
... )

>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = HRAModel(model.text_encoder, config_te, "default")
>>> model.unet = HRAModel(model.unet, config_unet, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** (`~torch.nn.Module`) -- The model to be adapted.
- **peft_config** ([HRAConfig](/docs/peft/v0.18.0.rc0/en/package_reference/hra#peft.HRAConfig)): The configuration of the HRA model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/hra.md" />

### VeRA: Vector-based Random Matrix Adaptation
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/vera.md

# VeRA: Vector-based Random Matrix Adaptation

[VeRA](https://huggingface.co/papers/2310.11454) is a parameter-efficient fine-tuning technique that is similar to LoRA but requires even fewer extra parameters while promising similar or even better performance. As such, it is particularly useful when the parameter budget is very limited, e.g. when scaling to very large models. The number of trainable parameters is reduced by sharing the same low-rank matrices across all layers and training only two additional vectors per layer.

When saving the adapter parameters, it's possible to eschew storing the low-rank matrices by setting `save_projection=False` on the `VeraConfig`, as sketched below. In that case, these matrices will be restored based on the fixed random seed from the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but we cannot guarantee reproducibility on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).
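
For example, a minimal configuration sketch (the rank is illustrative):

```py
>>> from peft import VeraConfig

>>> # smaller checkpoint; the projections are regenerated from projection_prng_key on load
>>> config = VeraConfig(r=256, save_projection=False, projection_prng_key=0)
```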

To handle different shapes of adapted layers, VeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.

VeRA currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

> Low-rank adaptation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which significantly reduces the number of trainable parameters compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, image classification tasks, and show its application in instruction-tuning of 7B and 13B language models.

## VeRAConfig[[peft.VeraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.VeraConfig</name><anchor>peft.VeraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/vera/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 256"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "projection_prng_key", "val": ": int = 0"}, {"name": "save_projection", "val": ": bool = True"}, {"name": "vera_dropout", "val": ": float = 0.0"}, {"name": "d_initial", "val": ": float = 0.1"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}]</parameters><paramsdesc>- **r** (`int`, *optional*, defaults to `256`) --
  VeRA parameter dimension ("rank"). Choose higher values than LoRA ranks here, since VeRA uses far fewer
  parameters than LoRA (see Table 1).
- **target_modules** (`Union[List[str], str]`) --
  The names of the modules to apply Vera to. Only linear layers are supported.
- **projection_prng_key** (`int`) --
  Vera PRNG init key. Used for initialising vera_A and vera_B for new models or when loading a checkpoint
  that did not include these projections. Defaults to `0`.
- **save_projection** (`bool`) --
  Whether to save the vera_A / vera_B projections in the state dict alongside per layer lambda_b / lambda_d
  weights. This will increase the size of the checkpoint, but guarantee that we can reload the checkpoint on
  all system configurations. Defaults to `True`.
- **vera_dropout** (`float`) --
  The dropout probability for Vera layers.
- **d_initial** (`float`, *optional*, defaults to `0.1`) --
  Initial init value for `vera_lambda_d` vector used when initializing the VeRA parameters. Small values
  (<=0.1) are recommended (see Table 6c in the paper).
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses
  `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
- **bias** (`str`) --
  Bias type for Vera. Can be 'none', 'all' or 'vera_only'. If 'all' or 'vera_only', the corresponding biases
  will be updated during training. Be aware that this means that, even when disabling the adapters, the model
  will not produce the same output as the base model would have without adaptation.
- **modules_to_save** (`List[str]`) --
  List of modules apart from Vera layers to be set as trainable and saved in the final checkpoint.
- **init_weights** (`bool`) --
  Whether to initialize the weights of the Vera layers with their default initialization. Don't change this
  setting, except if you know exactly what you're doing.
- **layers_to_transform** (`Union[List[int],int]`) --
  The layer indexes to transform, if this argument is specified, it will apply the Vera transformations on
  the layer indexes that are specified in this list. If a single integer is passed, it will apply the Vera
  transformations on the layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [VeraModel](/docs/peft/v0.18.0.rc0/en/package_reference/vera#peft.VeraModel).

Paper: https://huggingface.co/papers/2310.11454.




</div>

## VeRAModel[[peft.VeraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.VeraModel</name><anchor>peft.VeraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/vera/model.py#L67</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **config** ([VeraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/vera#peft.VeraConfig)) -- The configuration of the Vera model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The Vera model.</retdesc></docstring>

Creates Vector-based Random Matrix Adaptation (Vera) model from a pretrained transformers model.







<ExampleCodeBlock anchor="peft.VeraModel.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import VeraConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> config = VeraConfig(r=128)
>>> model = get_peft_model(base_model, config)
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([VeraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/vera#peft.VeraConfig)): The configuration of the Vera model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/vera.md" />

### Functions for PEFT integration
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/functional.md

# Functions for PEFT integration

A collection of functions that can be useful for models that are not `PeftModel` instances, e.g. for transformers or diffusers integrations.

The functions provided here can be considered "public API" of PEFT and hence are safe to be used by packages that provide PEFT integrations.

## Cast the adapter weight dtypes[[peft.tuners.tuners_utils.cast_adapter_dtype]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.tuners_utils.cast_adapter_dtype</name><anchor>peft.tuners.tuners_utils.cast_adapter_dtype</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1993</source><parameters>[{"name": "model", "val": ": nn.Module"}, {"name": "adapter_name", "val": ": str"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model containing the adapter weights to cast.
- **adapter_name** (`str`) --
  The adapter name.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`.</paramsdesc><paramgroups>0</paramgroups></docstring>

A helper method to cast the adapter weights to the correct dtype.

Currently, this only upcasts float16 and bfloat16 to float32.




</div>

## Delete the PEFT adapter from model[[peft.tuners.tuners_utils.delete_adapter]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.tuners_utils.delete_adapter</name><anchor>peft.tuners.tuners_utils.delete_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1954</source><parameters>[{"name": "model", "val": ": nn.Module"}, {"name": "adapter_name", "val": ": str"}, {"name": "prefix", "val": ": str"}, {"name": "layer_cls", "val": ": type[BaseTunerLayer] = <class 'peft.tuners.tuners_utils.BaseTunerLayer'>"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model from which the adapter should be deleted.
- **adapter_name** (str) --
  The name of the adapter to be deleted.
- **prefix** (str) --
  The prefix of the PEFT method, e.g. "lora_" for LoRA.
- **layer_cls** (type, optional) --
  The class of the adapter layer. Defaults to `BaseTunerLayer`.</paramsdesc><paramgroups>0</paramgroups><rettype>new_adapter (list[str] | None)</rettype><retdesc>The name of remaining adapter(s) after deletion, or `None` if there are no active adapters left. Use this
to set the new active adapter of the model if necessary.</retdesc></docstring>

Delete an existing PEFT adapter.

Note: This function does not delete the PEFT config on the model, if there is one. It will also not completely
purge the PEFT layers if the last PEFT adapter is deleted. For this, consider using `model.unload()` if using a
PEFT model instance, or just reloading the base model.
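
A minimal, non-authoritative sketch (assuming a LoRA adapter named `"default"` was previously injected into `model`):

```py
>>> from peft.tuners.tuners_utils import delete_adapter

>>> # "lora_" is the parameter prefix used by the LoRA method
>>> remaining = delete_adapter(model, adapter_name="default", prefix="lora_")
>>> remaining  # names of the remaining adapter(s), or None if none are left
```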








</div>

## Get the state dict of the PEFT adapter[[peft.get_peft_model_state_dict]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.get_peft_model_state_dict</name><anchor>peft.get_peft_model_state_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/save_and_load.py#L57</source><parameters>[{"name": "model", "val": ""}, {"name": "state_dict", "val": " = None"}, {"name": "adapter_name", "val": " = 'default'"}, {"name": "unwrap_compiled", "val": " = False"}, {"name": "save_embedding_layers", "val": " = 'auto'"}]</parameters><paramsdesc>- **model** ([PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel)) -- The Peft model. When using torch.nn.DistributedDataParallel, DeepSpeed or FSDP,
  the model should be the underlying model/unwrapped model (i.e. model.module).
- **state_dict** (`dict`, *optional*, defaults to `None`) --
  The state dict of the model. If not provided, the state dict of the passed model will be used.
- **adapter_name** (`str`, *optional*, defaults to `"default"`) --
  The name of the adapter whose state dict should be returned.
- **unwrap_compiled** (`bool`, *optional*, defaults to `False`) --
  Whether to unwrap the model if torch.compile was used.
- **save_embedding_layers** (`Union[bool, str]`, *optional*, defaults to `"auto"`) --
  If `True`, save the embedding layers in addition to adapter weights. If `"auto"`, checks the common
  embedding layers `peft.utils.other.EMBEDDING_LAYER_NAMES` in the config's `target_modules` when available
  and sets the flag accordingly. This only works for 🤗 transformers models.</paramsdesc><paramgroups>0</paramgroups></docstring>

Get the state dict of the given adapter of the PEFT model.

This only includes the PEFT parameters, not the parameters of the base model. Thus the returned `state_dict` is
generally small compared to the full model size. To retrieve the full `state_dict`, just call `model.state_dict()`.

Note that the adapter name is removed from the `state_dict`, as this is just an arbitrary name that can be changed
when loading the adapter. So e.g. if the adapter name is `'default'` and the original key is
`'model.q_proj.lora_A.default.weight'`, the returned key will be `'model.q_proj.lora_A.weight'`. Use this function
in conjunction with [set_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/functional#peft.set_peft_model_state_dict) to take care of the adapter name when loading weights.
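
As a hedged sketch of that round trip (the checkpoint and target modules are placeholders):

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import (
...     LoraConfig,
...     get_peft_model,
...     get_peft_model_state_dict,
...     inject_adapter_in_model,
...     set_peft_model_state_dict,
... )

>>> config = LoraConfig(target_modules=["q_proj", "v_proj"])

>>> # extract only the adapter weights from a PeftModel
>>> peft_model = get_peft_model(AutoModelForCausalLM.from_pretrained("facebook/opt-125m"), config)
>>> adapter_state_dict = get_peft_model_state_dict(peft_model)

>>> # inject bare adapter layers into a fresh base model, then load the weights back
>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> base_model = inject_adapter_in_model(config, base_model)
>>> load_result = set_peft_model_state_dict(base_model, adapter_state_dict)
```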




</div>

## Inject a PEFT adapter into the model based on a PEFT config[[peft.inject_adapter_in_model]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.inject_adapter_in_model</name><anchor>peft.inject_adapter_in_model</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mapping.py#L47</source><parameters>[{"name": "peft_config", "val": ": PeftConfig"}, {"name": "model", "val": ": torch.nn.Module"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **peft_config** (`PeftConfig`) --
  Configuration object containing the parameters of the PEFT model.
- **model** (`torch.nn.Module`) --
  The input model where the adapter will be injected.
- **adapter_name** (`str`, `optional`, defaults to `"default"`) --
  The name of the adapter to be injected, if not provided, the default adapter name is used ("default").
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.
- **state_dict** (`dict`, *optional*, defaults to `None`) --
  If a `state_dict` is passed here, the adapters will be injected based on the entries of the state_dict.
  This can be useful when the exact `target_modules` of the PEFT method is unknown, for instance because the
  checkpoint was created without meta data. Note that the values from the `state_dict` are not used, only the
  keys are used to determine the correct layers that should be adapted.</paramsdesc><paramgroups>0</paramgroups></docstring>

Create PEFT layers and inject them into the model in-place.

Currently the API does not support prompt learning methods and adaption prompt.

This function is similar to [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) but it does not return a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) instance. Instead, it returns
the original, mutated instance of the passed model.




</div>

## Set the active PEFT adapter(s) of the model[[peft.tuners.tuners_utils.set_adapter]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.tuners_utils.set_adapter</name><anchor>peft.tuners.tuners_utils.set_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L1918</source><parameters>[{"name": "model", "val": ""}, {"name": "adapter_name", "val": ": str | list[str]"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "layer_cls", "val": ": type[BaseTunerLayer] = <class 'peft.tuners.tuners_utils.BaseTunerLayer'>"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model on which the adapter(s) should be set.
- **adapter_name** (`str` or `list[str]`) --
  The name(s) of the adapter(s) to set as active.
- **inference_mode** (bool, optional) --
  Whether the activated adapter should be frozen (i.e. `requires_grad=False`). Default is False.
- **layer_cls** (type, optional) --
  The class of the adapter layer. Defaults to `BaseTunerLayer`.</paramsdesc><paramgroups>0</paramgroups></docstring>
Set the active PEFT adapter(s) of the model.

Active adapters are those adapters that participate in the forward pass. Use this function if you want to switch
between multiple PEFT adapters.
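
A minimal sketch, assuming `model` already has two LoRA adapters injected under the hypothetical names `"english"` and `"french"`:

```py
>>> from peft.tuners.tuners_utils import set_adapter

>>> set_adapter(model, "french")  # only "french" participates in the forward pass
>>> set_adapter(model, ["english", "french"])  # activate both adapters at once
```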




</div>

## Set the `requires_grad` attribute of the specified adapters[[peft.tuners.tuners_utils.set_requires_grad]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.tuners_utils.set_requires_grad</name><anchor>peft.tuners.tuners_utils.set_requires_grad</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/tuners_utils.py#L2036</source><parameters>[{"name": "model", "val": ""}, {"name": "adapter_names", "val": ": str | Sequence[str]"}, {"name": "requires_grad", "val": ": bool = True"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model containing the adapter(s) whose gradients should be set.
- **adapter_names** (`str` or `Sequence[str]`) --
  The name(s) of the adapter(s) whose gradients should be enabled or disabled.
- **requires_grad** (`bool`, *optional*) --
  Whether to enable (`True`, the default) or disable (`False`) the gradients.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable or disable gradients on the given adapter(s).




</div>

## Load the weights of the PEFT state dict into the model[[peft.set_peft_model_state_dict]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.set_peft_model_state_dict</name><anchor>peft.set_peft_model_state_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/save_and_load.py#L405</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_model_state_dict", "val": ""}, {"name": "adapter_name", "val": " = 'default'"}, {"name": "ignore_mismatched_sizes", "val": ": bool = False"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters><paramsdesc>- **model** ([PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel)) --
  The Peft model.
- **peft_model_state_dict** (`dict`) --
  The state dict of the Peft model.
- **adapter_name** (`str`, *optional*, defaults to `"default"`) --
  The name of the adapter whose state dict should be set.
- **ignore_mismatched_sizes** (`bool`, *optional*, defaults to `False`) --
  Whether to ignore mismatched sizes in the state dict.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  This argument must be `True` if the `model` was loaded with adapter weights on the meta device, e.g. after
  calling `inject_adapter_in_model` with `low_cpu_mem_usage=True`. Otherwise, leave it as `False`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the state dict of the PEFT model.

Given a PEFT `state_dict` (as returned by [get_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model_state_dict)), insert the weights into the model. The
model needs to have the PEFT adapters already in place (e.g. via [inject_adapter_in_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.inject_adapter_in_model)).

Setting the adapter weights also takes care of re-inserting the adapter name. This name may be a different name
than the one originally used to train the adapter.




</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/functional.md" />

### Hotswapping adapters
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/hotswap.md

# Hotswapping adapters

The idea of hotswapping an adapter is the following: we can already load multiple adapters, e.g. two LoRAs, at the same time. But sometimes, we want to load one LoRA and then replace its weights in-place with the LoRA weights of another adapter. This is now possible with the `hotswap_adapter` function.

In general, this should be faster than deleting one adapter and loading a new adapter in its place, which is how the same final outcome would be achieved without hotswapping. Another advantage of hotswapping is that it avoids re-compilation in case the PEFT model was already compiled with `torch.compile`. This can save quite a lot of time.

## Example without `torch.compile`

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter

model_id = ...
inputs = ...
device = ...
path_adapter_0 = ...  # path to the first LoRA adapter
path_adapter_1 = ...  # path to the LoRA adapter that will be swapped in
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# load lora 0
model = PeftModel.from_pretrained(model, path_adapter_0)
with torch.inference_mode():
    output_adapter_0 = model(inputs).logits

# replace the "default" lora adapter with the new one
hotswap_adapter(model, path_adapter_1, adapter_name="default", torch_device=device)
with torch.inference_mode():
    output_adapter_1 = model(inputs).logits
```

## Example with `torch.compile`

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter, prepare_model_for_compiled_hotswap

model_id = ...
inputs = ...
device = ...
path_adapter_0 = ...  # path to the first LoRA adapter
path_adapter_1 = ...  # path to the LoRA adapter that will be swapped in
max_rank = ...  # maximum rank among all LoRA adapters that will be used
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# load lora 0
model = PeftModel.from_pretrained(model, path_adapter_0)
# Prepare the model to allow hotswapping even if ranks/scalings of the 2nd adapter differ.
# You can skip this step if all ranks and scalings are identical.
prepare_model_for_compiled_hotswap(model, target_rank=max_rank)
model = torch.compile(model)
with torch.inference_mode():
    output_adapter_0 = model(inputs).logits

# replace the "default" lora adapter with the new one
hotswap_adapter(model, path_adapter_1, adapter_name="default", torch_device=device)
with torch.inference_mode():
    output_adapter_1 = model(inputs).logits
```

## Caveats[[peft.utils.hotswap.hotswap_adapter]]

Hotswapping works with transformers models and diffusers models. However, there are some caveats:

- Right now, only LoRA is properly supported.
- It only works for the same PEFT method, so no swapping LoRA and LoHa, for example.
- The adapter that is being swapped in must target the same layers as the previous adapter, or a subset of those layers. It cannot target new layers. Therefore, if possible, start with the adapter that targets the most layers.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.hotswap.hotswap_adapter</name><anchor>peft.utils.hotswap.hotswap_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/hotswap.py#L545</source><parameters>[{"name": "model", "val": ""}, {"name": "model_name_or_path", "val": ""}, {"name": "adapter_name", "val": ""}, {"name": "torch_device", "val": " = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([~PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel)) --
  The PEFT model with the loaded adapter.
- **model_name_or_path** (`str`) --
  The name or path of the model to load the new adapter from.
- **adapter_name** (`str`) --
  The name of the adapter to swap, e.g. `"default"`. The name will stay the same after swapping.
- **torch_device** (`str`, *optional*, defaults to `None`) --
  The device to load the new adapter onto.
- ****kwargs** (`optional`) --
  Additional keyword arguments used for loading the config and weights.</paramsdesc><paramgroups>0</paramgroups></docstring>
Substitute old adapter data with new adapter data, keeping the rest the same.

As of now, only LoRA is supported.

This function is useful when you want to replace the loaded adapter with a new adapter. The adapter name will
remain the same, but the weights and other parameters will be swapped out.

If the adapters are incompatible, e.g. targeting different layers or having different alpha values, an error will
be raised.

<ExampleCodeBlock anchor="peft.utils.hotswap.hotswap_adapter.example">

Example:

```py
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel
>>> from peft.utils.hotswap import hotswap_adapter

>>> model_id = ...
>>> inputs = ...
>>> device = ...
>>> model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

>>> # load lora 0
>>> model = PeftModel.from_pretrained(model, "path-adapter-0")
>>> model = torch.compile(model)  # optionally compile the model
>>> with torch.inference_mode():
...     output_adapter_0 = model(inputs).logits

>>> # replace the "default" lora adapter with the new one
>>> hotswap_adapter(model, "path-adapter-1", adapter_name="default", torch_device=device)
>>> with torch.inference_mode():
...     output_adapter_1 = model(inputs).logits
```

</ExampleCodeBlock>




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.utils.hotswap.hotswap_adapter_from_state_dict</name><anchor>peft.utils.hotswap.hotswap_adapter_from_state_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/hotswap.py#L369</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "state_dict", "val": ": dict[str, torch.Tensor]"}, {"name": "adapter_name", "val": ": str"}, {"name": "config", "val": ": LoraConfig"}, {"name": "parameter_prefix", "val": ": str = 'lora_'"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model with the loaded adapter.
- **state_dict** (`dict[str, torch.Tensor]`) --
  The state dict of the new adapter, which needs to be compatible (targeting same modules etc.).
- **adapter_name** (`str`) --
  The name of the adapter that should be hot-swapped, e.g. `"default"`. The name will remain the same after
  swapping.
- **config** (`LoraConfig`) --
  The config of the LoRA adapter. This is used to determine the scaling and rank of the adapter.
- **parameter_prefix** (`str`, *optional*, defaults to `"lora_"`) --
  The prefix used to identify the adapter's keys in the state dict. For LoRA, this would be `"lora_"` (the
  default).</paramsdesc><paramgroups>0</paramgroups><raises>- ``RuntimeError`` -- 
  If the old and the new adapter are not compatible, a RuntimeError is raised.</raises><raisederrors>``RuntimeError``</raisederrors></docstring>

Swap out the adapter weights from the model with the weights from state_dict.

As of now, only LoRA is supported.

This is a low-level function that assumes that the adapters have been checked for compatibility and that the
state_dict has been correctly mapped to work with PEFT. For a high level function that performs this work for you,
use `hotswap_adapter` instead.
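
A minimal low-level sketch; `model`, `new_state_dict`, and `lora_config` are assumptions that would be prepared elsewhere (keys already mapped to the PEFT format, compatibility already checked):

```python
from peft.utils.hotswap import hotswap_adapter_from_state_dict

# Swap the weights of the "default" adapter in-place with those in new_state_dict.
hotswap_adapter_from_state_dict(
    model,
    state_dict=new_state_dict,
    adapter_name="default",
    config=lora_config,
)
```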








</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/hotswap.md" />

### BOFT
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/boft.md

# BOFT

[Orthogonal Butterfly (BOFT)](https://hf.co/papers/2311.06243) is a generic method designed for finetuning foundation models. It improves the parameter efficiency of the finetuning paradigm -- Orthogonal Finetuning (OFT) -- by taking inspiration from the Cooley-Tukey fast Fourier transform, showing favorable results across finetuning different foundation models, including large vision transformers, large language models, and text-to-image diffusion models.

The abstract from the paper is:

*Large foundation models are becoming ubiquitous, but training them from scratch is prohibitively expensive. Thus, efficiently adapting these powerful models to downstream tasks is increasingly important. In this paper, we study a principled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream task adaptation. Despite demonstrating good generalizability, OFT still uses a fairly large number of trainable parameters due to the high dimensionality of orthogonal matrices. To address this, we start by examining OFT from an information transmission perspective, and then identify a few key desiderata that enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast Fourier transform algorithm enables efficient information transmission, we propose an efficient orthogonal parameterization using butterfly structures. We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a special case, BOFT introduces a generalized orthogonal finetuning framework. Finally, we conduct an extensive empirical study of adapting large vision transformers, large language models, and text-to-image diffusion models to various downstream tasks in vision and language*.

## BOFTConfig[[peft.BOFTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.BOFTConfig</name><anchor>peft.BOFTConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/boft/config.py#L28</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "boft_block_size", "val": ": int = 4"}, {"name": "boft_block_num", "val": ": int = 0"}, {"name": "boft_n_butterfly_factor", "val": ": int = 1"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "boft_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}]</parameters><paramsdesc>- **boft_block_size** (`int`) -- BOFT block size across different layers.
- **boft_block_num** (`int`) -- Number of BOFT blocks per injected layer.
- **boft_n_butterfly_factor** (`int`) -- Number of butterfly factors across different layers.
- **target_modules** (`Union[List[str],str]`) -- The names of the modules to apply the adapter to.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **boft_dropout** (`float`) --
  The multiplicative dropout probability, by setting OFT blocks to identity during training, similar to the
  dropout layer in LoRA.
- **fan_in_fan_out** (`bool`) -- Set this to True if the layer to replace stores weight like (fan_in, fan_out).
  For example, gpt-2 uses `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set
  to `True`.
- **bias** (`str`) -- Bias type for BOFT. Can be 'none', 'all' or 'boft_only'. If 'all' or 'boft_only', the
  corresponding biases will be updated during training. Be aware that this means that, even when disabling
  the adapters, the model will not produce the same output as the base model would have without adaptation.
- **modules_to_save** (`List[str]`) -- List of modules apart from BOFT layers to be set as trainable
  and saved in the final checkpoint.
- **layers_to_transform** (`Union[List[int],int]`) --
  The layer indexes to transform, if this argument is specified, it will apply the BOFT transformations on
  the layer indexes that are specified in this list. If a single integer is passed, it will apply the BOFT
  transformations on the layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None` and if the layer
  pattern is not in the common layers pattern. This should target the `nn.ModuleList` of the model, which is
  often called `'layers'` or `'h'`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [BOFTModel](/docs/peft/v0.18.0.rc0/en/package_reference/boft#peft.BOFTModel).




</div>

## BOFTModel[[peft.BOFTModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.BOFTModel</name><anchor>peft.BOFTModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/boft/model.py#L31</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** ([*transformers.PreTrainedModel*]) -- The model to be adapted.
- **config** ([*BOFTConfig*]) -- The configuration of the BOFT model.
- **adapter_name** (*str*) -- The name of the adapter, defaults to *"default"*.
- **low_cpu_mem_usage** (*bool*, *optional*, defaults to *False*) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>*torch.nn.Module*</rettype><retdesc>The BOFT model.</retdesc></docstring>

Creates a BOFT or OFT model from a pretrained transformers model. Papers: https://huggingface.co/papers/2311.06243,
https://huggingface.co/papers/2306.07280







<ExampleCodeBlock anchor="peft.BOFTModel.example">

Example:

```python
>>> import transformers
>>> from peft import BOFTConfig, get_peft_model

>>> config = BOFTConfig(
...     boft_block_size=8,
...     boft_n_butterfly_factor=1,
...     target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
...     boft_dropout=0.1,
...     bias="boft_only",
...     modules_to_save=["classifier"],
... )

>>> model = transformers.Dinov2ForImageClassification.from_pretrained(
...     "facebook/dinov2-large",
...     num_labels=100,
... )
>>> boft_model = get_peft_model(model, config)
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([*transformers.PreTrainedModel*]) -- The model to be adapted.
- **peft_config** ([*BOFTConfig*]): The configuration of the BOFT model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/boft.md" />

### Multitask prompt tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/multitask_prompt_tuning.md

# Multitask prompt tuning

[Multitask prompt tuning](https://huggingface.co/papers/2303.02861) decomposes the soft prompts of each task into a single learned transferable prompt instead of a separate prompt for each task. The single learned prompt can be adapted for each task by multiplicative low-rank updates.

The abstract from the paper is:

*Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods typically learn soft prompt vectors from scratch, and it has not been clear how to exploit the rich cross-task knowledge with prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that our proposed approach outperforms the state-of-the-art methods, including the full finetuning baseline in some cases, despite only tuning 0.035% as many task-specific parameters*.

## MultitaskPromptTuningConfig[[peft.MultitaskPromptTuningConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.MultitaskPromptTuningConfig</name><anchor>peft.MultitaskPromptTuningConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/multitask_prompt_tuning/config.py#L37</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "num_virtual_tokens", "val": ": int = None"}, {"name": "token_dim", "val": ": int = None"}, {"name": "num_transformer_submodules", "val": ": Optional[int] = None"}, {"name": "num_attention_heads", "val": ": Optional[int] = None"}, {"name": "num_layers", "val": ": Optional[int] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "prompt_tuning_init", "val": ": typing.Union[peft.tuners.multitask_prompt_tuning.config.MultitaskPromptTuningInit, str] = <MultitaskPromptTuningInit.RANDOM: 'RANDOM'>"}, {"name": "prompt_tuning_init_text", "val": ": typing.Optional[str] = None"}, {"name": "tokenizer_name_or_path", "val": ": typing.Optional[str] = None"}, {"name": "tokenizer_kwargs", "val": ": typing.Optional[dict] = None"}, {"name": "prompt_tuning_init_state_dict_path", "val": ": typing.Optional[str] = None"}, {"name": "prompt_tuning_init_task", "val": ": typing.Optional[int] = 0"}, {"name": "num_ranks", "val": ": typing.Optional[int] = 1"}, {"name": "num_tasks", "val": ": typing.Optional[int] = 1"}]</parameters></docstring>


</div>
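
A minimal configuration sketch; the checkpoint and hyperparameter values are illustrative, not prescribed by the paper:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import MultitaskPromptTuningConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # illustrative checkpoint

# num_tasks and num_ranks control the multiplicative low-rank decomposition
# of the single shared prompt; the values here are illustrative.
config = MultitaskPromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=50,
    num_tasks=3,
    num_ranks=1,
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```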

## MultitaskPromptEmbedding[[peft.tuners.MultitaskPromptEmbedding]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.tuners.MultitaskPromptEmbedding</name><anchor>peft.tuners.MultitaskPromptEmbedding</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/multitask_prompt_tuning/model.py#L28</source><parameters>[{"name": "config", "val": ": MultitaskPromptTuningConfig"}, {"name": "word_embeddings", "val": ""}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/multitask_prompt_tuning.md" />

### Helper methods
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/helpers.md

# Helper methods

A collection of helper functions for PEFT.

## Checking if a model is a PEFT model[[peft.helpers.check_if_peft_model]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.helpers.check_if_peft_model</name><anchor>peft.helpers.check_if_peft_model</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/helpers.py#L135</source><parameters>[{"name": "model_name_or_path", "val": ": str"}]</parameters><paramsdesc>- **model_name_or_path** (`str`) --
  Model id to check, can be local or on the Hugging Face Hub.</paramsdesc><paramgroups>0</paramgroups><rettype>`bool`</rettype><retdesc>True if the model is a PEFT model, False otherwise.</retdesc></docstring>

Check if the model is a PEFT model.
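
A minimal usage sketch (the repo id is illustrative; any local path or Hub model id can be passed):

```python
from peft.helpers import check_if_peft_model

# True if the repo or directory contains a PEFT adapter configuration.
is_peft = check_if_peft_model("ybelkada/opt-350m-lora")  # illustrative repo id
print(is_peft)
```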








</div>

## Temporarily Rescaling Adapter Scale in LoraLayer Modules[[peft.helpers.rescale_adapter_scale]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.helpers.rescale_adapter_scale</name><anchor>peft.helpers.rescale_adapter_scale</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/helpers.py#L156</source><parameters>[{"name": "model", "val": ""}, {"name": "multiplier", "val": ""}]</parameters><paramsdesc>- **model** -- The model containing `LoraLayer` modules whose scaling is to be adjusted.
- **multiplier** (float or int) --
  The multiplier that rescales the `scaling` attribute. Must be of type float or int.</paramsdesc><paramgroups>0</paramgroups><raises>- ``ValueError`` -- If the model does not contain any `LoraLayer`
  instances, indicating that the model does not support scaling.</raises><raisederrors>``ValueError``</raisederrors></docstring>

Context manager to temporarily rescale the scaling of the LoRA adapter in a model.

The original scaling values are restored when the context manager exits. This context manager works with
transformers and diffusers models that have directly loaded LoRA adapters.

For LoRA, applying this context manager with a multiplier in [0, 1] is strictly equivalent to applying
[wise-ft](https://huggingface.co/papers/2109.01903) (see [#1940](https://github.com/huggingface/peft/issues/1940)
for details). It can improve the performance of the model if there is a distribution shift between the training
data used for fine-tuning and the test data used during inference.

Warning: It has been reported that when using Apple's MPS backend for PyTorch, it is necessary to add a short sleep
time after exiting the context before the scales are fully restored.







<ExampleCodeBlock anchor="peft.helpers.rescale_adapter_scale.example">

Example:

```python
>>> model = ModelWithLoraLayer()
>>> multiplier = 0.5
>>> with rescale_adapter_scale(model, multiplier):
...     outputs = model(**inputs)  # Perform operations with the scaled model
>>> outputs = model(**inputs)  # The original scaling values are restored here
```

</ExampleCodeBlock>


</div>

## Context manager to disable input dtype casting in the `forward` method of LoRA layers[[peft.helpers.disable_input_dtype_casting]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.helpers.disable_input_dtype_casting</name><anchor>peft.helpers.disable_input_dtype_casting</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/helpers.py#L217</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "active", "val": ": bool = True"}]</parameters><paramsdesc>- **model** (nn.Module) --
  The model containing PEFT modules whose input dtype casting is to be adjusted.
- **active** (bool) --
  Whether the context manager is active (default) or inactive.</paramsdesc><paramgroups>0</paramgroups></docstring>

Context manager that disables casting of the input to the dtype of the weight in the `forward` method of LoRA layers.
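
A minimal sketch, assuming `model` is a PEFT model with LoRA layers and `inputs` is a prepared batch:

```python
from peft.helpers import disable_input_dtype_casting

# Run a forward pass without casting the input to the weight dtype.
with disable_input_dtype_casting(model, active=True):
    output = model(**inputs)
```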




</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/helpers.md" />

### MiSS
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/miss.md

# MiSS

MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing ([MiSS](https://huggingface.co/papers/2409.15371)) is a novel PEFT method that adopts a low-rank structure, requires only a single trainable matrix, and introduces a new update mechanism distinct from LoRA, achieving an excellent balance between performance and efficiency.

The abstract from the paper is:

*Parameter-Efficient Fine-Tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), effectively reduce the number of trainable parameters in Large Language Models (LLMs). However, as model scales continue to grow, the demand for computational resources remains a significant challenge. Existing LoRA variants often struggle to strike an optimal balance between adaptability (model performance and convergence speed) and efficiency (computational overhead, memory usage, and initialization time). This paper introduces MiSS (Matrix Shard Sharing), a novel PEFT approach that addresses this trade-off through a simple shard-sharing mechanism. MiSS leverages the insight that a low-rank adaptation can be achieved by decomposing the weight matrix into multiple fragment matrices and utilizing a shared, trainable common fragment. This method constructs the low-rank update matrix through the replication of these shared, partitioned shards. We also propose a hardware-efficient and broadly applicable implementation for MiSS. Extensive experiments conducted on a range of tasks, alongside a systematic analysis of computational performance, demonstrate MiSS's superiority. The results show that MiSS significantly outperforms standard LoRA and its prominent variants in both model performance metrics and computational efficiency, including initialization speed and training throughput. By effectively balancing expressive power and resource utilization, MiSS offers a compelling solution for efficiently adapting large-scale models*.


## MissConfig[[peft.MissConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.MissConfig</name><anchor>peft.MissConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/miss/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 64"}, {"name": "miss_dropout", "val": ": float = 0.0"}, {"name": "mini_r", "val": ": int = 1"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": bool | Literal['bat', 'mini'] = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[str] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  The rank of MiSS across different layers. It is best to set 'r' to an even number; otherwise, the default
  initialization method will not work. The rank of MiSS corresponds to a low-rank decomposition along the
  in_features dimension.
- **miss_dropout** (`float`) --
  The dropout probability for MiSS layers.
- **mini_r** (`int`) --
  The rank of MiSS corresponds to a low-rank decomposition along the out_features dimension. When you set
  `init_weights=mini`, you need to set `mini_r`. Please make sure that `out_features` is divisible by
  `mini_r`.
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear modules are chosen, excluding
  the output layer. If this is not specified, modules will be chosen according to the model architecture. If
  the architecture is not known, an error will be raised -- in this case, you should specify the target
  modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **init_weights** (`bool | Literal["bat", "mini"]`) --
  Different initializations correspond to different MiSS variants. By default (balance), the most efficient
  and general method in MiSS will be used. 'bat': in this mode, you can enable nonlinear updates across
  different shards. 'mini': in this mode, you can set a smaller rank to use fewer trainable parameters, but
  it is recommended to keep `out_features % mini_r == 0`.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`str`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`.
- **modules_to_save** (`List[str]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a `MiSSModel`.




</div>

## MissModel[[peft.MissModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.MissModel</name><anchor>peft.MissModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/miss/model.py#L24</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to which the adapter tuner layers will be attached.
- **config** ([MissConfig](/docs/peft/v0.18.0.rc0/en/package_reference/miss#peft.MissConfig)) -- The configuration of the MiSS model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The MiSS model.</retdesc></docstring>

Creates a MiSS (Matrix Shard Sharing) model from a pretrained model. The method is described in
https://huggingface.co/papers/2409.15371







<ExampleCodeBlock anchor="peft.MissModel.example">

Example:
```py
>>> from diffusers import StableDiffusionPipeline
>>> from peft import MissModel, MissConfig

>>> config_te = MissConfig(
...     r=8,
...     target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
...     init_weights=True,
... )
>>> config_unet = MissConfig(
...     r=8,
...     target_modules=[
...         "proj_in",
...         "proj_out",
...         "to_k",
...         "to_q",
...         "to_v",
...         "to_out.0",
...         "ff.net.0.proj",
...         "ff.net.2",
...     ],
...     init_weights=True,
... )

>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = MissModel(model.text_encoder, config_te, "default")
>>> model.unet = MissModel(model.unet, config_unet, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** (`~torch.nn.Module`) -- The model to be adapted.
- **peft_config** ([MissConfig](/docs/peft/v0.18.0.rc0/en/package_reference/miss#peft.MissConfig)): The configuration of the MiSS model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/miss.md" />

### Prompt tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/prompt_tuning.md

# Prompt tuning

[Prompt tuning](https://hf.co/papers/2104.08691) adds task-specific prompts to the input, and these prompt parameters are updated independently of the pretrained model parameters which are frozen.

The abstract from the paper is:

*In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's "few-shot" learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant in that large models are costly to share and serve, and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021), and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning*.

## PromptTuningConfig[[peft.PromptTuningConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PromptTuningConfig</name><anchor>peft.PromptTuningConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/prompt_tuning/config.py#L30</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "num_virtual_tokens", "val": ": int = None"}, {"name": "token_dim", "val": ": int = None"}, {"name": "num_transformer_submodules", "val": ": Optional[int] = None"}, {"name": "num_attention_heads", "val": ": Optional[int] = None"}, {"name": "num_layers", "val": ": Optional[int] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "prompt_tuning_init", "val": ": typing.Union[peft.tuners.prompt_tuning.config.PromptTuningInit, str] = <PromptTuningInit.RANDOM: 'RANDOM'>"}, {"name": "prompt_tuning_init_text", "val": ": typing.Optional[str] = None"}, {"name": "tokenizer_name_or_path", "val": ": typing.Optional[str] = None"}, {"name": "tokenizer_kwargs", "val": ": typing.Optional[dict] = None"}]</parameters><paramsdesc>- **prompt_tuning_init** (Union[`PromptTuningInit`, `str`]) --
  The initialization of the prompt embedding. `TEXT` will initialize with your text. `SAMPLE_VOCAB` will
  initialize with randomly sampled tokens from the model's vocabulary. `RANDOM` will initialize with randomly
  sampled continuous, soft tokens (warning: sampled soft tokens may fall outside of embedding manifold)
- **prompt_tuning_init_text** (`str`, *optional*) --
  The text to initialize the prompt embedding. Only used if `prompt_tuning_init` is `TEXT`.
- **tokenizer_name_or_path** (`str`, *optional*) --
  The name or path of the tokenizer. Only used if `prompt_tuning_init` is `TEXT`.
- **tokenizer_kwargs** (`dict`, *optional*) --
  The keyword arguments to pass to `AutoTokenizer.from_pretrained`. Only used if `prompt_tuning_init` is
  `TEXT`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [PromptEmbedding](/docs/peft/v0.18.0.rc0/en/package_reference/prompt_tuning#peft.PromptEmbedding).




</div>

## PromptEmbedding[[peft.PromptEmbedding]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PromptEmbedding</name><anchor>peft.PromptEmbedding</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/prompt_tuning/model.py#L24</source><parameters>[{"name": "config", "val": ""}, {"name": "word_embeddings", "val": ""}]</parameters><paramsdesc>- **config** ([PromptTuningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/prompt_tuning#peft.PromptTuningConfig)) -- The configuration of the prompt embedding.
- **word_embeddings** (`torch.nn.Module`) -- The word embeddings of the base transformer model.</paramsdesc><paramgroups>0</paramgroups></docstring>

The model to encode virtual tokens into prompt embeddings.



**Attributes**:
- **embedding** (`torch.nn.Embedding`) -- The embedding layer of the prompt embedding.

<ExampleCodeBlock anchor="peft.PromptEmbedding.example">

Example:

```py
>>> from peft import PromptEmbedding, PromptTuningConfig

>>> config = PromptTuningConfig(
...     peft_type="PROMPT_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     prompt_tuning_init="TEXT",
...     prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral",
...     tokenizer_name_or_path="t5-base",
... )

>>> # t5_model.shared is the word embeddings of the base model
>>> prompt_embedding = PromptEmbedding(config, t5_model.shared)
```

</ExampleCodeBlock>

Input Shape: (`batch_size`, `total_virtual_tokens`)

Output Shape: (`batch_size`, `total_virtual_tokens`, `token_dim`)


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/prompt_tuning.md" />

### LoHa
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/loha.md

# LoHa

Low-Rank Hadamard Product ([LoHa](https://huggingface.co/papers/2108.06098)) is similar to LoRA except it approximates the large weight matrix with more low-rank matrices and combines them with the Hadamard product. This method is even more parameter-efficient than LoRA and achieves comparable performance.

The abstract from the paper is:

*In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens on frequent model uploads and downloads. Our method re-parameterizes weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints, and thereby it has a far larger capacity. This property enables to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by the traditional low-rank methods. The efficiency of our method can be further improved by combining with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters*.

## LoHaConfig[[peft.LoHaConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LoHaConfig</name><anchor>peft.LoHaConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/loha/config.py#L24</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "rank_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "alpha_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "r", "val": ": int = 8"}, {"name": "alpha", "val": ": int = 8"}, {"name": "rank_dropout", "val": ": float = 0.0"}, {"name": "module_dropout", "val": ": float = 0.0"}, {"name": "use_effective_conv2d", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  LoHa rank.
- **alpha** (`int`) --
  The alpha parameter for LoHa scaling.
- **rank_dropout** (`float`) --
  The dropout probability for rank dimension during training.
- **module_dropout** (`float`) --
  The dropout probability for disabling LoHa modules during training.
- **use_effective_conv2d** (`bool`) --
  Use parameter effective decomposition for Conv2d (and Conv1d) with ksize > 1 ("Proposition 3" from FedPara
  paper).
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen,
  excluding the output layer. If this is not specified, modules will be chosen according to the model
  architecture. If the architecture is not known, an error will be raised -- in this case, you should specify
  the target modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **init_weights** (`bool`) --
  Whether to perform initialization of adapter weights. This defaults to `True`, passing `False` is
  discouraged.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
- **rank_pattern** (`dict`) --
  The mapping from layer names or regexp expression to ranks which are different from the default rank
  specified by `r`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **alpha_pattern** (`dict`) --
  The mapping from layer names or regexp expression to alphas which are different from the default alpha
  specified by `alpha`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **modules_to_save** (`Optional[List[str]]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [LoHaModel](/docs/peft/v0.18.0.rc0/en/package_reference/loha#peft.LoHaModel).




</div>

## LoHaModel[[peft.LoHaModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LoHaModel</name><anchor>peft.LoHaModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/loha/model.py#L27</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to which the adapter tuner layers will be attached.
- **config** ([LoHaConfig](/docs/peft/v0.18.0.rc0/en/package_reference/loha#peft.LoHaConfig)) -- The configuration of the LoHa model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The LoHa model.</retdesc></docstring>

Creates a Low-Rank Hadamard Product model from a pretrained model. The method is partially described in
https://huggingface.co/papers/2108.06098. The current implementation heavily borrows from
https://github.com/KohakuBlueleaf/LyCORIS/blob/eb460098187f752a5d66406d3affade6f0a07ece/lycoris/modules/loha.py







<ExampleCodeBlock anchor="peft.LoHaModel.example">

Example:
```py
>>> from diffusers import StableDiffusionPipeline
>>> from peft import LoHaModel, LoHaConfig

>>> config_te = LoHaConfig(
...     r=8,
...     alpha=32,
...     target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
...     rank_dropout=0.0,
...     module_dropout=0.0,
...     init_weights=True,
... )
>>> config_unet = LoHaConfig(
...     r=8,
...     alpha=32,
...     target_modules=[
...         "proj_in",
...         "proj_out",
...         "to_k",
...         "to_q",
...         "to_v",
...         "to_out.0",
...         "ff.net.0.proj",
...         "ff.net.2",
...     ],
...     rank_dropout=0.0,
...     module_dropout=0.0,
...     init_weights=True,
...     use_effective_conv2d=True,
... )

>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = LoHaModel(model.text_encoder, config_te, "default")
>>> model.unet = LoHaModel(model.unet, config_unet, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** (`~torch.nn.Module`) -- The model to be adapted.
- **peft_config** ([LoHaConfig](/docs/peft/v0.18.0.rc0/en/package_reference/loha#peft.LoHaConfig)): The configuration of the LoHa model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/loha.md" />

### WaveFT: Wavelet Fine-Tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/waveft.md

# WaveFT: Wavelet Fine-Tuning

[WaveFT](https://arxiv.org/abs/2505.12532) is a novel parameter-efficient fine-tuning (PEFT) method that introduces sparse updates in the **wavelet domain** of residual matrices. Unlike LoRA, which is constrained by discrete low-rank choices, WaveFT enables fine-grained control over the number of trainable parameters by directly learning a sparse set of coefficients in the transformed space. These coefficients are then mapped back to the weight domain via the Inverse Discrete Wavelet Transform (IDWT), producing high-rank updates without incurring inference overhead.

WaveFT currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

>Efficiently adapting large foundation models is critical, especially with tight compute and memory budgets. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA offer limited granularity and effectiveness in few-parameter regimes. We propose Wavelet Fine-Tuning (WaveFT), a novel PEFT method that learns highly sparse updates in the wavelet domain of residual matrices. WaveFT allows precise control of trainable parameters, offering fine-grained capacity adjustment and excelling with remarkably low parameter count, potentially far fewer than LoRA’s minimum—ideal for extreme parameter-efficient scenarios. Evaluated on personalized text-to-image generation using Stable Diffusion XL as baseline, WaveFT significantly outperforms LoRA and other PEFT methods, especially at low parameter counts; achieving superior subject fidelity, prompt alignment, and image diversity.

## WaveFTConfig[[peft.WaveFTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.WaveFTConfig</name><anchor>peft.WaveFTConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/waveft/config.py#L27</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "n_frequency", "val": ": int = 2592"}, {"name": "scaling", "val": ": float = 25.0"}, {"name": "wavelet_family", "val": ": str = 'db1'"}, {"name": "use_idwt", "val": ": bool = True"}, {"name": "random_loc_seed", "val": ": int = 777"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "n_frequency_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "proportional_parameters", "val": ": bool = False"}, {"name": "init_weights", "val": ": bool = True"}]</parameters><paramsdesc>- **n_frequency** (`int`) --
  Number of learnable wavelet coefficients for the Discrete Wavelet Transform (DWT). 'n_frequency' is an
  integer that is greater than 0 and less than or equal to the total number of elements in the original
  weight matrix (d_out * d_in). This parameter directly controls the number of trainable parameters for each
  adapted layer. A higher 'n_frequency' generally leads to better performance but also increases GPU memory
  usage, with a minor impact on training speed.
- **scaling** (`float`) --
  The scaling factor applied to the reconstructed delta W matrix. This is a crucial hyperparameter, analogous
  to `lora_alpha` in LoRA. It can be tuned during hyperparameter search. Our default value for SDXL
  personalization is 25.
- **wavelet_family** (`str`) --
  The wavelet family (e.g., 'db1', 'sym2', 'coif1') to use for the DWT and Inverse DWT (IDWT). Defaults to
  'db1' (Haar wavelet). Different wavelet families have varying filter lengths, which affect the training
  time substantially.
- **use_idwt** (`bool`) --
  Whether to use the Inverse Discrete Wavelet Transform (IDWT) to reconstruct the delta weights from the
  learned wavelet coefficients. If `True` (default), the IDWT is applied. If `False`, the learned
  coefficients are directly used to form a sparse delta weight matrix, which is faster but performs worse
  for the SDXL personalization task.
- **random_loc_seed** (`int`) --
  Seed for determining the random locations of the `n_frequency` learnable wavelet coefficients within the
  full wavelet coefficient matrix.
- **target_modules** (`Union[list[str],str]`) --
  List of module names or a regex expression identifying the modules to be adapted with WaveFT. For example,
  `['q_proj', 'v_proj']` or `'.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$'`. Currently, only linear
  layers (`torch.nn.Linear`) are supported.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  List of module names or a regex expression for modules to exclude from WaveFT adaptation.
- **fan_in_fan_out** (`bool`) --
  Set to `True` if the weights of the layer to be replaced are stored in `(fan_in, fan_out)` format. Default
  is `False`.
- **bias** (`str`) --
  Bias type for WaveFT. Can be 'none', 'all', or 'waveft_only'. If 'waveft_only', biases are added only to
  the WaveFT components. If 'all', biases are added to both base and WaveFT components. If 'none', no new
  biases are added.
- **modules_to_save** (`list[str]`) --
  List of modules, in addition to WaveFT layers, that should be marked as trainable and saved in the final
  checkpoint. Useful for layers like classifiers in sequence or token classification tasks that are randomly
  initialized and need training.
- **layers_to_transform** (`Union[list[int],int]`) --
  Specific layer indices to transform. If provided, PEFT will only adapt layers at these indices. If a single
  integer is given, only that layer is transformed.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  Pattern for layer names, used if `layers_to_transform` is specified and the layer pattern is not standard
  (e.g., not 'layers' or 'h'). This should target the `nn.ModuleList` attribute in the model.
- **n_frequency_pattern** (`dict`) --
  A dictionary mapping layer names (or regex) to specific `n_frequency` values, overriding the global
  `n_frequency`. Example: `{"model.decoder.layers.0.encoder_attn.k_proj": 1000}`.
- **init_weights** (`bool`) --
  Initialization strategy for the learnable wavelet coefficients (spectrum). If `True` (default),
  coefficients are initialized to zeros. If `False`, coefficients are initialized from a standard normal
  distribution scaled by a small factor.
- **proportional_parameters** (`bool`) --
  If `True`, `n_frequency` is allocated proportionally to each layer's `input_dim * output_dim`. Default is
  `False`. Note: This option is included for experimental thoroughness to allow researchers to reproduce
  paper results, rather than for practical utility, as no beneficial scenarios have been identified.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [WaveFTModel](/docs/peft/v0.18.0.rc0/en/package_reference/waveft#peft.WaveFTModel). It is used to define the
parameters for Wavelet-based Fine-Tuning (WaveFT), an approach that leverages the sparsity of wavelet transforms
for parameter-efficient fine-tuning of pretrained models.




</div>
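
A minimal configuration sketch; the checkpoint and target modules are illustrative (remember that only `nn.Linear` layers are supported):

```python
from transformers import AutoModelForCausalLM
from peft import WaveFTConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # illustrative checkpoint

config = WaveFTConfig(
    n_frequency=2592,                     # number of trainable wavelet coefficients per layer
    scaling=25.0,                         # analogous to lora_alpha in LoRA
    wavelet_family="db1",                 # Haar wavelet (the default)
    target_modules=["q_proj", "v_proj"],  # illustrative; must be nn.Linear modules
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```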

## WaveFTModel[[peft.WaveFTModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.WaveFTModel</name><anchor>peft.WaveFTModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/waveft/model.py#L30</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/waveft.md" />

### AutoPeftModels
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/auto_class.md

# AutoPeftModels

The `AutoPeftModel` classes load the appropriate PEFT model for the task type by automatically inferring it from the configuration file. They are designed to quickly and easily load a PEFT model in a single line of code without having to worry about which exact model class you need or manually loading a [PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig).

## AutoPeftModel[[peft.AutoPeftModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModel</name><anchor>peft.AutoPeftModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L152</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>peft.AutoPeftModel.from_pretrained</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L67</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ""}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "is_trainable", "val": ": bool = False"}, {"name": "config", "val": ": Optional[PeftConfig] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

A wrapper around all the preprocessing steps a user needs to perform in order to load a PEFT model. The kwargs
are passed along to `PeftConfig`, which automatically takes care of filtering the kwargs of the Hub methods and
the config object init.


</div></div>

## AutoPeftModelForCausalLM[[peft.AutoPeftModelForCausalLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModelForCausalLM</name><anchor>peft.AutoPeftModelForCausalLM</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L157</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## AutoPeftModelForSeq2SeqLM[[peft.AutoPeftModelForSeq2SeqLM]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModelForSeq2SeqLM</name><anchor>peft.AutoPeftModelForSeq2SeqLM</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L162</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## AutoPeftModelForSequenceClassification[[peft.AutoPeftModelForSequenceClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModelForSequenceClassification</name><anchor>peft.AutoPeftModelForSequenceClassification</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L167</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## AutoPeftModelForTokenClassification[[peft.AutoPeftModelForTokenClassification]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModelForTokenClassification</name><anchor>peft.AutoPeftModelForTokenClassification</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L172</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## AutoPeftModelForQuestionAnswering[[peft.AutoPeftModelForQuestionAnswering]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModelForQuestionAnswering</name><anchor>peft.AutoPeftModelForQuestionAnswering</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L177</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## AutoPeftModelForFeatureExtraction[[peft.AutoPeftModelForFeatureExtraction]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AutoPeftModelForFeatureExtraction</name><anchor>peft.AutoPeftModelForFeatureExtraction</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/auto.py#L182</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/auto_class.md" />

### Prefix tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/prefix_tuning.md

# Prefix tuning

[Prefix tuning](https://hf.co/papers/2101.00190) prefixes a series of task-specific vectors to the input sequence that can be learned while keeping the pretrained model frozen. The prefix parameters are inserted in all of the model layers.
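
A minimal sketch of applying prefix tuning via `get_peft_model` (the base model here is an arbitrary choice):

```py
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Prepend 20 learnable virtual tokens to the input; the base model stays frozen.
config = PrefixTuningConfig(task_type="SEQ_2_SEQ_LM", num_virtual_tokens=20)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```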

The abstract from the paper is:

*Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training*.

## PrefixTuningConfig[[peft.PrefixTuningConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PrefixTuningConfig</name><anchor>peft.PrefixTuningConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/prefix_tuning/config.py#L22</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "num_virtual_tokens", "val": ": int = None"}, {"name": "token_dim", "val": ": int = None"}, {"name": "num_transformer_submodules", "val": ": Optional[int] = None"}, {"name": "num_attention_heads", "val": ": Optional[int] = None"}, {"name": "num_layers", "val": ": Optional[int] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "encoder_hidden_size", "val": ": int = None"}, {"name": "prefix_projection", "val": ": bool = False"}]</parameters><paramsdesc>- **encoder_hidden_size** (`int`) -- The hidden size of the prompt encoder.
- **prefix_projection** (`bool`) -- Whether to project the prefix embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [PrefixEncoder](/docs/peft/v0.18.0.rc0/en/package_reference/prefix_tuning#peft.PrefixEncoder).




</div>

## PrefixEncoder[[peft.PrefixEncoder]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PrefixEncoder</name><anchor>peft.PrefixEncoder</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/prefix_tuning/model.py#L20</source><parameters>[{"name": "config", "val": ""}]</parameters><paramsdesc>- **config** ([PrefixTuningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/prefix_tuning#peft.PrefixTuningConfig)) -- The configuration of the prefix encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

The `torch.nn` model to encode the prefix.



<ExampleCodeBlock anchor="peft.PrefixEncoder.example">

Example:

```py
>>> from peft import PrefixEncoder, PrefixTuningConfig

>>> config = PrefixTuningConfig(
...     peft_type="PREFIX_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_hidden_size=768,
... )
>>> prefix_encoder = PrefixEncoder(config)
```

</ExampleCodeBlock>

**Attributes**:
- **embedding** (`torch.nn.Embedding`) -- The embedding layer of the prefix encoder.
- **transform** (`torch.nn.Sequential`) -- The two-layer MLP to transform the prefix embeddings if
  `prefix_projection` is `True`.
- **prefix_projection** (`bool`) -- Whether to project the prefix embeddings.

Input shape: (`batch_size`, `num_virtual_tokens`)

Output shape: (`batch_size`, `num_virtual_tokens`, `2*layers*hidden`)


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/prefix_tuning.md" />

### DeLoRA: Decoupled Low-rank Adaptation
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/delora.md

# DeLoRA: Decoupled Low-rank Adaptation
[DeLoRA](https://huggingface.co/papers/2503.18225) is a parameter-efficient fine-tuning technique that implicitly maintains a Frobenius boundary with respect to the pretrained weights by normalizing and scaling learnable low-rank matrices. This effectively decouples the learning of directions (BA term) and magnitude (boundary term) of the weight updates, avoiding catastrophic shifts in the adapted weights and enhancing robustness to hyperparameter choices.

Note:
- Use a learning rate 10-100x larger than for standard LoRA variants (typical values range from 1e-3 to 1e-2).
- Ensure the initial boundary parameter lambda is not too small (typical values are around 10 to 15). Setting different lambdas for different layers is possible via `lambda_pattern`.

DeLoRA currently has the following constraints:
- Only nn.Linear layers are supported.
- Quantized layers are not supported.

If these constraints don't work for your use case, consider other methods instead.
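
A minimal sketch that follows the notes above (the base model and `target_modules` are placeholder choices):

```py
import torch
from transformers import AutoModelForCausalLM
from peft import DeloraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = DeloraConfig(
    r=8,
    delora_lambda=15,  # initial Frobenius boundary; avoid values that are too small
    target_modules=["q_proj", "v_proj"],  # only nn.Linear layers are supported
)
model = get_peft_model(base_model, config)

# As noted above, use a learning rate 10-100x larger than for standard LoRA variants.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
```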

The abstract from the paper is:

> Parameter-Efficient FineTuning (PEFT) methods have recently gained significant popularity thanks to the widespread availability of large-scale pretrained models. These methods allow for quick adaptation to downstream tasks with minimal computational cost. However, popular finetuning methods such as LoRA exhibit limited robustness when it comes to hyperparameter choices or extended training regimes, preventing optimal out-of-the-box performance. In contrast, bounded approaches, such as ETHER, provide greater robustness but are limited to extremely low-rank adaptations and fixed-strength transformations, reducing their adaptation expressive power. In this work, we propose Decoupled Low-rank Adaptation (DeLoRA), a novel finetuning method that normalizes and scales learnable low-rank matrices. By bounding the distance of the transformation, DeLoRA effectively decouples the angular learning from the adaptation strength, enhancing robustness without compromising performance. Through evaluations on subject-driven image generation, natural language understanding, and instruction tuning, we show that DeLoRA matches or surpasses performance of competing PEFT methods, while exhibiting stronger robustness. 

## DeloraConfig[[peft.DeloraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.DeloraConfig</name><anchor>peft.DeloraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/delora/config.py#L24</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 8"}, {"name": "delora_lambda", "val": ": int = 15"}, {"name": "module_dropout", "val": ": float = 0.0"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "rank_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "lambda_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  The rank of the DeLoRA adapter.
- **delora_lambda** (`int`) --
  The initial value of the boundary of the DeLoRA adapter. This sets an upper bound on the Frobenius norm of
  the weight change, preventing the finetuned model from deviating too much from the original model.
- **module_dropout** (`float`) --
  The dropout probability for disabling DeLoRA modules during training.
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen,
  excluding the output layer. If this is not specified, modules will be chosen according to the model
  architecture. If the architecture is not known, an error will be raised -- in this case, you should specify
  the target modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **bias** (`str`) --
  Bias type for DeLoRA. Can be 'none', 'all' or 'delora_only'. If 'all' or 'delora_only', the corresponding
  biases will be updated during training. Be aware that this means that, even when disabling the adapters,
  the model will not produce the same output as the base model would have without adaptation.
- **init_weights** (`bool`) --
  Whether to perform initialization of adapter weights. If `True` (default): A is initialized with kaiming
  uniform initialization, while B is initialized with zeros. If `False`: A and B are both initialized with
  kaiming uniform, immediately contributing a non-zero delta. This is generally discouraged for normal use.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
- **rank_pattern** (`dict`) --
  The mapping from layer names or regexp expression to ranks which are different from the default rank
  specified by `r`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **lambda_pattern** (`dict`) --
  The mapping from layer names or regexp expression to lambdas which are different from the default lambda
  specified by `delora_lambda`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **modules_to_save** (`Optional[List[str]]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [DeloraModel](/docs/peft/v0.18.0.rc0/en/package_reference/delora#peft.DeloraModel).




</div>

## DeloraModel[[peft.DeloraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.DeloraModel</name><anchor>peft.DeloraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/delora/model.py#L28</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([DeloraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/delora#peft.DeloraConfig)) -- The configuration of the DeLoRA model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The DeLoRA model.</retdesc></docstring>

Creates DeLoRA model from a pretrained transformers model.

The method is described in detail in the [DeLoRA paper](https://huggingface.co/papers/2503.18225).







**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([DeloraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/delora#peft.DeloraConfig)): The configuration of the DeLoRA model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/delora.md" />

### RoAd
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/road.md

# RoAd

[RoAd](https://arxiv.org/pdf/2409.00119) is a parameter‑efficient fine‑tuning technique that adapts large language models by learning a small set of 2×2 rotation matrices (and optional scaling factors) applied to pairs of hidden dimensions. RoAd achieves competitive or superior performance compared to other PEFT methods with under 0.1% trainable parameters. Unlike LoRA’s batched low‑rank updates, RoAd’s sparse rotations reduce to simple element‑wise operations, yielding significantly higher serving throughput when handling heterogeneous requests in the same batch, i.e. serving multiple adapters simultaneously. Moreover, RoAd integrates seamlessly into a distributed interchange intervention framework, interpreting its sparse 2D rotations as task-specific interventions within learned subspaces of hidden representations. These orthogonal subspaces can be composed to merge multiple task-specific behaviors, like multilingual capabilities or instruction following, without additional fine-tuning, enabling modular, interpretable adaptations in LLMs.

Finetuning with RoAd typically requires a higher learning rate than LoRA or similar methods, around 1e-3. Currently RoAd only supports linear layers, and it can be used on models quantized with bitsandbytes (4-bit or 8-bit).

For running inference with different RoAd adapters in the same batch see [Inference with different LoRA adapters in the same batch](../developer_guides/lora#inference-with-different-lora-adapters-in-the-same-batch).
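
A minimal sketch of a RoAd setup (the base model and `target_modules` are placeholder choices):

```py
from transformers import AutoModelForCausalLM
from peft import RoadConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = RoadConfig(
    variant="road_1",  # the lowest-parameter variant
    group_size=64,     # the hidden size must be divisible by this value
    target_modules=["q_proj", "v_proj"],  # only linear layers are supported
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```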

## RoadConfig[[peft.RoadConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.RoadConfig</name><anchor>peft.RoadConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/road/config.py#L28</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "variant", "val": ": Union[str, RoadVariant] = 'road_1'"}, {"name": "group_size", "val": ": int = 64"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **variant** (Union[`RoadVariant`, `str`]) --
  The variant of the Road model to use. It can be one of road_1, road_2, or road_4. Refer to the paper for
  more details.
  - road_1: Uses the same scale and angle for both elements of a pair.
  This variant has the lowest number of parameters; it stores a number of parameters equal to the output
  hidden size for each layer that RoAd is applied to.
  - road_2: Uses a different scale and angle for each element of a pair.
  This variant has 2x the number of parameters compared to road_1.
  - road_4: Uses two different scales and angles for each element of a pair.
  This variant has 4x the number of parameters compared to road_1.
- **group_size** (`int`) --
  Group size defines how elements are grouped together into 2D vectors for rotation. Within each group,
  element 0 is paired with element group_size/2, element 1 with element group_size/2+1, and so on. This has no
  effect on model performance, since elements are unordered, but it has some effect on inference speed when
  used in e.g. vLLM. For best speed, a group size of at least 32 or 64 (the default) is recommended. Note that
  the model hidden size (or hidden size per partition when used with tensor parallelism) must be divisible by
  group_size, so for very small models you might need to reduce this parameter.
- **init_weights** (`bool`) --
  Whether to perform initialization of RoAd weights.
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen (if
  the model is a PreTrainedModel, the output layer excluded). If this is not specified, modules will be
  chosen according to the model architecture. If the architecture is not known, an error will be raised -- in
  this case, you should specify the target modules manually.
- **modules_to_save** (`List[str]`) --
  List of modules apart from Road layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [RoadModel](/docs/peft/v0.18.0.rc0/en/package_reference/road#peft.RoadModel). The RoAd adapter is proposed in
https://arxiv.org/pdf/2409.00119.




</div>

## RoadModel[[peft.RoadModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.RoadModel</name><anchor>peft.RoadModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/road/model.py#L38</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/road.md" />

### OFT
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/oft.md

# OFT

[Orthogonal Finetuning (OFT)](https://hf.co/papers/2306.07280) is a method developed for adapting text-to-image diffusion models. It works by reparameterizing the pretrained weight matrices with an orthogonal matrix to preserve information in the pretrained model. To reduce the number of parameters, OFT introduces a block-diagonal structure in the orthogonal matrix.

The abstract from the paper is:

*Large text-to-image diffusion models have impressive capabilities in generating photorealistic images from text prompts. How to effectively guide or control these powerful models to perform different downstream tasks becomes an important open problem. To tackle this challenge, we introduce a principled finetuning method -- Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks. Unlike existing methods, OFT can provably preserve hyperspherical energy which characterizes the pairwise neuron relationship on the unit hypersphere. We find that this property is crucial for preserving the semantic generation ability of text-to-image diffusion models. To improve finetuning stability, we further propose Constrained Orthogonal Finetuning (COFT) which imposes an additional radius constraint to the hypersphere. Specifically, we consider two important finetuning text-to-image tasks: subject-driven generation where the goal is to generate subject-specific images given a few images of a subject and a text prompt, and controllable generation where the goal is to enable the model to take in additional control signals. We empirically show that our OFT framework outperforms existing methods in generation quality and convergence speed*.

## OFTConfig[[peft.OFTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.OFTConfig</name><anchor>peft.OFTConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/oft/config.py#L28</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 0"}, {"name": "oft_block_size", "val": ": int = 32"}, {"name": "module_dropout", "val": ": float = 0.0"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "bias", "val": ": Literal['none', 'all', 'oft_only'] = 'none'"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "coft", "val": ": bool = False"}, {"name": "eps", "val": ": float = 6e-05"}, {"name": "block_share", "val": ": bool = False"}, {"name": "use_cayley_neumann", "val": ": bool = True"}, {"name": "num_cayley_neumann_terms", "val": ": int = 5"}]</parameters><paramsdesc>- **r** (`int`) -- OFT rank, number of OFT blocks per injected layer.
- **oft_block_size** (`int`) -- OFT block size across different layers.
- **module_dropout** (`float`) --
  The multiplicative dropout probability, by setting OFT blocks to identity during training, similar to the
  dropout layer in LoRA.
- **target_modules** (`Optional[Union[list[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear modules are chosen, excluding
  the output layer. If this is not specified, modules will be chosen according to the model architecture. If
  the architecture is not known, an error will be raised -- in this case, you should specify the target
  modules manually.
- **fan_in_fan_out** (`bool`) -- Set this to True if the layer to replace stores weight like (fan_in, fan_out).
- **bias** (`str`) -- Bias type for OFT. Can be 'none', 'all' or 'oft_only'. If 'all' or 'oft_only', the
  corresponding biases will be updated during training. Be aware that this means that, even when disabling
  the adapters, the model will not produce the same output as the base model would have without adaptation.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **init_weights** (`bool`) --
  Whether to perform initialization of OFT weights.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
- **modules_to_save** (`List[str]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.
- **coft** (`bool`) --
  Whether to use the constrained variant of OFT or not, off by default.
- **eps** (`float`) --
  The control strength of COFT. The freedom of rotation. Only has an effect if `coft` is set to True.
- **block_share** (`bool`) --
  Whether to share the OFT parameters between blocks or not. This is `False` by default.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [OFTModel](/docs/peft/v0.18.0.rc0/en/package_reference/oft#peft.OFTModel).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>check_kwargs</name><anchor>peft.OFTConfig.check_kwargs</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/oft/config.py#L184</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments passed along to the child class initialization.</paramsdesc><paramgroups>0</paramgroups></docstring>

Check if the kwargs are valid for the configuration.




</div></div>

## OFTModel[[peft.OFTModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.OFTModel</name><anchor>peft.OFTModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/oft/model.py#L34</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to which the adapter tuner layers will be attached.
- **config** ([OFTConfig](/docs/peft/v0.18.0.rc0/en/package_reference/oft#peft.OFTConfig)) -- The configuration of the OFT model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The OFT model.</retdesc></docstring>

Creates Orthogonal Finetuning model from a pretrained model. The method is described in
https://huggingface.co/papers/2306.07280







<ExampleCodeBlock anchor="peft.OFTModel.example">

Example:
```py
>>> from diffusers import StableDiffusionPipeline
>>> from peft import OFTModel, OFTConfig

>>> config_te = OFTConfig(
...     r=8,
...     target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
...     module_dropout=0.0,
...     init_weights=True,
... )
>>> config_unet = OFTConfig(
...     r=8,
...     target_modules=[
...         "proj_in",
...         "proj_out",
...         "to_k",
...         "to_q",
...         "to_v",
...         "to_out.0",
...         "ff.net.0.proj",
...         "ff.net.2",
...     ],
...     module_dropout=0.0,
...     init_weights=True,
... )

>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = OFTModel(model.text_encoder, config_te, "default")
>>> model.unet = OFTModel(model.unet, config_unet, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** (`~torch.nn.Module`) -- The model to be adapted.
- **peft_config** ([OFTConfig](/docs/peft/v0.18.0.rc0/en/package_reference/oft#peft.OFTConfig)): The configuration of the OFT model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/oft.md" />

### Trainable Tokens
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/trainable_tokens.md

# Trainable Tokens

The Trainable Tokens method provides a way to target specific token embeddings for fine-tuning without resorting to
training the full embedding matrix or using an adapter on the embedding matrix. It is based on the initial implementation from
[here](https://github.com/huggingface/peft/pull/1541).

The method only targets specific tokens and selectively trains the token indices you specify. Consequently, the
required RAM will be lower, and the disk space needed is significantly smaller than for storing the full fine-tuned embedding matrix.

Some preliminary benchmarks acquired with [this script](https://github.com/huggingface/peft/blob/main/scripts/train_memory.py)
suggest that for `gemma-2-2b` (which has a rather large embedding matrix) you can save ~4 GiB VRAM with Trainable Tokens
over fully fine-tuning the embedding matrix. While LoRA will use comparable amounts of VRAM, it might also modify
tokens that you don't want changed. Note that these are just indications, and varying embedding matrix sizes might skew
these numbers a bit.

Note that this method does not add tokens for you; you have to add tokens to the tokenizer yourself and resize the
embedding matrix of the model accordingly. This method will only re-train the embeddings for the tokens you specify.
It can also be used in conjunction with LoRA layers! See [the LoRA developer guide](../developer_guides/lora#efficiently-train-tokens-alongside-lora).
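
A minimal sketch of the workflow described above (the model, tokenizer, and added tokens are placeholder choices):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import TrainableTokensConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Add the new tokens and resize the embedding matrix yourself.
tokenizer.add_tokens(["<|special-1|>", "<|special-2|>"])
model.resize_token_embeddings(len(tokenizer))

# Only the embeddings at these indices will be trained.
token_indices = tokenizer.convert_tokens_to_ids(["<|special-1|>", "<|special-2|>"])
config = TrainableTokensConfig(token_indices=token_indices)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```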

> [!TIP]
> Saving the model with [save_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.save_pretrained) or retrieving the state dict using
> [get_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model_state_dict) when adding new tokens may save the full embedding matrix instead of only the difference
> as a precaution because the embedding matrix was resized. To save space you can disable this behavior by setting
> `save_embedding_layers=False` when calling `save_pretrained`. This is safe to do as long as you don't modify the
> embedding matrix through other means as well, since such changes will not be tracked by trainable tokens.

## TrainableTokensConfig[[peft.TrainableTokensConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.TrainableTokensConfig</name><anchor>peft.TrainableTokensConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/trainable_tokens/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "token_indices", "val": ": list[int] = <factory>"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": bool = True"}]</parameters><paramsdesc>- **token_indices** (`list[int]`) --
  List of integers, signifying the indices of the tokens you want to be trainable. To find the index of a
  token with a tokenizer, you can tokenize the string and look at the returned `input_ids`. The closer the
  number of indices is to the total number of tokens, the less efficient this method gets.
- **target_modules** (`Optional[Union[list[str], str]]`) --
  List of module names or regex expression of the module names to replace with our `TrainableTokensLayer`. If
  not defined, it will attempt to get the model's input embedding layer if the model has a
  `get_input_embeddings` method (transformer models usually do), if that fails the default is 'embed_tokens'.
  Other example targets are `embedding`, `encoder.embeddings` or `decoder.embeddings`.
- **init_weights** (`bool`) --
  By default the new token weights are initialized to be the same as the respective token embeddings. This
  makes TrainableTokens a no-op when not trained. If set to `False` the weights will be random values. Do not
  change this setting unless you know exactly what you're doing.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for the `TrainableTokens` method.

Allows for training new tokens (and re-training existing ones) without training the full embedding matrix. By
marking a few select tokens (identified by their indices) as trainable and leaving the rest untouched, this method
can be used to add new tokens or change the embeddings of existing tokens while saving on memory. Both storage and
working memory usage are reduced compared to training the full embedding matrix.

Note that training with FSDP/DeepSpeed might not yet be fully supported.




</div>

## TrainableTokensModel[[peft.TrainableTokensModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.TrainableTokensModel</name><anchor>peft.TrainableTokensModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/trainable_tokens/model.py#L26</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/trainable_tokens.md" />

### Bone
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/bone.md

# Bone

Dimension-Sharding Adaptation ([DiSHA](https://huggingface.co/papers/2409.15371)) expands the PEFT design space to unlock lower intrinsic ranks and faster convergence by default. Building on DiSHA, the authors propose Block-Affine Adaptation (Bone), a computationally efficient structure, and Block Affine Transformation Adaptation (BAT), a non-linear update method.


The abstract from the paper is:

Low-Rank Adaptation (LoRA) leverages the low intrinsic rank of weight updates in Large Language Models (LLMs), establishing a Parameter-Efficient Fine-Tuning (PEFT) paradigm. However, LoRA suffers from slow convergence. We introduce Dimension-Sharding Adaptation (DiSHA), which expands the PEFT design space to unlock lower intrinsic ranks and faster convergence by default. Within DiSHA's design space, we propose Block Affine Adaptation (Bone), a computationally efficient structure that delivers both high performance and efficiency. While certain DiSHA configurations may result in colinear updates to weight shards, we address this with Block Affine Transformation Adaptation (BAT), a nonlinear variant of DiSHA. BAT introduces nonlinearity by combining trainable matrices with original weight shards in a nonlinear manner, inducing nonlinearity in matrix updates without introducing additional parameters. Empirical results show that Bone, under the DiSHA framework, consistently outperforms LoRA variants in both NLG and NLU tasks, with significantly improved computational efficiency. Further analysis demonstrates that BAT enhances model capabilities by leveraging its nonlinear design.


## BoneConfig[[peft.BoneConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.BoneConfig</name><anchor>peft.BoneConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/bone/config.py#L26</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 64"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "init_weights", "val": ": bool | Literal['bat'] = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[str] = None"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  The rank of Bone across different layers. It is best to set 'r' to an even number; otherwise, the default
  initialization method will not work.
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear modules are chosen, excluding
  the output layer. If this is not specified, modules will be chosen according to the model architecture. If
  the architecture is not known, an error will be raised -- in this case, you should specify the target
  modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **init_weights** (bool | Literal["bat"]) --
  Different initializations correspond to different Bone variants. By default, setting True uses the Bone
  structure, while "bat" selects the Bat structure.
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`str`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`.
- **modules_to_save** (`List[str]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [BoneModel](/docs/peft/v0.18.0.rc0/en/package_reference/bone#peft.BoneModel).




</div>

## BoneModel[[peft.BoneModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.BoneModel</name><anchor>peft.BoneModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/bone/model.py#L24</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to which the adapter tuner layers will be attached.
- **config** ([BoneConfig](/docs/peft/v0.18.0.rc0/en/package_reference/bone#peft.BoneConfig)) -- The configuration of the Bone model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The Bone model.</retdesc></docstring>

Creates a Block-Affine Adaptation (Bone) model from a pretrained model. The method is described in
https://huggingface.co/papers/2409.15371







<ExampleCodeBlock anchor="peft.BoneModel.example">

Example:
```py
>>> from diffusers import StableDiffusionPipeline
>>> from peft import BoneModel, BoneConfig

>>> config_te = BoneConfig(
...     r=8,
...     target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
...     init_weights=True,
... )
>>> config_unet = BoneConfig(
...     r=8,
...     target_modules=[
...         "proj_in",
...         "proj_out",
...         "to_k",
...         "to_q",
...         "to_v",
...         "to_out.0",
...         "ff.net.0.proj",
...         "ff.net.2",
...     ],
...     init_weights=True,
... )

>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = BoneModel(model.text_encoder, config_te, "default")
>>> model.unet = BoneModel(model.unet, config_unet, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** (`~torch.nn.Module`) -- The model to be adapted.
- **peft_config** ([BoneConfig](/docs/peft/v0.18.0.rc0/en/package_reference/bone#peft.BoneConfig)): The configuration of the Bone model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/bone.md" />

### X-LoRA
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/xlora.md

# X-LoRA

Mixture of LoRA Experts ([X-LoRA](https://huggingface.co/papers/2402.07148)) is a PEFT method enabling sparse or dense mixture of LoRA experts based on a high-granularity (token, layer, sequence) scalings matrix. It leverages frozen LoRA adapters and a frozen base model to drastically reduce the number of parameters that need to be fine-tuned.

A unique aspect of X-LoRA is its versatility: it can be applied to any `transformers` base model with LoRA adapters. This means that, despite the mixture-of-experts strategy, no changes to the model code need to be made.

The below graphic demonstrates how the scalings change for different prompts for each token. This highlights the activation of different adapters as the generation progresses and the sequence creates new context.

![Token-by-token scalings](https://github.com/EricLBuehler/xlora/raw/master/res/token_by_token_scalings.gif)

The abstract from the paper is:

*We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoRA model to draw upon different capabilities and create never-before-used deep layer-wise combinations to solve tasks. The design is inspired by the biological principles of universality and diversity, where neural network building blocks are reused in different hierarchical manifestations. Hence, the X-LoRA model can be easily implemented for any existing large language model (LLM) without a need for modifications of the underlying structure. We develop a tailored X-LoRA model that offers scientific capabilities including forward/inverse analysis tasks and enhanced reasoning capability, focused on biomaterial analysis, protein mechanics and design. The impact of this work include access to readily expandable and adaptable models with strong domain knowledge and the capability to integrate across areas of knowledge. Featuring experts in biology, mathematics, reasoning, bio-inspired materials, mechanics and materials, chemistry, protein biophysics, mechanics and quantum-mechanics based molecular properties, we conduct a series of physics-focused case studies. We examine knowledge recall, protein mechanics forward/inverse tasks, protein design, adversarial agentic modeling including ontological knowledge graph construction, as well as molecular design. The model is capable not only of making quantitative predictions of nanomechanical properties of proteins or quantum mechanical molecular properties, but also reasons over the results and correctly predicts likely mechanisms that explain distinct molecular behaviors.*.

Please cite X-LoRA as:
```bibtex
@article{10.1063/5.0203126,
    author = {Buehler, Eric L. and Buehler, Markus J.},
    title = "{X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design}",
    journal = {APL Machine Learning},
    volume = {2},
    number = {2},
    pages = {026119},
    year = {2024},
    month = {05},
    abstract = "{We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoRA model to draw upon different capabilities and create never-before-used deep layer-wise combinations to solve tasks. The design is inspired by the biological principles of universality and diversity, where neural network building blocks are reused in different hierarchical manifestations. Hence, the X-LoRA model can be easily implemented for any existing large language model without a need for modifications of the underlying structure. We develop a tailored X-LoRA model that offers scientific capabilities, including forward/inverse analysis tasks and enhanced reasoning capability, focused on biomaterial analysis, protein mechanics, and design. The impact of this work includes access to readily expandable and adaptable models with strong domain knowledge and the capability to integrate across areas of knowledge. Featuring experts in biology, mathematics, reasoning, bio-inspired materials, mechanics and materials, chemistry, protein biophysics, mechanics, and quantum-mechanics based molecular properties, we conduct a series of physics-focused case studies. We examine knowledge recall, protein mechanics forward/inverse tasks, protein design, adversarial agentic modeling including ontological knowledge graph construction, and molecular design. The model is capable not only of making quantitative predictions of nanomechanical properties of proteins or quantum mechanical molecular properties but also reasoning over the results and correctly predicting likely mechanisms that explain distinct molecular behaviors.}",
    issn = {2770-9019},
    doi = {10.1063/5.0203126},
    url = {https://doi.org/10.1063/5.0203126},
    eprint = {https://pubs.aip.org/aip/aml/article-pdf/doi/10.1063/5.0203126/19964043/026119\_1\_5.0203126.pdf},
}
```

## XLoraConfig[[peft.XLoraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.XLoraConfig</name><anchor>peft.XLoraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "hidden_size", "val": ": int = None"}, {"name": "adapters", "val": ": dict[str, str] = None"}, {"name": "enable_softmax", "val": ": bool = True"}, {"name": "enable_softmax_topk", "val": ": bool = False"}, {"name": "layerwise_scalings", "val": ": bool = False"}, {"name": "xlora_depth", "val": ": int = 1"}, {"name": "xlora_size", "val": ": int = 2048"}, {"name": "xlora_dropout_p", "val": ": float = 0.2"}, {"name": "use_trainable_adapters", "val": ": bool = False"}, {"name": "softmax_temperature", "val": ": float = 1.0"}, {"name": "top_k_lora", "val": ": Optional[int] = None"}, {"name": "scaling_pass_value", "val": ": float = 0.0"}, {"name": "global_scaling_weight", "val": ": float = 1.0"}]</parameters><paramsdesc>- **hidden_size** (`int`) --
  Hidden size of the base model.
- **adapters** (`dict`) --
  Mapping of adapter names to the LoRA adapter id, as per PeftModel.load_adapter. *They will be automatically
  loaded* to be used as LoRA experts. When using from_pretrained, pass the new adapters dict as a keyword
  argument.
- **enable_softmax** (`bool`, *optional*, defaults to `True`) --
  Enable softmax application for the X-LoRA classifier.
- **enable_softmax_topk** (`bool`, *optional*, defaults to `False`) --
  Enable softmax application for the top-k LoRA adapters. Mutually exclusive to `enable_softmax` and must
  only be set if `top_k_lora` is.
- **softmax_temperature** (`float`, *optional*, defaults to 1.0) --
  Softmax temperature, lower yields sharper predictions
- **layerwise_scalings** (`bool`, *optional*, defaults to `False`) --
  If True, generate scalings for each LoRA adapter (each layer). If False, the same scalings are broadcast to
  every layer.
- **top_k_lora** (`int`, *optional*, defaults to None) --
  Sparsely select the top_k LoRA experts instead of the default dense method.
- **xlora_depth** (`int`, *optional*, defaults to 1) --
  Depth of the X-LoRA classifier.
- **xlora_size** (`int`, *optional*, defaults to 2048) --
  Hidden size of the X-LoRA classifier, irrelevant if `xlora_depth=1`.
- **xlora_dropout_p** (`float`, *optional*, defaults to 0.2) --
  Dropout probability of the X-LoRA classifier, irrelevant if `xlora_depth=1`.
- **use_trainable_adapters** (`bool`, *optional*, defaults to False) --
  Make the adapters trainable.
- **scaling_pass_value** (`float`, *optional*, defaults to 0) --
  Scaling pass value.
- **global_scaling_weight** (`float`, *optional*, defaults to 1) --
  Weight to multiply output of each LoRA adapter by.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a `XLoraModel`. When the config is reloaded, the
paths in the `adapters` field are disregarded in favor of the saved adapters. As such, only the keys matter during
loading.




</div>

## XLoraModel[[peft.XLoraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.XLoraModel</name><anchor>peft.XLoraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L156</source><parameters>[{"name": "model", "val": ": nn.Module"}, {"name": "config", "val": ": Union[dict[str, XLoraConfig], XLoraConfig]"}, {"name": "adapter_name", "val": ": str"}, {"name": "torch_device", "val": ": Optional[str] = None"}, {"name": "ephemeral_gpu_offload", "val": ": bool = False"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([XLoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/xlora#peft.XLoraConfig)) -- The configuration of the Lora model.
- **adapter_name** (`str`) -- The name of the adapter, does not affect the LoRA adapter names.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The X-LoRA model.</retdesc></docstring>

Creates an X-LoRA (Mixture of LoRA experts) model from a pretrained transformers model. Currently, this X-LoRA
implementation only works with models with a transformer architecture.

The method is described in detail in https://huggingface.co/papers/2402.07148.







<ExampleCodeBlock anchor="peft.XLoraModel.example">

Example:
```py
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoConfig, BitsAndBytesConfig
>>> from peft import XLoraConfig, get_peft_model, prepare_model_for_kbit_training

>>> model_config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
>>> config = XLoraConfig(
...     task_type="CAUSAL_LM",
...     hidden_size=model_config.hidden_size,
...     xlora_depth=4,
...     adapters={
...         "adapter_1": "./path/to/the/checkpoint/",
...         "adapter_2": "./path/to/the/checkpoint/",
...         "adapter_n": "./path/to/the/checkpoint/",
...     },
... )
>>> int8_config = BitsAndBytesConfig(load_in_8bit=True)
>>> model = AutoModelForCausalLM.from_pretrained(
...     "mistralai/Mistral-7B-Instruct-v0.1",
...     trust_remote_code=True,
...     attn_implementation="flash_attention_2",
...     device_map="cuda:0",
...     torch_dtype=torch.bfloat16,
...     quantization_config=int8_config,
... )
>>> model = prepare_model_for_kbit_training(model)
>>> xlora_model = get_peft_model(model, config)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>clear_scalings_log</name><anchor>peft.XLoraModel.clear_scalings_log</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L510</source><parameters>[]</parameters></docstring>

Clear the scalings log.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_scalings_logging</name><anchor>peft.XLoraModel.disable_scalings_logging</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L503</source><parameters>[]</parameters></docstring>

Disable scalings logging, without clearing the log.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_scalings_logging</name><anchor>peft.XLoraModel.enable_scalings_logging</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L496</source><parameters>[]</parameters></docstring>

Enable scalings logging.
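
A minimal sketch of the full logging workflow, assuming `xlora_model` is an X-LoRA model created with `get_peft_model` (as in the class example above) and `inputs` is an already tokenized batch:

```py
>>> xlora_model.enable_scalings_logging()
>>> outputs = xlora_model(**inputs)  # scalings are recorded during the forward pass

>>> latest = xlora_model.get_latest_scalings()  # (batch_size, seq_len, n_layers, n_classes), or None
>>> log = xlora_model.get_scalings_log()  # shallow copy of the scalings log

>>> xlora_model.disable_scalings_logging()  # stop recording without clearing the log
>>> xlora_model.clear_scalings_log()  # drop the recorded scalings
```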


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_bucketed_scalings_log</name><anchor>peft.XLoraModel.get_bucketed_scalings_log</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L517</source><parameters>[]</parameters></docstring>

Returns the scalings bucketed by seq_len. Each value is a pair of the positions (first element) and the
associated tensors (second element); the positions indicate where each tensor sits in the scalings log.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_global_scaling_weight</name><anchor>peft.XLoraModel.get_global_scaling_weight</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L473</source><parameters>[]</parameters></docstring>

Get the global LoRA weight.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_latest_scalings</name><anchor>peft.XLoraModel.get_latest_scalings</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L480</source><parameters>[]</parameters></docstring>

Returns the latest scalings prediction, or None if no scalings have been predicted. The tensor is of shape
(batch_size, seq_len, n_layers, n_classes).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_scalings_log</name><anchor>peft.XLoraModel.get_scalings_log</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L487</source><parameters>[]</parameters></docstring>

Returns a shallow copy (copying only the list itself, not the tensors) of the list containing the scalings log.
Editing the returned list does not change the underlying log. The tensors are of shape (batch_size, seq_len,
n_layers, n_classes); the seq_len dim may vary with the input.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_global_scaling_weight</name><anchor>peft.XLoraModel.set_global_scaling_weight</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L457</source><parameters>[{"name": "weight", "val": ": float"}]</parameters></docstring>

Set the global LoRA weight, a scalar to multiply the output of each LoRA adapter by. This is by default 1. This
is reflected in the config.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_scaling_pass_value</name><anchor>peft.XLoraModel.set_scaling_pass_value</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L465</source><parameters>[{"name": "value", "val": ": float | None"}]</parameters></docstring>

Set the scaling pass value, the value to set the scalings to during the scaling pass. If the value is None, the
scaling pass value will be 1/n where n is the number of adapters.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_topk_lora</name><anchor>peft.XLoraModel.set_topk_lora</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/xlora/model.py#L449</source><parameters>[{"name": "value", "val": ": Optional[int]"}]</parameters></docstring>

Sparsely select the specified top_k LoRA experts instead of the default dense method. Set to None to use dense.
This is reflected in the config.
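
A minimal sketch of adjusting these knobs at inference time, assuming `xlora_model` is an X-LoRA model with several adapters:

```py
>>> xlora_model.set_topk_lora(2)  # sparsely route to the top-2 LoRA experts
>>> xlora_model.set_global_scaling_weight(1.5)  # multiply every adapter output by 1.5
>>> xlora_model.set_scaling_pass_value(None)  # use 1/n_adapters during the scaling pass
>>> xlora_model.set_topk_lora(None)  # switch back to the default dense routing
```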


</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/xlora.md" />

### Llama-Adapter
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/llama_adapter.md

# Llama-Adapter

[Llama-Adapter](https://hf.co/papers/2303.16199) is a PEFT method specifically designed for turning Llama into an instruction-following model. The Llama model is frozen and only a set of adaptation prompts prefixed to the input instruction tokens is learned. Since randomly initialized modules inserted into the model can cause it to lose some of its existing knowledge, Llama-Adapter uses zero-initialized attention with zero gating to progressively add the instructional prompts to the model.

The abstract from the paper is:

*We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon the frozen LLaMA 7B model, and costs less than one hour for fine-tuning on 8 A100 GPUs. Specifically, we adopt a set of learnable adaption prompts, and prepend them to the input text tokens at higher transformer layers. Then, a zero-init attention mechanism with zero gating is proposed, which adaptively injects the new instructional cues into LLaMA, while effectively preserves its pre-trained knowledge. With efficient training, LLaMA-Adapter generates high-quality responses, comparable to Alpaca with fully fine-tuned 7B parameters. Furthermore, our approach can be simply extended to multi-modal input, e.g., images, for image-conditioned LLaMA, which achieves superior reasoning capacity on ScienceQA. We release our code at https://github.com/ZrrSkywalker/LLaMA-Adapter*.

## AdaptionPromptConfig[[peft.AdaptionPromptConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AdaptionPromptConfig</name><anchor>peft.AdaptionPromptConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adaption_prompt/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "target_modules", "val": ": str = None"}, {"name": "adapter_len", "val": ": int = None"}, {"name": "adapter_layers", "val": ": int = None"}]</parameters></docstring>
Stores the configuration of an [AdaptionPromptModel](/docs/peft/v0.18.0.rc0/en/package_reference/llama_adapter#peft.AdaptionPromptModel).
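
A minimal sketch of setting up Llama-Adapter with `get_peft_model`; the model id and the `adapter_len`/`adapter_layers` values are illustrative, not recommendations:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import AdaptionPromptConfig, get_peft_model

>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
>>> peft_config = AdaptionPromptConfig(task_type="CAUSAL_LM", adapter_len=10, adapter_layers=30)
>>> peft_model = get_peft_model(model, peft_config)
>>> peft_model.print_trainable_parameters()
```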

</div>

## AdaptionPromptModel[[peft.AdaptionPromptModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AdaptionPromptModel</name><anchor>peft.AdaptionPromptModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adaption_prompt/model.py#L25</source><parameters>[{"name": "model", "val": ""}, {"name": "configs", "val": ": dict"}, {"name": "adapter_name", "val": ": str"}]</parameters></docstring>

Implements adaption prompts as described in https://huggingface.co/papers/2303.16199.

The top L attention modules are replaced with AdaptedAttention modules that wrap the original ones, but insert
trainable prompts with gates (for zero init).

Notes on the multi-adapter pattern:
- We store the states of different adapters by keeping a dictionary of AdaptedAttention modules indexed by adapter
  name.
- Every time we switch adapters, we remove the modules of the currently active adapter from the model, store them
  in the dictionary, and replace them with the modules of the new adapter.
- To avoid duplicated and potentially inconsistent state, the currently active adapter is always removed from the
  dictionary.
- Disabling the adapter would also result in the modules being removed from the model.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_adapter</name><anchor>peft.AdaptionPromptModel.add_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adaption_prompt/model.py#L60</source><parameters>[{"name": "adapter_name", "val": ": str"}, {"name": "config", "val": ": AdaptionPromptConfig"}]</parameters></docstring>
Add an adapter with the given name and config.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_adapter_layers</name><anchor>peft.AdaptionPromptModel.disable_adapter_layers</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adaption_prompt/model.py#L113</source><parameters>[]</parameters></docstring>
Disable adapter layers by swapping out AdaptedAttention modules.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_adapter_layers</name><anchor>peft.AdaptionPromptModel.enable_adapter_layers</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adaption_prompt/model.py#L108</source><parameters>[]</parameters></docstring>
Enable adapter layers by swapping in cached AdaptedAttention modules.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapter</name><anchor>peft.AdaptionPromptModel.set_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adaption_prompt/model.py#L95</source><parameters>[{"name": "adapter_name", "val": ": str"}]</parameters></docstring>
Set the model to use the adapter with the given name.

</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/llama_adapter.md" />

### Models
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/peft_model.md

# Models

[PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `PeftModel` contains methods for loading and saving models from the Hub.

## PeftModel[[peft.PeftModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModel</name><anchor>peft.PeftModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L72</source><parameters>[{"name": "model", "val": ": PreTrainedModel"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The base transformer model used for Peft.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- The configuration of the Peft model.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.

  > [!TIP]
  > Don't use `low_cpu_mem_usage=True` when creating a new PEFT adapter for training.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base model encompassing various Peft methods.



**Attributes**:
- **base_model** (`torch.nn.Module`) -- The base transformer model used for Peft.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- The configuration of the Peft model.
- **modules_to_save** (`list` of `str`) -- The list of sub-module names to save when
  saving the model.
- **prompt_encoder** ([PromptEncoder](/docs/peft/v0.18.0.rc0/en/package_reference/p_tuning#peft.PromptEncoder)) -- The prompt encoder used for Peft if
  using [PromptLearningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PromptLearningConfig).
- **prompt_tokens** (`torch.Tensor`) -- The virtual prompt tokens used for Peft if
  using [PromptLearningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PromptLearningConfig).
- **transformer_backbone_name** (`str`) -- The name of the transformer
  backbone in the base model if using [PromptLearningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PromptLearningConfig).
- **word_embeddings** (`torch.nn.Embedding`) -- The word embeddings of the transformer backbone
  in the base model if using [PromptLearningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PromptLearningConfig).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_adapter</name><anchor>peft.PeftModel.add_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L977</source><parameters>[{"name": "adapter_name", "val": ": str"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters><paramsdesc>- **adapter_name** (`str`) --
  The name of the adapter to be added.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) --
  The configuration of the adapter to be added.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the process when loading saved
  adapters. Don't use this option when creating a new PEFT adapter for training.</paramsdesc><paramgroups>0</paramgroups></docstring>

Add an adapter to the model based on the passed configuration.

This adapter is not trained. To load a trained adapter, check out [PeftModel.load_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.load_adapter).

The name for the new adapter should be unique.

The new adapter is not automatically set as the active adapter. Use [PeftModel.set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.set_adapter) to set the active
adapter.
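
A minimal sketch, assuming `peft_model` is an existing `PeftModel` and `second_config` is another `PeftConfig` of the same PEFT type (both names are placeholders):

```py
>>> peft_model.add_adapter("adapter_2", second_config)  # added untrained, not active yet
>>> peft_model.set_adapter("adapter_2")  # make it the active adapter
```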




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_or_update_model_card</name><anchor>peft.PeftModel.create_or_update_model_card</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1512</source><parameters>[{"name": "output_dir", "val": ": str"}]</parameters></docstring>

Updates or creates the model card to include information about PEFT:
1. Adds the `peft` library tag
2. Adds the PEFT version
3. Adds base model info
4. Adds quantization information if it was used


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_adapter</name><anchor>peft.PeftModel.delete_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1045</source><parameters>[{"name": "adapter_name", "val": ": str"}]</parameters><paramsdesc>- **adapter_name** (str) -- Name of the adapter to be deleted.</paramsdesc><paramgroups>0</paramgroups></docstring>

Deletes an existing adapter.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_adapter</name><anchor>peft.PeftModel.disable_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L923</source><parameters>[]</parameters></docstring>

Context manager that disables the adapter module. Use this to run inference on the base model.

<ExampleCodeBlock anchor="peft.PeftModel.disable_adapter.example">

Example:

```py
>>> with model.disable_adapter():
...     model(inputs)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>peft.PeftModel.forward</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L902</source><parameters>[{"name": "*args", "val": ": Any"}, {"name": "**kwargs", "val": ": Any"}]</parameters></docstring>

Forward pass of the model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>peft.PeftModel.from_pretrained</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L375</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "model_id", "val": ": Union[str, os.PathLike]"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "is_trainable", "val": ": bool = False"}, {"name": "config", "val": ": Optional[PeftConfig] = None"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "ephemeral_gpu_offload", "val": ": bool = False"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "key_mapping", "val": ": Optional[dict[str, str]] = None"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to be adapted. For 🤗 Transformers models, the model should be initialized with the
  [from_pretrained](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.from_pretrained) method.
- **model_id** (`str` or `os.PathLike`) --
  The name of the PEFT configuration to use. Can be either:
  - A string, the `model id` of a PEFT configuration hosted inside a model repo on the Hugging Face
    Hub.
  - A path to a directory containing a PEFT configuration file saved using the `save_pretrained`
    method (`./my_peft_config_directory/`).
- **adapter_name** (`str`, *optional*, defaults to `"default"`) --
  The name of the adapter to be loaded. This is useful for loading multiple adapters.
- **is_trainable** (`bool`, *optional*, defaults to `False`) --
  Whether the adapter should be trainable or not. If `False`, the adapter will be frozen and can only be
  used for inference.
- **config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig), *optional*) --
  The configuration object to use instead of an automatically loaded configuration. This configuration
  object is mutually exclusive with `model_id` and `kwargs`. This is useful when configuration is already
  loaded before calling `from_pretrained`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Only relevant for specific adapter types.
- **ephemeral_gpu_offload** (`bool`, *optional*) --
  Whether to use ephemeral GPU offloading for partially loaded modules. Defaults to `False`. This is
  useful when parts of the model and/or components (such as adapters) are kept in CPU memory until they
  are needed. Rather than perform expensive operations on small data, the data is transferred to the GPU
  on-demand, the operation(s) performed, and the results moved back to CPU memory. This brings a slight
  momentary VRAM overhead but gives orders of magnitude speedup in certain cases.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device before loading the saved weights. Useful to speed up the
  process.
- **torch_device** (`str`, *optional*, defaults to None) --
  The device to load the adapter on. If `None`, the device will be inferred.
- **key_mapping** (dict, *optional*, defaults to None) --
  Extra mapping of PEFT `state_dict` keys applied before loading the `state_dict`. When this mapping is
  applied, the PEFT-specific `"base_model.model"` prefix is removed beforehand and the adapter name (e.g.
  `"default"`) is not inserted yet. Only pass this argument if you know what you're doing.
- **kwargs** -- (`optional`):
  Additional keyword arguments passed along to the specific PEFT configuration class.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a PEFT model from a pretrained model and loaded PEFT weights.

Note that the passed `model` may be modified inplace.
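
A minimal sketch, assuming the adapter was previously saved under the illustrative id `"my-org/my-lora-adapter"`:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel

>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> # is_trainable=False (the default) keeps the loaded adapter frozen for inference
>>> peft_model = PeftModel.from_pretrained(base_model, "my-org/my-lora-adapter")
```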




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_base_model</name><anchor>peft.PeftModel.get_base_model</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L971</source><parameters>[]</parameters></docstring>

Returns the base model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_layer_status</name><anchor>peft.PeftModel.get_layer_status</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1077</source><parameters>[]</parameters><paramsdesc>- **model** ([~PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel)) --
  The model to get the adapter layer status from.</paramsdesc><paramgroups>0</paramgroups><rettype>`list[peft.peft_model.TunerLayerStatus]`</rettype><retdesc>A list of dataclasses, each containing the status of the corresponding adapter layer.</retdesc></docstring>
Get the status of each adapter layer in the model.

This method returns a list of `TunerLayerStatus` dataclass instances, each of which contains the following
attributes:

- `name` (`str`):
  The name of the adapter layer, e.g. `model.encoder.block.0.layer.0.SelfAttention.q`.
- `module_type` (`str`):
  The type of the adapter layer, e.g. `lora.Linear`.
- `enabled` (`bool`):
  Whether the adapter layer is enabled.
- `active_adapters` (`list[str]`):
  The names of the active adapters, if any, e.g. `["default"]`.
- `merged_adapters` (`list[str]`):
  The names of the merged adapters, if any, e.g. `["default"]`.
- `available_adapters` (`list[str]`):
  The names of the available adapters, e.g. `["default"]`.
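
A minimal sketch of inspecting the layer status, assuming `peft_model` is a `PeftModel` with at least one adapter:

```py
>>> layer_status = peft_model.get_layer_status()
>>> layer_status[0].name, layer_status[0].module_type, layer_status[0].active_adapters
>>> all(status.enabled for status in layer_status)  # True if every adapter layer is enabled
```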








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_status</name><anchor>peft.PeftModel.get_model_status</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1107</source><parameters>[]</parameters><paramsdesc>- **model** ([~PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel)) --
  The model to get the adapter layer status from.</paramsdesc><paramgroups>0</paramgroups><rettype>`peft.peft_model.TunerModelStatus`</rettype><retdesc>A dataclass containing the status of the model.</retdesc></docstring>
Get the status of tuners of the model.

This method returns a `TunerModelStatus` dataclass instance, which contains the following attributes:

- `base_model_type` (`str`):
  The type of the base model, e.g. `T5Model`.
- `adapter_model_type` (`str`):
  The type of the adapter model, e.g. `LoraModel`.
- `peft_types` (`dict[str, str]`):
  The mapping of adapter name to adapter type, e.g. `{"default": "LORA"}`.
- `trainable_params` (`int`):
  The number of trainable parameters in the model.
- `total_params` (`int`):
  The total number of parameters in the model.
- `num_adapter_layers` (`int`):
  The number of adapter layers in the model.
- `enabled` (`bool`, `Literal["irregular"]`):
  Whether all adapter layers are enabled. If some are enabled and some are not, this will be `"irregular"`.
  This means that your model is in an inconsistent state and might not work as expected.
- `active_adapters` (`list[str]`, `Literal["irregular"]`):
  The names of the active adapters. If the active adapters are not consistent across all layers, this will be
  `"irregular"`, which means that your model is in an inconsistent state and might not work as expected.
- `merged_adapters` (`list[str]`, `Literal["irregular"]`):
  The names of the merged adapters. If the merged adapters are not consistent across all layers, this will be
  `"irregular"`, which means that your model is in an inconsistent state and might not work as expected.
- `available_adapters` (`list[str]`):
  The names of the available adapters, e.g. `["default"]`.
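
A minimal sketch, assuming `peft_model` is a `PeftModel` with at least one adapter:

```py
>>> model_status = peft_model.get_model_status()
>>> model_status.trainable_params, model_status.total_params
>>> if model_status.enabled == "irregular":
...     print("some adapter layers are enabled while others are not")
```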








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_nb_trainable_parameters</name><anchor>peft.PeftModel.get_nb_trainable_parameters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L833</source><parameters>[]</parameters></docstring>

Returns the number of trainable parameters and the number of all parameters in the model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_prompt</name><anchor>peft.PeftModel.get_prompt</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L708</source><parameters>[{"name": "batch_size", "val": ": int"}, {"name": "task_ids", "val": ": Optional[torch.Tensor] = None"}, {"name": "max_cache_len", "val": ": Optional[int] = None"}]</parameters></docstring>

Returns the virtual prompts to use for Peft. Only applicable when using a prompt learning method.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_prompt_embedding_to_save</name><anchor>peft.PeftModel.get_prompt_embedding_to_save</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L687</source><parameters>[{"name": "adapter_name", "val": ": str"}]</parameters></docstring>

Returns the prompt embedding to save when saving the model. Only applicable when using a prompt learning
method.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_adapter</name><anchor>peft.PeftModel.load_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1271</source><parameters>[{"name": "model_id", "val": ": Union[str, os.PathLike]"}, {"name": "adapter_name", "val": ": str"}, {"name": "is_trainable", "val": ": bool = False"}, {"name": "torch_device", "val": ": Optional[str] = None"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "ephemeral_gpu_offload", "val": ": bool = False"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "key_mapping", "val": ": Optional[dict[str, str]] = None"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **model_id** (`str` or `os.PathLike`) --
  The name of the PEFT configuration to use. Can be either:
  - A string, the `model id` of a PEFT configuration hosted inside a model repo on the Hugging Face
    Hub.
  - A path to a directory containing a PEFT configuration file saved using the `save_pretrained`
    method (`./my_peft_config_directory/`).
- **adapter_name** (`str`) --
  The name of the adapter to be added.
- **is_trainable** (`bool`, *optional*, defaults to `False`) --
  Whether the adapter should be trainable or not. If `False`, the adapter will be frozen and can only be
  used for inference.
- **torch_device** (`str`, *optional*, defaults to None) --
  The device to load the adapter on. If `None`, the device will be inferred.
- **autocast_adapter_dtype** (`bool`, *optional*, defaults to `True`) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter
  weights using float16 and bfloat16 to float32, as this is typically required for stable training, and
  only affect select PEFT tuners.
- **ephemeral_gpu_offload** (`bool`, *optional*, defaults to `False`) --
  Whether to use ephemeral GPU offloading for partially loaded modules. Defaults to `False`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device before loading the saved weights. Useful to speed up the
  process.
- **key_mapping** (dict, *optional*, defaults to None) --
  Extra mapping of PEFT `state_dict` keys applied before loading the `state_dict`. When this mapping is
  applied, the PEFT-specific `"base_model.model"` prefix is removed beforehand and the adapter name (e.g.
  `"default"`) is not inserted yet. Only pass this argument if you know what you're doing.
- **kwargs** -- (`optional`):
  Additional arguments to modify the way the adapter is loaded, e.g. the token for Hugging Face Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load a trained adapter into the model.

The name for the new adapter should be unique.

The new adapter is not automatically set as the active adapter. Use [PeftModel.set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.set_adapter) to set the active
adapter.
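
A minimal sketch, assuming `peft_model` already exists and `"my-org/my-second-adapter"` is an illustrative adapter id:

```py
>>> peft_model.load_adapter("my-org/my-second-adapter", adapter_name="second")
>>> peft_model.set_adapter("second")  # loading alone does not activate the adapter
```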




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_model_for_gradient_checkpointing</name><anchor>peft.PeftModel.prepare_model_for_gradient_checkpointing</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L665</source><parameters>[{"name": "model", "val": ": PreTrainedModel"}]</parameters></docstring>

Prepares the model for gradient checkpointing if necessary


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>print_trainable_parameters</name><anchor>peft.PeftModel.print_trainable_parameters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L863</source><parameters>[]</parameters></docstring>

Prints the number of trainable parameters in the model.

Note: print_trainable_parameters() uses get_nb_trainable_parameters(), which is different from
num_parameters(only_trainable=True) from huggingface/transformers. get_nb_trainable_parameters() returns
(trainable parameters, all parameters) of the PEFT model, which includes the modified backbone transformer
model. For techniques like LoRA, the backbone transformer model is modified in place with LoRA modules.
However, for prompt tuning, the backbone transformer model is unmodified. num_parameters(only_trainable=True)
returns the number of trainable parameters of the backbone transformer model, which can differ.
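
A small sketch of the difference described above, assuming `peft_model` wraps a 🤗 Transformers model:

```py
>>> peft_model.print_trainable_parameters()  # counts over the whole PEFT model
>>> trainable, total = peft_model.get_nb_trainable_parameters()
>>> peft_model.num_parameters(only_trainable=True)  # backbone count, may differ for prompt learning methods
```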


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>peft.PeftModel.save_pretrained</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L177</source><parameters>[{"name": "save_directory", "val": ": str"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "selected_adapters", "val": ": Optional[list[str]] = None"}, {"name": "save_embedding_layers", "val": ": Union[str, bool] = 'auto'"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "path_initial_model_for_weight_conversion", "val": ": Optional[str] = None"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **save_directory** (`str`) --
  Directory where the adapter model and configuration files will be saved (will be created if it does not
  exist).
- **safe_serialization** (`bool`, *optional*) --
  Whether to save the adapter files in safetensors format, defaults to `True`.
- **selected_adapters** (`List[str]`,  *optional*) --
  A list of adapters to be saved. If `None`, will default to all adapters.
- **save_embedding_layers** (`Union[bool, str]`, *optional*, defaults to `"auto"`) --
  If `True`, save the embedding layers in addition to adapter weights. If `"auto"`, checks for the common
  embedding layer names `peft.utils.other.EMBEDDING_LAYER_NAMES` in the config's `target_modules`, when
  available, and sets the flag automatically. This only works for 🤗 transformers models.
- **is_main_process** (`bool`, *optional*) --
  Whether the process calling this is the main process or not. Will default to `True`. Will not save the
  checkpoint if not on the main process, which is important for multi device setups (e.g. DDP).
- **path_initial_model_for_weight_conversion** (`str`, *optional*) --
  The path to the initialized adapter, which is obtained after initializing the model with
  PiSSA/CorDA/OLoRA and before performing any training. When `path_initial_model_for_weight_conversion`
  is not None, the difference in adapter before and after fine-tuning is calculated. This difference can
  be represented as the parameters of a standard LoRA adapter. Using this converted adapter does not
  require changes to the base model, thus conveniently allowing the use of multiple PiSSA/CorDA/OLoRA
  adapters with LoRA adapters, and the activation or deactivation of any adapters. Note that this
  conversion is not supported if `rslora` is used in combination with `rank_pattern` or `alpha_pattern`.
- **kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments passed along to the `push_to_hub` method.</paramsdesc><paramgroups>0</paramgroups></docstring>

This function saves the adapter model and the adapter configuration files to a directory, so that it can be
reloaded using the [PeftModel.from_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.from_pretrained) class method, and also used by the `PeftModel.push_to_hub()`
method.
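
A minimal sketch, assuming `peft_model` holds a trained adapter, `base_model` is the base model loaded anew, and `"./my_adapter"` is an illustrative output path:

```py
>>> peft_model.save_pretrained("./my_adapter")  # writes the adapter weights and adapter_config.json
>>> reloaded = PeftModel.from_pretrained(base_model, "./my_adapter")
```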




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapter</name><anchor>peft.PeftModel.set_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1433</source><parameters>[{"name": "adapter_name", "val": ": str"}]</parameters><paramsdesc>- **adapter_name** (`str`) --
  The name of the adapter to be set as active. The adapter must be loaded first.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the active adapter.

Only one adapter can be active at a time.

Additionally, this function will set the specified adapter to trainable (i.e., requires_grad=True). If this is
not desired, use the following code.

<ExampleCodeBlock anchor="peft.PeftModel.set_adapter.example">

```py
>>> for name, param in model_peft.named_parameters():
...     if "lora" in name:  # some check on the name that identifies the adapter parameters
...         param.requires_grad = False
```

</ExampleCodeBlock>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_requires_grad</name><anchor>peft.PeftModel.set_requires_grad</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1462</source><parameters>[{"name": "adapter_names", "val": ": str | Sequence[str]"}, {"name": "requires_grad", "val": ": bool = True"}]</parameters><paramsdesc>- **adapter_names** (`str` or `Sequence[str]`) --
  The name of the adapter(s) whose gradients should be enabled or disabled.
- **requires_grad** (`bool`, *optional*) --
  Whether to enable (`True`, the default) or disable (`False`) gradients.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable or disable gradients on the given adapter(s).

Note: Not supported for prompt learning methods like prompt tuning.
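
A minimal sketch, assuming `peft_model` holds two LoRA adapters named `"default"` and `"other"`:

```py
>>> peft_model.set_requires_grad("default", requires_grad=False)  # freeze one adapter
>>> peft_model.set_requires_grad(["default", "other"], requires_grad=True)  # re-enable both
```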




</div></div>

## PeftModelForSequenceClassification[[peft.PeftModelForSequenceClassification]]

A `PeftModel` for sequence classification tasks.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModelForSequenceClassification</name><anchor>peft.PeftModelForSequenceClassification</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1578</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- Base transformer model.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- Peft config.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.</paramsdesc><paramgroups>0</paramgroups></docstring>

Peft model for sequence classification tasks.



**Attributes**:
- **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/configuration#transformers.PretrainedConfig)) -- The configuration object of the base model.
- **cls_layer_name** (`str`) -- The name of the classification layer.

<ExampleCodeBlock anchor="peft.PeftModelForSequenceClassification.example">

Example:

```py
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config

>>> config = {
...     "peft_type": "PREFIX_TUNING",
...     "task_type": "SEQ_CLS",
...     "inference_mode": False,
...     "num_virtual_tokens": 20,
...     "token_dim": 768,
...     "num_transformer_submodules": 1,
...     "num_attention_heads": 12,
...     "num_layers": 12,
...     "encoder_hidden_size": 768,
...     "prefix_projection": False,
...     "postprocess_past_key_value_function": None,
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
```

</ExampleCodeBlock>


</div>

## PeftModelForTokenClassification[[peft.PeftModelForTokenClassification]]

A `PeftModel` for token classification tasks.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModelForTokenClassification</name><anchor>peft.PeftModelForTokenClassification</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L2439</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "peft_config", "val": ": PeftConfig = None"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- Base transformer model.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- Peft config.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.</paramsdesc><paramgroups>0</paramgroups></docstring>

Peft model for token classification tasks.



**Attributes**:
- **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/configuration#transformers.PretrainedConfig)) -- The configuration object of the base model.
- **cls_layer_name** (`str`) -- The name of the classification layer.

<ExampleCodeBlock anchor="peft.PeftModelForTokenClassification.example">

Example:

```py
>>> from transformers import AutoModelForTokenClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config

>>> config = {
...     "peft_type": "PREFIX_TUNING",
...     "task_type": "TOKEN_CLS",
...     "inference_mode": False,
...     "num_virtual_tokens": 20,
...     "token_dim": 768,
...     "num_transformer_submodules": 1,
...     "num_attention_heads": 12,
...     "num_layers": 12,
...     "encoder_hidden_size": 768,
...     "prefix_projection": False,
...     "postprocess_past_key_value_function": None,
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
```

</ExampleCodeBlock>


</div>

## PeftModelForCausalLM[[peft.PeftModelForCausalLM]]

A `PeftModel` for causal language modeling.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModelForCausalLM</name><anchor>peft.PeftModelForCausalLM</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L1822</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- Base transformer model.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- Peft config.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.</paramsdesc><paramgroups>0</paramgroups></docstring>

Peft model for causal language modeling.



<ExampleCodeBlock anchor="peft.PeftModelForCausalLM.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config

>>> config = {
...     "peft_type": "PREFIX_TUNING",
...     "task_type": "CAUSAL_LM",
...     "inference_mode": False,
...     "num_virtual_tokens": 20,
...     "token_dim": 1280,
...     "num_transformer_submodules": 1,
...     "num_attention_heads": 20,
...     "num_layers": 36,
...     "encoder_hidden_size": 1280,
...     "prefix_projection": False,
...     "postprocess_past_key_value_function": None,
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544
```

</ExampleCodeBlock>


</div>

## PeftModelForSeq2SeqLM[[peft.PeftModelForSeq2SeqLM]]

A `PeftModel` for sequence-to-sequence language modeling.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModelForSeq2SeqLM</name><anchor>peft.PeftModelForSeq2SeqLM</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L2169</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- Base transformer model.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- Peft config.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.</paramsdesc><paramgroups>0</paramgroups></docstring>

Peft model for sequence-to-sequence language modeling.



<ExampleCodeBlock anchor="peft.PeftModelForSeq2SeqLM.example">

Example:

```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config

>>> config = {
...     "peft_type": "LORA",
...     "task_type": "SEQ_2_SEQ_LM",
...     "inference_mode": False,
...     "r": 8,
...     "target_modules": ["q", "v"],
...     "lora_alpha": 32,
...     "lora_dropout": 0.1,
...     "fan_in_fan_out": False,
...     "enable_lora": None,
...     "bias": "none",
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566
```

</ExampleCodeBlock>


</div>

## PeftModelForQuestionAnswering[[peft.PeftModelForQuestionAnswering]]

A `PeftModel` for question answering.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModelForQuestionAnswering</name><anchor>peft.PeftModelForQuestionAnswering</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L2662</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- Base transformer model.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- Peft config.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.</paramsdesc><paramgroups>0</paramgroups></docstring>

Peft model for extractive question answering.



**Attributes**:
- **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/configuration#transformers.PretrainedConfig)) -- The configuration object of the base model.
- **cls_layer_name** (`str`) -- The name of the classification layer.

<ExampleCodeBlock anchor="peft.PeftModelForQuestionAnswering.example">

Example:

```py
>>> from transformers import AutoModelForQuestionAnswering
>>> from peft import PeftModelForQuestionAnswering, get_peft_config

>>> config = {
...     "peft_type": "LORA",
...     "task_type": "QUESTION_ANS",
...     "inference_mode": False,
...     "r": 16,
...     "target_modules": ["query", "value"],
...     "lora_alpha": 32,
...     "lora_dropout": 0.05,
...     "fan_in_fan_out": False,
...     "bias": "none",
... }

>>> peft_config = get_peft_config(config)
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForQuestionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580 || trainable%: 0.5473971721475013
```

</ExampleCodeBlock>


</div>

## PeftModelForFeatureExtraction[[peft.PeftModelForFeatureExtraction]]

A `PeftModel` for extracting features/embeddings from transformer models.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftModelForFeatureExtraction</name><anchor>peft.PeftModelForFeatureExtraction</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L2906</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- Base transformer model.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) -- Peft config.
- **adapter_name** (`str`,  *optional*) -- The name of the adapter, defaults to `"default"`.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 and bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.</paramsdesc><paramgroups>0</paramgroups></docstring>

Peft model for extracting features/embeddings from transformer models.



**Attributes**:
- **config** ([PretrainedConfig](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/configuration#transformers.PretrainedConfig)) -- The configuration object of the base model.

<ExampleCodeBlock anchor="peft.PeftModelForFeatureExtraction.example">

Example:

```py
>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtraction, get_peft_config

>>> config = {
...     "peft_type": "LORA",
...     "task_type": "FEATURE_EXTRACTION",
...     "inference_mode": False,
...     "r": 16,
...     "target_modules": ["query", "value"],
...     "lora_alpha": 32,
...     "lora_dropout": 0.05,
...     "fan_in_fan_out": False,
...     "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForFeatureExtraction(model, peft_config)
>>> peft_model.print_trainable_parameters()
```

</ExampleCodeBlock>


</div>

## PeftMixedModel[[peft.PeftMixedModel]]

A `PeftModel` for mixing different adapter types (e.g. LoRA and LoHa).

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftMixedModel</name><anchor>peft.PeftMixedModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L67</source><parameters>[{"name": "model", "val": ": nn.Module"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to be tuned.
- **config** (`PeftConfig`) --
  The config of the model to be tuned. The adapter type must be compatible.
- **adapter_name** (`str`, `optional`, defaults to `"default"`) --
  The name of the first adapter.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups></docstring>

PeftMixedModel for loading and mixing different types of adapters for inference.

This class does not support loading/saving, and it shouldn't usually be initialized directly. Instead, use
`get_peft_model` with the argument `mixed=True`.

> [!TIP]
> Read the [Mixed adapter types](https://huggingface.co/docs/peft/en/developer_guides/mixed_models) guide to learn
> more about using different adapter types.

<ExampleCodeBlock anchor="peft.PeftMixedModel.example">

Example:

```py
>>> base_model = ...  # load the base model, e.g. from transformers
>>> peft_model = PeftMixedModel.from_pretrained(base_model, path_to_adapter1, "adapter1").eval()
>>> peft_model.load_adapter(path_to_adapter2, "adapter2")
>>> peft_model.set_adapter(["adapter1", "adapter2"])  # activate both adapters
>>> peft_model(data)  # forward pass using both adapters
```

</ExampleCodeBlock>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_adapter</name><anchor>peft.PeftMixedModel.add_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L203</source><parameters>[{"name": "adapter_name", "val": ": str"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters><paramsdesc>- **adapter_name** (`str`) --
  The name of the adapter to be added.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) --
  The configuration of the adapter to be added.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the process when loading saved
  adapters.

  > [!TIP]
  > Don't use `low_cpu_mem_usage=True` when creating a new PEFT adapter for training (training is untested
  > and discouraged for PeftMixedModel in general).</paramsdesc><paramgroups>0</paramgroups></docstring>

Add an adapter to the model based on the passed configuration.

This adapter is not trained. To load a trained adapter, check out [PeftModel.load_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.load_adapter).

The name for the new adapter should be unique.

The new adapter is not automatically set as the active adapter. Use [PeftModel.set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.set_adapter) to set the active
adapter.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_adapter</name><anchor>peft.PeftMixedModel.disable_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L192</source><parameters>[]</parameters></docstring>

Disables the adapter module.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>peft.PeftMixedModel.forward</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L180</source><parameters>[{"name": "*args", "val": ": Any"}, {"name": "**kwargs", "val": ": Any"}]</parameters></docstring>

Forward pass of the model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>peft.PeftMixedModel.from_pretrained</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L381</source><parameters>[{"name": "model", "val": ": nn.Module"}, {"name": "model_id", "val": ": str | os.PathLike"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "is_trainable", "val": ": bool = False"}, {"name": "config", "val": ": Optional[PeftConfig] = None"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **model** (`nn.Module`) --
  The model to be adapted.
- **model_id** (`str` or `os.PathLike`) --
  The name of the PEFT configuration to use. Can be either:
  - A string, the `model id` of a PEFT configuration hosted inside a model repo on the Hugging Face
    Hub.
  - A path to a directory containing a PEFT configuration file saved using the `save_pretrained`
    method (`./my_peft_config_directory/`).
- **adapter_name** (`str`, *optional*, defaults to `"default"`) --
  The name of the adapter to be loaded. This is useful for loading multiple adapters.
- **is_trainable** (`bool`, *optional*, defaults to `False`) --
  Whether the adapter should be trainable or not. If `False`, the adapter will be frozen and used for
  inference.
- **config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig), *optional*) --
  The configuration object to use instead of an automatically loaded configuration. This configuration
  object is mutually exclusive with `model_id` and `kwargs`. This is useful when configuration is already
  loaded before calling `from_pretrained`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device before loading the saved weights. Useful to speed up the
  process.
- **kwargs** -- (`optional`):
  Additional keyword arguments passed along to the specific PEFT configuration class.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a PEFT mixed model from a pretrained model and loaded PEFT weights.

Note that the passed `model` may be modified inplace.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate</name><anchor>peft.PeftMixedModel.generate</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L186</source><parameters>[{"name": "*args", "val": ": Any"}, {"name": "**kwargs", "val": ": Any"}]</parameters></docstring>

Generate output.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_nb_trainable_parameters</name><anchor>peft.PeftMixedModel.get_nb_trainable_parameters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L126</source><parameters>[]</parameters></docstring>

Returns the number of trainable parameters and number of all parameters in the model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_adapter</name><anchor>peft.PeftMixedModel.load_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L332</source><parameters>[{"name": "model_id", "val": ": str"}, {"name": "adapter_name", "val": ": str"}, {"name": "*args", "val": ": Any"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **adapter_name** (`str`) --
  The name of the adapter to be added.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) --
  The configuration of the adapter to be added.
- **is_trainable** (`bool`, *optional*, defaults to `False`) --
  Whether the adapter should be trainable or not. If `False`, the adapter will be frozen and can only be
  used for inference.
- **torch_device** (`str`, *optional*, defaults to None) --
  The device to load the adapter on. If `None`, the device will be inferred.
- **autocast_adapter_dtype** (`bool`, *optional*, defaults to `True`) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter
  weights using float16 and bfloat16 to float32, as this is typically required for stable training, and
  only affect select PEFT tuners.
- **ephemeral_gpu_offload** (`bool`, *optional*, defaults to `False`) --
  Whether to use ephemeral GPU offloading for partially loaded modules. Defaults to `False`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device before loading the saved weights. Useful to speed up the
  process.
- **kwargs** -- (`optional`):
  Additional arguments to modify the way the adapter is loaded, e.g. the token for Hugging Face Hub.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load a trained adapter into the model.

The name for the new adapter should be unique.

The new adapter is not automatically set as the active adapter. Use [PeftModel.set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.set_adapter) to set the active
adapter.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>merge_and_unload</name><anchor>peft.PeftMixedModel.merge_and_unload</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L295</source><parameters>[{"name": "*args", "val": ": Any"}, {"name": "**kwargs", "val": ": Any"}]</parameters><paramsdesc>- **progressbar** (`bool`) --
  Whether to show a progress bar indicating the unload and merge process.
- **safe_merge** (`bool`) --
  Whether to activate the safe merging check, which verifies that there are no potential NaN values in the
  adapter weights.
- **adapter_names** (`List[str]`, *optional*) --
  The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults
  to `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method merges the adapter layers into the base model. This is needed if someone wants to use the base
model as a standalone model.
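
As a brief sketch, assuming `peft_model` is a `PeftMixedModel` whose adapters have already been loaded and activated:

```py
# merge the active adapters into the base weights and get back the plain base model
base_model = peft_model.merge_and_unload(progressbar=True, safe_merge=True)
```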




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>print_trainable_parameters</name><anchor>peft.PeftMixedModel.print_trainable_parameters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L151</source><parameters>[]</parameters></docstring>

Prints the number of trainable parameters in the model.

Note: `print_trainable_parameters()` uses `get_nb_trainable_parameters()`, which is different from
`num_parameters(only_trainable=True)` from huggingface/transformers. `get_nb_trainable_parameters()` returns
(trainable parameters, all parameters) of the PEFT model, which includes the modified backbone transformer model.
For techniques like LoRA, the backbone transformer model is modified in place with LoRA modules. However, for
prompt tuning, the backbone transformer model is unmodified. `num_parameters(only_trainable=True)` returns the
number of trainable parameters of the backbone transformer model, which can therefore differ.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapter</name><anchor>peft.PeftMixedModel.set_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L253</source><parameters>[{"name": "adapter_name", "val": ": Union[str, list[str]]"}, {"name": "inference_mode", "val": ": bool = False"}]</parameters><paramsdesc>- **adapter_name** (str, list[str]) --
  The name(s) of the adapter(s) to set as active
- **inference_mode** (bool, optional) --
  Whether the activated adapter should be frozen (i.e. `requires_grad=False`). Default is False.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the active adapter(s) for the model.

Note that the order in which the adapters are applied during the forward pass may not be the same as the order
in which they are passed to this function. Instead, the order during the forward pass is determined by the
order in which the adapters were loaded into the model. The active adapters only determine which adapters are
active during the forward pass, but not the order in which they are applied.

Additionally, this function will set the specified adapter to trainable (i.e., requires_grad=True) unless
inference_mode is True.
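
A short sketch, assuming `peft_model` is a `PeftMixedModel` with adapters named `"adapter1"` and `"adapter2"` already loaded:

```py
# activate two adapters; they are applied in load order, not in the order given here
peft_model.set_adapter(["adapter1", "adapter2"])

# activate the adapters but keep them frozen (requires_grad=False), e.g. for pure inference
peft_model.set_adapter(["adapter1", "adapter2"], inference_mode=True)
```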




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload</name><anchor>peft.PeftMixedModel.unload</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mixed_model.py#L312</source><parameters>[{"name": "*args", "val": ": Any"}, {"name": "**kwargs", "val": ": Any"}]</parameters></docstring>

Gets back the base model by removing all the adapter modules without merging. This gives back the original base
model.


</div></div>

## Utilities[[peft.cast_mixed_precision_params]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.cast_mixed_precision_params</name><anchor>peft.cast_mixed_precision_params</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/other.py#L1335</source><parameters>[{"name": "model", "val": ""}, {"name": "dtype", "val": ""}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) --
  The model to cast the non-trainable parameters of.
- **dtype** (`torch.dtype`) --
  The dtype to cast the non-trainable parameters to. The `dtype` can be `torch.float16` or `torch.bfloat16` as per the mixed-precision training you are performing.</paramsdesc><paramgroups>0</paramgroups></docstring>

Cast all non-trainable parameters of the model to the given `dtype`. The `dtype` can be `torch.float16` or
`torch.bfloat16` as per the mixed-precision training you are performing. The trainable parameters are cast to full
precision. This is meant to reduce the GPU memory usage when using PEFT methods by using half-precision dtype for
non-trainable parameters. Having the trainable parameters in full-precision preserves training stability when using
automatic mixed-precision training.
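
A minimal usage sketch, assuming a LoRA model built with `get_peft_model` (`"gpt2"` is only a placeholder base model):

```py
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, cast_mixed_precision_params

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
peft_model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"))

# non-trainable (base) weights are cast to bfloat16, trainable adapter weights stay in float32
cast_mixed_precision_params(peft_model, dtype=torch.bfloat16)
```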





</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.get_peft_model</name><anchor>peft.get_peft_model</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mapping_func.py#L31</source><parameters>[{"name": "model", "val": ": PreTrainedModel"}, {"name": "peft_config", "val": ": PeftConfig"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "mixed", "val": ": bool = False"}, {"name": "autocast_adapter_dtype", "val": ": bool = True"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}]</parameters><paramsdesc>- **model** ([transformers.PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) --
  Model to be wrapped.
- **peft_config** ([PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig)) --
  Configuration object containing the parameters of the Peft model.
- **adapter_name** (`str`, `optional`, defaults to `"default"`) --
  The name of the adapter to be injected; if not provided, the default adapter name ("default") is used.
- **mixed** (`bool`, `optional`, defaults to `False`) --
  Whether to allow mixing different (compatible) adapter types.
- **autocast_adapter_dtype** (`bool`, *optional*) --
  Whether to autocast the adapter dtype. Defaults to `True`. Right now, this will only cast adapter weights
  using float16 or bfloat16 to float32, as this is typically required for stable training, and only affect
  select PEFT tuners.
- **revision** (`str`, `optional`, defaults to `main`) --
  The revision of the base model. If this isn't set, the saved peft model will load the `main` revision for
  the base model.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process. Leave this setting as
  False if you intend on training the model, unless the adapter weights will be replaced by different weights
  before training starts.</paramsdesc><paramgroups>0</paramgroups></docstring>

Returns a Peft model object from a model and a config, where the model will be modified in-place.
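
For reference, a minimal usage sketch (`"gpt2"` is only a placeholder model id):

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)

peft_model = get_peft_model(base_model, peft_config)  # base_model is modified in-place
peft_model.print_trainable_parameters()
```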




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.inject_adapter_in_model</name><anchor>peft.inject_adapter_in_model</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/mapping.py#L47</source><parameters>[{"name": "peft_config", "val": ": PeftConfig"}, {"name": "model", "val": ": torch.nn.Module"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **peft_config** (`PeftConfig`) --
  Configuration object containing the parameters of the PEFT model.
- **model** (`torch.nn.Module`) --
  The input model where the adapter will be injected.
- **adapter_name** (`str`, `optional`, defaults to `"default"`) --
  The name of the adapter to be injected; if not provided, the default adapter name ("default") is used.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.
- **state_dict** (`dict`, *optional*, defaults to `None`) --
  If a `state_dict` is passed here, the adapters will be injected based on the entries of the state_dict.
  This can be useful when the exact `target_modules` of the PEFT method is unknown, for instance because the
  checkpoint was created without meta data. Note that the values from the `state_dict` are not used, only the
  keys are used to determine the correct layers that should be adapted.</paramsdesc><paramgroups>0</paramgroups></docstring>

Create PEFT layers and inject them into the model in-place.

Currently, the API does not support prompt learning methods or adaption prompt.

This function is similar to [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) but it does not return a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) instance. Instead, it returns
the original, mutated instance of the passed model.
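
A small sketch of injecting LoRA layers into a plain `torch.nn.Module`; the module and target names below are made up for illustration:

```py
import torch
from peft import LoraConfig, inject_adapter_in_model

class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.Embedding(10, 10)
        self.linear = torch.nn.Linear(10, 10)
        self.lm_head = torch.nn.Linear(10, 10)

    def forward(self, input_ids):
        return self.lm_head(self.linear(self.embedding(input_ids)))

lora_config = LoraConfig(r=4, lora_alpha=8, target_modules=["linear"])
model = inject_adapter_in_model(lora_config, DummyModel())  # mutates and returns the same module

# the targeted submodule is now a LoRA layer
output = model(torch.randint(0, 10, (1, 5)))
```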




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.get_peft_model_state_dict</name><anchor>peft.get_peft_model_state_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/save_and_load.py#L57</source><parameters>[{"name": "model", "val": ""}, {"name": "state_dict", "val": " = None"}, {"name": "adapter_name", "val": " = 'default'"}, {"name": "unwrap_compiled", "val": " = False"}, {"name": "save_embedding_layers", "val": " = 'auto'"}]</parameters><paramsdesc>- **model** ([PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel)) -- The Peft model. When using torch.nn.DistributedDataParallel, DeepSpeed or FSDP,
  the model should be the underlying model/unwrapped model (i.e. model.module).
- **state_dict** (`dict`, *optional*, defaults to `None`) --
  The state dict of the model. If not provided, the state dict of the passed model will be used.
- **adapter_name** (`str`, *optional*, defaults to `"default"`) --
  The name of the adapter whose state dict should be returned.
- **unwrap_compiled** (`bool`, *optional*, defaults to `False`) --
  Whether to unwrap the model if torch.compile was used.
- **save_embedding_layers** (`Union[bool, str]`, *optional*, defaults to `"auto"`) --
  If `True`, save the embedding layers in addition to adapter weights. If `"auto"`, checks whether the common
  embedding layers from `peft.utils.other.EMBEDDING_LAYER_NAMES` appear in the config's `target_modules` and
  sets the boolean flag accordingly. This only works for 🤗 transformers models.

Get the state dict of the given adapter of the PEFT model.

This only includes the PEFT parameters, not the parameters of the base model. Thus the returned `state_dict` is
generally small compared to the full model size. To retrieve the full `state_dict`, just call `model.state_dict()`.

Note that the adapter name is removed from the `state_dict`, as this is just an arbitrary name that can be changed
when loading the adapter. So e.g. if the adapter name is `'default'` and the original key is
`'model.q_proj.lora_A.default.weight'`, the returned key will be `'model.q_proj.lora_A.weight'`. Use this function
in conjunction with [set_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/functional#peft.set_peft_model_state_dict) to take care of the adapter name when loading weights.
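
For example, a sketch with `"gpt2"` as a placeholder base model:

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, get_peft_model_state_dict

peft_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"), LoraConfig(task_type="CAUSAL_LM")
)
adapter_state_dict = get_peft_model_state_dict(peft_model)

# only adapter weights are included, and the adapter name is stripped from the keys
print(sorted(adapter_state_dict)[:2])
```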




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.prepare_model_for_kbit_training</name><anchor>peft.prepare_model_for_kbit_training</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/other.py#L130</source><parameters>[{"name": "model", "val": ""}, {"name": "use_gradient_checkpointing", "val": " = True"}, {"name": "gradient_checkpointing_kwargs", "val": " = None"}]</parameters><paramsdesc>- **model** (`transformers.PreTrainedModel`) --
  The loaded model from `transformers`
- **use_gradient_checkpointing** (`bool`, *optional*, defaults to `True`) --
  If True, use gradient checkpointing to save memory at the expense of slower backward pass.
- **gradient_checkpointing_kwargs** (`dict`, *optional*, defaults to `None`) --
  Keyword arguments to pass to the gradient checkpointing function, please refer to the documentation of
  `torch.utils.checkpoint.checkpoint` for more details about the arguments that you can pass to that method.
  Note this is only available in the latest transformers versions (> 4.34.1).</paramsdesc><paramgroups>0</paramgroups></docstring>

Note this method only works for `transformers` models.

This method wraps the entire protocol for preparing a model before running training. This includes:
1. casting the layernorm layers to fp32
2. making the output embedding layer require grads
3. upcasting the lm head to fp32
4. freezing the base model layers to ensure they are not updated during training
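
A typical usage sketch with a bitsandbytes-quantized model (`"facebook/opt-125m"` is only a placeholder; any quantized `transformers` model works):

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", quantization_config=bnb_config)

# freeze the base model, upcast norms / lm head and enable gradient checkpointing
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)
peft_model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM"))
```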





</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.get_layer_status</name><anchor>peft.get_layer_status</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L3023</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}]</parameters><paramsdesc>- **model** ([Union[`~PeftModel`, `~transformers.PreTrainedModel`, `nn.Module`]]) --
  The model to get the adapter layer status from.</paramsdesc><paramgroups>0</paramgroups><rettype>list`peft.peft_model.TunerLayerStatus`</rettype><retdesc>A list of dataclasses, each containing the status of the corresponding adapter layer.</retdesc></docstring>
Get the status of each adapter layer in the model.

This function returns a list of `TunerLayerStatus` dataclass instances, each of which contains the following
attributes:

- `name` (`str`):
  The name of the adapter layer, e.g. `model.encoder.block.0.layer.0.SelfAttention.q`.
- `module_type` (`str`):
  The type of the adapter layer, e.g. `lora.Linear`.
- `enabled` (`bool`):
  Whether the adapter layer is enabled.
- `active_adapters` (`list[str]`):
  The names of the active adapters, if any, e.g. `["default"]`.
- `merged_adapters` (`list[str]`):
  The names of the merged adapters, if any, e.g. `["default"]`.
- `requires_grad` (`dict[str, bool | Literal["irregular"]]`):
  The requires_grad status of the parameters for each adapter module. Ideally, it should be either `True` or
  `False`. If the requires_grad status is not consistent across all parameters, the value will be set to
  `"irregular"`.
- `available_adapters` (`list[str]`):
  The names of the available adapters, e.g. `["default"]`.
- `devices` (`dict[str, list[str]]`):
  The devices where the parameters of the given adapter are stored, e.g. `["cuda"]`.
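
For example, a sketch assuming `peft_model` is an existing `PeftModel` with LoRA layers:

```py
from peft import get_layer_status

layer_status = get_layer_status(peft_model)
for status in layer_status[:3]:
    # each entry is a TunerLayerStatus dataclass with the attributes listed above
    print(status.name, status.module_type, status.enabled, status.active_adapters)
```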








</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.get_model_status</name><anchor>peft.get_model_status</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/peft_model.py#L3150</source><parameters>[{"name": "model", "val": ": torch.nn.Module"}]</parameters><paramsdesc>- **model** ([Union[`~PeftModel`, `~transformers.PreTrainedModel`, `nn.Module`]]) --
  The model to get the adapter layer status from.</paramsdesc><paramgroups>0</paramgroups><rettype>`peft.peft_model.TunerModelStatus`</rettype><retdesc>A dataclass containing the status of the model.</retdesc></docstring>
Get the status of tuners of the model.

This function returns a `TunerModelStatus` dataclass instance, which contains the following attributes:

- `base_model_type` (`str`):
  The type of the base model, e.g. `T5Model`.
- `adapter_model_type` (`str`):
  The type of the adapter model, e.g. `LoraModel`.
- `peft_types` (`dict[str, str]`):
  The mapping of adapter name to adapter type, e.g. `{"default": "LORA"}`.
- `trainable_params` (`int`):
  The number of trainable parameters in the model.
- `total_params` (`int`):
  The total number of parameters in the model.
- `num_adapter_layers` (`int`):
  The number of adapter layers in the model.
- `enabled` (`bool`, `Literal["irregular"]`):
  Whether all adapter layers are enabled. If some are enabled and some are not, this will be `"irregular"`. This
  means that your model is in an inconsistent state and might not work as expected.
- `active_adapters` (`list[str]`, `Literal["irregular"]`):
  The names of the active adapters. If the active adapters are not consistent across all layers, this will be
  `"irregular"`, which means that your model is in an inconsistent state and might not work as expected.
- `merged_adapters` (`list[str]`, `Literal["irregular"]`):
  The names of the merged adapters. If the merged adapters are not consistent across all layers, this will be
  `"irregular"`, which means that your model is in an inconsistent state and might not work as expected.
- `requires_grad` (`dict[str, bool | Literal["irregular"]]`):
  Whether for the given adapter, all adapter layers have `requires_grad` set to `True` or `False`. If there is a
  mix, this will be set to `"irregular"`, which means that your model is in an inconsistent state and might not
  work as expected.
- `available_adapters` (`list[str]`):
  The names of the available adapters, e.g. `["default"]`.
- `devices` (`dict[str, list[str]]`):
  The devices where the parameters of the given adapter are stored, e.g. `["cuda"]`.
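
For example, a sketch assuming `peft_model` is an existing `PeftModel`:

```py
from peft import get_model_status

model_status = get_model_status(peft_model)
print(model_status.base_model_type, model_status.adapter_model_type)
print(model_status.trainable_params, model_status.total_params)
```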








</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/peft_model.md" />

### LoRA
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/lora.md

# LoRA

Low-Rank Adaptation ([LoRA](https://huggingface.co/papers/2309.15223)) is a PEFT method that decomposes a large matrix into two smaller low-rank matrices in the attention layers. This drastically reduces the number of parameters that need to be fine-tuned.

The abstract from the paper is:

*We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limit their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with decreased training times by factors between 5.4 and 3.6.*

## LoraConfig[[peft.LoraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LoraConfig</name><anchor>peft.LoraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/config.py#L250</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 8"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "lora_alpha", "val": ": int = 8"}, {"name": "lora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "bias", "val": ": Literal['none', 'all', 'lora_only'] = 'none'"}, {"name": "use_rslora", "val": ": bool = False"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_lora_weights", "val": ": bool | Literal['gaussian', 'eva', 'olora', 'pissa', 'pissa_niter_[number of iters]', 'corda', 'loftq', 'orthogonal'] = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "rank_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "alpha_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "megatron_config", "val": ": Optional[dict] = None"}, {"name": "megatron_core", "val": ": Optional[str] = 'megatron.core'"}, {"name": "trainable_token_indices", "val": ": Optional[Union[list[int], dict[str, list[int]]]] = None"}, {"name": "loftq_config", "val": ": Union[LoftQConfig, dict] = <factory>"}, {"name": "eva_config", "val": ": Optional[EvaConfig] = None"}, {"name": "corda_config", "val": ": Optional[CordaConfig] = None"}, {"name": "use_dora", "val": ": bool = False"}, {"name": "alora_invocation_tokens", "val": ": Optional[list[int]] = None"}, {"name": "use_qalora", "val": ": bool = False"}, {"name": "qalora_group_size", "val": ": int = 16"}, {"name": "layer_replication", "val": ": Optional[list[tuple[int, int]]] = None"}, {"name": "runtime_config", "val": ": LoraRuntimeConfig = <factory>"}, {"name": "lora_bias", "val": ": bool = False"}, {"name": "target_parameters", "val": ": Optional[list[str]] = None"}, {"name": "arrow_config", "val": ": Optional[ArrowConfig] = None"}, {"name": "ensure_weight_tying", "val": ": bool = False"}]</parameters><paramsdesc>- **r** (`int`) --
  Lora attention dimension (the "rank").
- **target_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen (if
  the model is a PreTrainedModel, the output layer is excluded). If this is not specified, modules will be
  chosen according to the model architecture. If the architecture is not known, an error will be raised -- in
  this case, you should specify the target modules manually. To avoid targeting any modules (because you want
  to apply `target_parameters`), set `target_modules=[]`.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to which the adapter should not be applied. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **lora_alpha** (`int`) --
  The alpha parameter for Lora scaling.
- **lora_dropout** (`float`) --
  The dropout probability for Lora layers.
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses
  `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
- **bias** (`str`) --
  Bias type for LoRA. Can be 'none', 'all' or 'lora_only'. If 'all' or 'lora_only', the corresponding biases
  will be updated during training. Be aware that this means that, even when disabling the adapters, the model
  will not produce the same output as the base model would have without adaptation.
- **use_rslora** (`bool`) --
  When set to True, uses [Rank-Stabilized LoRA](https://huggingface.co/papers/2312.03732) which sets the
  adapter scaling factor to `lora_alpha/math.sqrt(r)`, since it was proven to work better. Otherwise, it will
  use the original default value of `lora_alpha/r`.
- **modules_to_save** (`List[str]`) --
  List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.
- **init_lora_weights** (`bool` | `Literal["gaussian", "eva", "olora", "pissa", "pissa_niter_[number of iters]", "corda", "loftq", "orthogonal"]`) --
  How to initialize the weights of the adapter layers. Passing True (default) results in the default
  initialization from the reference implementation from Microsoft, with the LoRA B weight being set to 0.
  This means that without further training, the LoRA adapter will be a no-op. Setting the initialization to
  False leads to random initialization of LoRA A and B, meaning that LoRA is not a no-op before training;
  this setting is intended for debugging purposes. Passing 'gaussian' results in Gaussian initialization
  scaled by the LoRA rank for linear and layers. Pass `'loftq'` to use LoftQ initialization. Passing `'eva'`
  results in a data-driven initialization of <a href='https://huggingface.co/papers/2410.07170' >Explained
  Variance Adaptation</a>. EVA initializes LoRA based on the SVD of layer input activations and achieves SOTA
  performance due to its ability to adapt to the finetuning data. Pass `'olora'` to use OLoRA initialization.
  Passing `'pissa'` results in the initialization of <a href='https://huggingface.co/papers/2404.02948'
  >Principal Singular values and Singular vectors Adaptation (PiSSA)</a>, which converges more rapidly than
  LoRA and ultimately achieves superior performance. Moreover, PiSSA reduces the quantization error compared
  to QLoRA, leading to further enhancements. Passing `'pissa_niter_[number of iters]'` initiates
  Fast-SVD-based PiSSA initialization, where `[number of iters]` indicates the number of subspace iterations
  to perform FSVD, and must be a nonnegative integer. When `[number of iters]` is set to 16, it can complete
  the initialization of a 7B model within seconds, and the training effect is approximately equivalent to
  using SVD. Passing `'corda'` results in the initialization of <a
  href='https://huggingface.co/papers/2406.05223' >Context-Oriented Decomposition Adaptation</a>, which
  converges even more rapidly than PiSSA in Instruction-Previewed Mode, and preserves world knowledge better
  than LoRA in Knowledge-Preserved Mode. Passing `"orthogonal"` results in LoRA A and B being initialized
  orthogonally; in this, it resembles `"olora"`, but the base weights are left untouched (requires `r` to be
  even, only supported for linear layers for now).
- **layers_to_transform** (`Union[List[int], int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
- **rank_pattern** (`dict`) --
  The mapping from layer names or regexp expression to ranks which are different from the default rank
  specified by `r`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **alpha_pattern** (`dict`) --
  The mapping from layer names or regexp expression to alphas which are different from the default alpha
  specified by `lora_alpha`. For example, `{'^model.decoder.layers.0.encoder_attn.k_proj': 16}`.
- **megatron_config** (`Optional[dict]`) --
  The TransformerConfig arguments for Megatron. It is used to create LoRA's parallel linear layer. You can
  get it like this, `core_transformer_config_from_args(get_args())`, these two functions being from Megatron.
  The arguments will be used to initialize the TransformerConfig of Megatron. You need to specify this
  parameter when you want to apply LoRA to the ColumnParallelLinear and RowParallelLinear layers of megatron.
- **megatron_core** (`Optional[str]`) --
  The core module from Megatron to use, defaults to `"megatron.core"`.
- **trainable_token_indices** (`Optional[Union[List[int], dict[str, List[int]]]]`) --
  Lets you specify which token indices to selectively fine-tune without requiring to re-train the whole
  embedding matrix using the `peft.TrainableTokensModel` method. You can specify token indices in two ways.
  Either you specify a list of indices which will then target the model's input embedding layer (or, if not
  found, `embed_tokens`). Alternatively, you can specify a dictionary where the key is the name of the
  embedding module and the values are the list of token indices, e.g. `{'embed_tokens': [0, 1, ...]}`. Note
  that training with FSDP requires `use_orig_params=True` to avoid issues with non-uniform `requires_grad`.
- **loftq_config** (`Optional[LoftQConfig]`) --
  The configuration of LoftQ. If this is not None, then LoftQ will be used to quantize the backbone weights
  and initialize Lora layers. Also pass `init_lora_weights='loftq'`. Note that you should not pass a
  quantized model in this case, as LoftQ will quantize the model itself.
- **eva_config** (`Optional[EvaConfig]`) --
  The configuration of EVA. At a minimum the dataset argument needs to be set (use the same dataset as for
  finetuning).
- **corda_config** (`Optional[CordaConfig]`) --
  The configuration of CorDA. If this is not None, then CorDA will be used to build the adapter layers. Also
  pass `init_lora_weights='corda'`.
- **use_dora** (`bool`) --
  Enable 'Weight-Decomposed Low-Rank Adaptation' (DoRA). This technique decomposes the updates of the weights
  into two parts, magnitude and direction. Direction is handled by normal LoRA, whereas the magnitude is
  handled by a separate learnable parameter. This can improve the performance of LoRA especially at low
  ranks. Right now, DoRA only supports linear and Conv2D layers. DoRA introduces a bigger overhead than pure
  LoRA, so it is recommended to merge weights for inference. For more information, see
  https://huggingface.co/papers/2402.09353.
- **alora_invocation_tokens** (`List[int]`) --
  If not None, enable <a href='https://huggingface.co/papers/2504.12397'>'Activated LoRA' (aLoRA)</a>, with
  alora_invocation_tokens being the tokenized invocation string for the adapter (must be present in all model
  input strings). This technique selectively activates the adapter weights only on tokens during and after
  the alora_invocation_tokens. When used in a CausalLM, this means that the KV cache prior to invocation is
  interchangeable with that of the base model (and other aLoRA adapters operating this way). As a result, in
  inference pipelines involving switching between base model inference and adapter inference (e.g. agentic
  pipelines, see paper for examples), significant savings are realized (relative to LoRA) by saving prefill
  operations. Overall adapter inference speedups of an order of magnitude or more can occur on vLLM,
  depending on the length of the shared context. Note that merging is not possible due to the selective
  application of the weights.
- **layer_replication** (`List[Tuple[int, int]]`) --
  Build a new stack of layers by stacking the original model layers according to the ranges specified. This
  allows expanding (or shrinking) the model without duplicating the base model weights. The new layers will
  all have separate LoRA adapters attached to them.
- **runtime_config** (`LoraRuntimeConfig`) --
  Runtime configurations (which are not saved or restored).
- **lora_bias** (`bool`) --
  Defaults to `False`. Whether to enable the bias term for the LoRA B parameter. Typically, this should be
  disabled. The main use case for this is when the LoRA weights were extracted from fully fine-tuned
  parameters so the bias of those parameters can be taken into account.
- **target_parameters** (`List[str]`, *optional*) --
  List of parameter names or regex expression of the parameter names to replace with LoRA. This argument
  behaves similarly to `target_modules`, except that the parameter name should be passed. Generally, you
  should use `target_modules` to target the module (e.g. `nn.Linear`). However, in some circumstances, this
  is not possible. E.g., in many mixture of expert (MoE) layers in HF Transformers, instead of using
  `nn.Linear`, an `nn.Parameter` is used. PEFT normally overwrites the `forward` method for LoRA, but for
  `nn.Parameter`, there is none. Therefore, to apply LoRA to that parameter, it needs to be targeted with
  `target_parameters`. As an example, for Llama4, you can pass:
  `target_parameters=['feed_forward.experts.gate_up_proj', 'feed_forward.experts.down_proj']`. Passing a
  string for regex matching is not implemented yet.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [LoraModel](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraModel).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>peft.LoraConfig.to_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/config.py#L678</source><parameters>[]</parameters></docstring>

Returns the configuration for your adapter model as a dictionary. Removes runtime configurations.


</div></div>

## LoraModel[[peft.LoraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LoraModel</name><anchor>peft.LoraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/model.py#L68</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig)) -- The configuration of the Lora model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The Lora model.</retdesc></docstring>

Creates a Low Rank Adapter (LoRA) model from a pretrained transformers model.

The method is described in detail in https://huggingface.co/papers/2106.09685.







<ExampleCodeBlock anchor="peft.LoraModel.example">

Example:

```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraModel, LoraConfig

>>> config = LoraConfig(
...     task_type="SEQ_2_SEQ_LM",
...     r=8,
...     lora_alpha=32,
...     target_modules=["q", "v"],
...     lora_dropout=0.01,
... )

>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> lora_model = LoraModel(model, config, "default")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="peft.LoraModel.example-2">

```py
>>> import torch
>>> import transformers
>>> from peft import LoraConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training

>>> rank = ...
>>> target_modules = ["q_proj", "k_proj", "v_proj", "out_proj", "fc_in", "fc_out", "wte"]
>>> config = LoraConfig(
...     r=4, lora_alpha=16, target_modules=target_modules, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
... )
>>> quantization_config = transformers.BitsAndBytesConfig(load_in_8bit=True)

>>> tokenizer = transformers.AutoTokenizer.from_pretrained(
...     "kakaobrain/kogpt",
...     revision="KoGPT6B-ryan1.5b-float16",  # or float32 version: revision=KoGPT6B-ryan1.5b
...     bos_token="[BOS]",
...     eos_token="[EOS]",
...     unk_token="[UNK]",
...     pad_token="[PAD]",
...     mask_token="[MASK]",
... )
>>> model = transformers.GPTJForCausalLM.from_pretrained(
...     "kakaobrain/kogpt",
...     revision="KoGPT6B-ryan1.5b-float16",  # or float32 version: revision=KoGPT6B-ryan1.5b
...     pad_token_id=tokenizer.eos_token_id,
...     use_cache=False,
...     device_map={"": rank},
...     torch_dtype=torch.float16,
...     quantization_config=quantization_config,
... )
>>> model = prepare_model_for_kbit_training(model)
>>> lora_model = get_peft_model(model, config)
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig)): The configuration of the Lora model.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_weighted_adapter</name><anchor>peft.LoraModel.add_weighted_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/model.py#L519</source><parameters>[{"name": "adapters", "val": ": list[str]"}, {"name": "weights", "val": ": list[float]"}, {"name": "adapter_name", "val": ": str"}, {"name": "combination_type", "val": ": str = 'svd'"}, {"name": "svd_rank", "val": ": int | None = None"}, {"name": "svd_clamp", "val": ": int | None = None"}, {"name": "svd_full_matrices", "val": ": bool = True"}, {"name": "svd_driver", "val": ": str | None = None"}, {"name": "density", "val": ": float | None = None"}, {"name": "majority_sign_method", "val": ": Literal['total', 'frequency'] = 'total'"}]</parameters><paramsdesc>- **adapters** (`list`) --
  List of adapter names to be merged.
- **weights** (`list`) --
  List of weights for each adapter. Weights can be positive or negative, allowing for both addition and
  subtraction of adapter effects.
- **adapter_name** (`str`) --
  Name of the new adapter.
- **combination_type** (`str`) --
  The merging type can be one of [`svd`, `linear`, `cat`, `ties`, `ties_svd`, `dare_ties`, `dare_linear`,
  `dare_ties_svd`, `dare_linear_svd`, `magnitude_prune`, `magnitude_prune_svd`]. When using the `cat`
  combination_type, the rank of the resulting adapter is equal to the sum of all adapters' ranks (the
  mixed adapter may be too big and result in OOM errors).
- **svd_rank** (`int`, *optional*) --
  Rank of output adapter for svd. If None provided, will use max rank of merging adapters.
- **svd_clamp** (`float`, *optional*) --
  A quantile threshold for clamping SVD decomposition output. If None is provided, do not perform
  clamping. Defaults to None.
- **svd_full_matrices** (`bool`, *optional*) --
  Controls whether to compute the full or reduced SVD, and consequently, the shape of the returned
  tensors U and Vh. Defaults to True.
- **svd_driver** (`str`, *optional*) --
  Name of the cuSOLVER method to be used. This keyword argument only works when merging on CUDA. Can be
  one of [None, `gesvd`, `gesvdj`, `gesvda`]. For more info please refer to `torch.linalg.svd`
  documentation. Defaults to None.
- **density** (`float`, *optional*) --
  Value between 0 and 1. 0 means all values are pruned and 1 means no values are pruned. Should be used
  with [`ties`, `ties_svd`, `dare_ties`, `dare_linear`, `dare_ties_svd`, `dare_linear_svd`,
  `magnitude_prune`, `magnitude_prune_svd`].
- **majority_sign_method** (`str`) --
  The method, should be one of ["total", "frequency"], to use to get the magnitude of the sign values.
  Should be used with [`ties`, `ties_svd`, `dare_ties`, `dare_ties_svd`]</paramsdesc><paramgroups>0</paramgroups></docstring>

This method adds a new adapter by merging the given adapters with the given weights.

When using the `cat` combination_type, you should be aware that the rank of the resulting adapter will be equal
to the sum of all adapters' ranks. So it's possible that the mixed adapter may become too big and result in OOM
errors.
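
A brief sketch, assuming `peft_model` already holds two LoRA adapters named `"adapter_a"` and `"adapter_b"`:

```py
# create a new adapter as a weighted TIES combination of two existing adapters
peft_model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],
    weights=[0.7, 0.3],
    adapter_name="merged",
    combination_type="ties",
    density=0.5,
)
peft_model.set_adapter("merged")
```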




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>subtract_mutated_init</name><anchor>peft.LoraModel.subtract_mutated_init</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/model.py#L771</source><parameters>[{"name": "output_state_dict", "val": ": dict[str, torch.Tensor]"}, {"name": "adapter_name", "val": ": str"}, {"name": "kwargs", "val": " = None"}]</parameters></docstring>

This function can calculate the updates of the PiSSA/CorDA/OLoRA by comparing the parameters of the
PiSSA/CorDA/OLoRA adapter in `output_state_dict` with the initial values of PiSSA/CorDA/OLoRA in
`adapter_name`, thus converting PiSSA/CorDA/OLoRA to LoRA.


</div></div>

## Utility

### ArrowConfig[[peft.ArrowConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.ArrowConfig</name><anchor>peft.ArrowConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/config.py#L73</source><parameters>[{"name": "top_k", "val": ": int = 3"}, {"name": "router_temperature", "val": ": float = 1.0"}, {"name": "use_gks", "val": ": bool = False"}, {"name": "rng_seed", "val": ": Optional[int] = None"}]</parameters></docstring>

This is the sub-configuration class to store the configuration for the Arrow and GenKnowSub algorithms. Arrow is a
routing algorithm to combine the trained LoRA modules to solve new tasks, proposed in
'https://arxiv.org/pdf/2405.11157'. GenKnowSub is a refinement on the trained modules before they are combined via
Arrow, introduced in 'https://aclanthology.org/2025.acl-short.54/'.
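
A minimal sketch of constructing the sub-config and attaching it to a `LoraConfig` via its `arrow_config` field (the target module names are placeholders):

```py
from peft import ArrowConfig, LoraConfig

arrow_config = ArrowConfig(top_k=3, router_temperature=1.0, use_gks=True)
lora_config = LoraConfig(r=8, target_modules=["q_proj", "v_proj"], arrow_config=arrow_config)
```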


</div>

### LoftQ[[peft.replace_lora_weights_loftq]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.replace_lora_weights_loftq</name><anchor>peft.replace_lora_weights_loftq</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/utils/loftq_utils.py#L330</source><parameters>[{"name": "peft_model", "val": ""}, {"name": "model_path", "val": ": Optional[str] = None"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "callback", "val": ": Optional[Callable[[torch.nn.Module, str], bool]] = None"}]</parameters><paramsdesc>- **peft_model** (`PeftModel`) --
  The model to replace the weights of. Must be a quantized PEFT model with LoRA layers.
- **model_path** (`Optional[str]`) --
  The path to the model safetensors file. If the model is a Hugging Face model, this will be inferred from
  the model's config. Otherwise, it must be provided.
- **adapter_name** (`str`) --
  The name of the adapter to replace the weights of. The default adapter name is "default".
- **callback** (`Optional[Callable[[PeftModel, str], bool]]`) --
  A callback function that will be called after each module is replaced. The callback function should take
  the model and the name of the current module as input and return a boolean indicating whether the
  replacement should be kept. If the callback returns False, the replacement will be rolled back. This can be
  very useful to confirm that the LoftQ initialization actually decreases the quantization error of the
  model. As an example, this callback could generate logits for a given input and compare them with the logits
  from the original, non-quantized model with the same input, and only return `True` if there is an
  improvement. As this is a greedy optimization, it's possible that calling this function multiple times
  yields incremental improvements.</paramsdesc><paramgroups>0</paramgroups></docstring>

Replace the LoRA weights of a model quantized with bitsandbytes, using the LoftQ technique.

The replacement is done on the fly by loading in the non-quantized weights from a locally stored safetensors model
file and initializing the LoRA weights such that the quantization error between the original and quantized weights
is minimized.

As lazy loading is not possible with pickle, normal PyTorch checkpoint files cannot be supported.

Depending on the model size, calling this function may take some time to finish.
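
A sketch of using the callback to keep only replacements that reduce the error against reference logits. Here, `peft_model` (a bitsandbytes-quantized PEFT model with LoRA layers), `inputs` (a reference batch), and `logits_base` (logits of the non-quantized model on that batch) are assumed to exist:

```py
from peft import replace_lora_weights_loftq

current_mse = float("inf")

def loftq_callback(model, module_name):
    """Keep the replacement for this module only if it improves the match with the reference logits."""
    global current_mse
    logits = model(**inputs).logits
    mse = ((logits_base - logits) ** 2).float().mean()
    if mse < current_mse:
        current_mse = mse
        return True   # keep the LoftQ re-initialization for this module
    return False      # roll the replacement back

replace_lora_weights_loftq(peft_model, callback=loftq_callback)
```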




</div>

### Eva

#### EvaConfig[[peft.EvaConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.EvaConfig</name><anchor>peft.EvaConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/config.py#L123</source><parameters>[{"name": "rho", "val": ": float = 2.0"}, {"name": "tau", "val": ": float = 0.99"}, {"name": "use_label_mask", "val": ": bool = True"}, {"name": "label_mask_value", "val": ": int = -100"}, {"name": "whiten", "val": ": bool = False"}, {"name": "adjust_scaling_factors", "val": ": bool = True"}]</parameters><paramsdesc>- **rho** (`float`) --
  Rho value for EVA redistribution (>= 1.0). The maximum rank for a layer is lora_r * rho. Default is 2.0,
  meaning the maximum rank allowed for a layer is 2r. Increasing rho will allow for a higher degree of
  redistribution of ranks across layers. Some pre-trained models might be more sensitive to a rank
  redistribution. It can therefore be beneficial to try rho=1.0 (no redistribution) if the performance is
  lower than expected.
- **tau** (`float`) --
  Cosine similarity threshold for early stopping. Compares the cosine similarity of right-singular vectors
  between two consecutive SVD steps. If the cosine similarity is above this threshold, the SVD iteration is
  stopped. Default is 0.99.
- **use_label_mask** (`bool`) --
  Use label mask for EVA initialization. This means that positions where labels=label_mask_value are ignored
  for the SVD computation. Setting use_label_mask=True is preferred in most cases and can be especially
  beneficial for multi-turn conversations. The default value is True. Filtering out items based on the label
  mask can sometimes lead to a small batch size and as a result instabilities in the SVD computation. For
  cases where a large share of batch items would be filtered out, set use_label_mask=False.
- **label_mask_value** (`int`) --
  If use_label_mask=True the value to look for to mask out ignored tokens. Default is -100.
- **whiten** (`bool`) -- Apply whitening to singular vectors. Default is False.
  Whitening has been shown to be beneficial for EVA in the vision domain.
- **adjust_scaling_factors** (`bool`) --
  Adjust LoRA scaling factors after the rank redistribution. Setting this to True means the scaling factors
  are adjusted so that all LoRA gradients have the same scale regardless of their rank. Default is True.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the sub-configuration class to store the configuration for a data-driven initialization via EVA. EVA was
introduced in <a href='https://huggingface.co/papers/2410.07170'>Explained Variance Adaptation</a>.
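
A minimal sketch of pairing the sub-config with `init_lora_weights="eva"` on a `LoraConfig` (the target module names are placeholders):

```py
from peft import EvaConfig, LoraConfig

eva_config = EvaConfig(rho=2.0)
lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    init_lora_weights="eva",
    eva_config=eva_config,
)
```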




</div>

#### initialize_lora_eva_weights[[peft.initialize_lora_eva_weights]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.initialize_lora_eva_weights</name><anchor>peft.initialize_lora_eva_weights</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/eva.py#L659</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "dataloader", "val": ": typing.Optional[collections.abc.Iterable] = None"}, {"name": "eva_state_dict", "val": ": typing.Optional[dict] = None"}, {"name": "forward_fn", "val": ": typing.Optional[<built-in function callable>] = <function forward_fn_dict at 0x7fd1ec61e9e0>"}, {"name": "prepare_model_inputs_fn", "val": ": typing.Optional[<built-in function callable>] = <function prepare_model_inputs_fn_language_modeling at 0x7fd1ec61e8c0>"}, {"name": "prepare_layer_inputs_fn", "val": ": typing.Union[<built-in function callable>, dict[str, callable], NoneType] = <function prepare_layer_inputs_fn_language_modeling at 0x7fd1ec61e950>"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "gather_distributed_inputs", "val": ": bool = True"}, {"name": "show_progress_bar", "val": ": bool = True"}]</parameters><paramsdesc>- **model** (PeftModel) -- The peft model to compute the SVD for.
- **dataloader** (Optional[Iterable]) --
  The dataloader to use for the forward pass. If None, eva_state_dict needs to be provided.
- **eva_state_dict** (Optional[dict]) --
  The state_dict to load into the model. If None, a dataloader needs to be provided and the state_dict will
  be computed using `get_eva_state_dict`.
- **forward_fn** (callable) --
  The forward function to use for the forward pass. Takes two arguments: `model` and `inputs`. Default
  behavior is `return model(**inputs)`
- **prepare_model_inputs_fn** (Optional[callable]) --
  This function receives the model inputs and the peft_config and passes the output to
  `prepare_layer_inputs_fn`. Can be used to modify the input to the SVD computation based on the original
  model inputs. For example for language modeling the attention mask is used to determine which indices are
  padding tokens and should not be used for SVD. Any function defined here expects two arguments:
  `model_input` and `peft_config`. `peft.tuners.lora.eva.prepare_model_inputs_fn_language_modeling` is used
  by default.
- **prepare_layer_inputs_fn** (Union[callable, Dict[str, callable], None]) --
  This function receives the layer inputs, the model inputs (potentially modified by
  `prepare_model_inputs_fn`) and the name of the layer and returns the inputs that should be used for SVD for
  that particular layer. Any custom function defined here expects three arguments: `layer_input`,
  `model_input`, and `layer_name` and should return a 2d tensor. The default logic can be found in
  peft.tuners.lora.eva.prepare_layer_inputs_fn_language_modeling and works for language modeling. In this
  case model_inputs is the mask used to determine which indices should be used for SVD (created by
  `prepare_model_inputs_fn_language_modeling`).
- **adapter_name** (str) -- The name of the adapter to initialize the weights for.
- **gather_distributed_inputs** (bool) --
  Whether to gather the layer inputs from all ranks. Default is True meaning in a distributed setting the
  layer inputs will be gathered from all ranks for the SVD computation. For non-distributed settings this
  argument is ignored. Set to False if you are using a non-distributed dataloader in a distributed setting.
- **show_progress_bar** (bool) -- Whether to show a progress bar. Default is True.</paramsdesc><paramgroups>0</paramgroups><rettype>model (torch.nn.Module)</rettype><retdesc>The model with the initialized LoRA weights.</retdesc></docstring>

Initialize the weights of the LoRA layers using the EVA method.

This function initializes the weights of the LoRA layers using the EVA method. It computes the SVD for each adapter
layer and updates the weights accordingly.
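A minimal sketch of the typical flow, assuming `EvaConfig` is importable from `peft` alongside `LoraConfig` and that `dataloader` yields dicts of model inputs (e.g. `input_ids` / `attention_mask`), as expected by the default `forward_fn`:

```python
from transformers import AutoModelForCausalLM
from peft import EvaConfig, LoraConfig, get_peft_model, initialize_lora_eva_weights

# Sketch only: the dataloader is assumed to be defined elsewhere.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_config = LoraConfig(
    r=16,
    init_lora_weights="eva",
    eva_config=EvaConfig(),
    target_modules=["c_attn"],
)
peft_model = get_peft_model(base_model, peft_config)

# Runs forward passes over `dataloader` to compute the SVD-based initialization.
initialize_lora_eva_weights(peft_model, dataloader=dataloader)
```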








</div>

#### get_eva_state_dict[[peft.get_eva_state_dict]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.get_eva_state_dict</name><anchor>peft.get_eva_state_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lora/eva.py#L561</source><parameters>[{"name": "model", "val": ": Module"}, {"name": "dataloader", "val": ": Iterable"}, {"name": "peft_config", "val": ": typing.Optional[peft.tuners.lora.config.LoraConfig] = None"}, {"name": "forward_fn", "val": ": typing.Optional[<built-in function callable>] = <function forward_fn_dict at 0x7fd1ec61e9e0>"}, {"name": "prepare_model_inputs_fn", "val": ": typing.Optional[<built-in function callable>] = <function prepare_model_inputs_fn_language_modeling at 0x7fd1ec61e8c0>"}, {"name": "prepare_layer_inputs_fn", "val": ": typing.Union[<built-in function callable>, dict[str, callable], NoneType] = <function prepare_layer_inputs_fn_language_modeling at 0x7fd1ec61e950>"}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "gather_distributed_inputs", "val": ": bool = True"}, {"name": "show_progress_bar", "val": ": bool = True"}]</parameters><paramsdesc>- **model** (torch.nn.Module) -- The model to compute the SVD for. Does not need to be a PeftModel.
- **dataloader** (Iterable) -- The dataloader to use for the forward pass.
- **peft_config** (Optional[LoraConfig]) --
  The configuration for the LoRA layers. Only required if `model` is not a PeftModel.
- **forward_fn** (callable) --
  The forward function to use for the forward pass. Takes two arguments: `model` and `inputs`. Default
  behavior is `return model(**inputs)`
- **prepare_model_inputs_fn** (Optional[callable]) --
  This function receives the model inputs and the peft_config and passes the output to
  `prepare_layer_inputs_fn`. Can be used to modify the input to the SVD computation based on the original
  model inputs. For example for language modeling the attention mask is used to determine which indices are
  padding tokens and should not be used for SVD. Any function defined here expects two arguments:
  `model_input` and `peft_config`. `peft.tuners.lora.eva.prepare_model_inputs_fn_language_modeling` is used
  by default.
- **prepare_layer_inputs_fn** (Union[callable, Dict[str, callable], None]) --
  This function receives the layer inputs, the model inputs (potentially modified by
  `prepare_model_inputs_fn`) and the name of the layer and returns the inputs that should be used for SVD for
  that particular layer. Any custom function defined here expects three arguments: `layer_input`,
  `model_input`, and `layer_name` and should return a 2d tensor. The default logic can be found in
  peft.tuners.lora.eva.prepare_layer_inputs_fn_language_modeling and works for language modeling. In this
  case model_inputs is the mask used to determine which indices should be used for SVD (created by
  `prepare_model_inputs_fn_language_modeling`).
- **adapter_name** (str) -- The name of the adapter to compute the SVD for.
- **gather_distributed_inputs** (bool) --
  Whether to gather the layer inputs from all ranks. Default is True meaning in a distributed setting the
  layer inputs will be gathered from all ranks for the SVD computation. For non-distributed settings this
  argument is ignored. Set to False if you are using a non-distributed dataloader in a distributed setting.
- **show_progress_bar** (bool) -- Whether to show a progress bar. Default is True.</paramsdesc><paramgroups>0</paramgroups><rettype>eva_state_dict (dict)</rettype><retdesc>The state dictionary containing the SVD components for each layer.</retdesc></docstring>

Compute the SVD for each layer in the model.

This function computes the Singular Value Decomposition (SVD) for each layer in the model. It uses the incremental
PCA method to compute the SVD components. The function also checks for convergence of the computed components using
cosine similarity. The rank distribution for each layer is determined based on the explained variance ratio.
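When the state dict should be computed separately from loading it (for example on a plain, non-PEFT model), a hedged sketch of the two-step flow looks like this; `base_model`, `peft_config`, `peft_model`, and `dataloader` are assumed to be set up as in the sketch above:

```python
from peft import get_eva_state_dict, initialize_lora_eva_weights

# Compute the EVA state dict on the (non-PEFT) base model first...
eva_state_dict = get_eva_state_dict(base_model, dataloader=dataloader, peft_config=peft_config)

# ...then load it into the LoRA-wrapped model without running another forward pass.
initialize_lora_eva_weights(peft_model, eva_state_dict=eva_state_dict)
```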








</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/lora.md" />

### LyCORIS
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/adapter_utils.md

# LyCORIS

[LyCORIS](https://hf.co/papers/2309.14859) (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) are LoRA-like matrix decomposition adapters that modify the cross-attention layer of the UNet. The [LoHa](loha) and [LoKr](lokr) methods inherit from the `Lycoris` classes here.

## LycorisConfig[[peft.tuners.lycoris_utils.LycorisConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.tuners.lycoris_utils.LycorisConfig</name><anchor>peft.tuners.lycoris_utils.LycorisConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lycoris_utils.py#L35</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "rank_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "alpha_pattern", "val": ": Optional[dict] = <factory>"}]</parameters></docstring>

A base config for LyCORIS-like adapters


</div>

## LycorisLayer[[peft.tuners.lycoris_utils.LycorisLayer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.tuners.lycoris_utils.LycorisLayer</name><anchor>peft.tuners.lycoris_utils.LycorisLayer</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lycoris_utils.py#L60</source><parameters>[{"name": "base_layer", "val": ": nn.Module"}]</parameters></docstring>

A base layer for LyCORIS-like adapters



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>merge</name><anchor>peft.tuners.lycoris_utils.LycorisLayer.merge</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lycoris_utils.py#L114</source><parameters>[{"name": "safe_merge", "val": ": bool = False"}, {"name": "adapter_names", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **safe_merge** (`bool`, *optional*) --
  If `True`, the merge operation will be performed in a copy of the original weights and check for NaNs
  before merging the weights. This is useful if you want to check if the merge operation will produce
  NaNs. Defaults to `False`.
- **adapter_names** (`List[str]`, *optional*) --
  The list of adapter names that should be merged. If `None`, all active adapters will be merged.
  Defaults to `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Merge the active adapter weights into the base weights




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unmerge</name><anchor>peft.tuners.lycoris_utils.LycorisLayer.unmerge</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lycoris_utils.py#L168</source><parameters>[]</parameters></docstring>

This method unmerges all merged adapter layers from the base weights.


</div></div>

## LycorisTuner[[peft.tuners.lycoris_utils.LycorisTuner]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.tuners.lycoris_utils.LycorisTuner</name><anchor>peft.tuners.lycoris_utils.LycorisTuner</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/lycoris_utils.py#L194</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig)) -- The configuration of the Lora model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups></docstring>

A base tuner for LyCORIS-like adapters




</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/adapter_utils.md" />

### AdaLoRA
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/adalora.md

# AdaLoRA

[AdaLoRA](https://hf.co/papers/2303.10512) is a method for optimizing the number of trainable parameters to assign to weight matrices and layers, unlike LoRA, which distributes parameters evenly across all modules. More parameters are budgeted for important weight matrices and layers while less important ones receive fewer parameters.

The abstract from the paper is:

*Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP. However, common practice fine-tunes all of the parameters in a pre-trained model, which becomes prohibitive when a large number of downstream tasks are present. Therefore, many fine-tuning methods are proposed to learn incremental updates of pre-trained weights in a parameter efficient way, e.g., low-rank increments. These methods often evenly distribute the budget of incremental updates across all pre-trained weight matrices, and overlook the varying importance of different weight parameters. As a consequence, the fine-tuning performance is suboptimal. To bridge this gap, we propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score. In particular, AdaLoRA parameterizes the incremental updates in the form of singular value decomposition. Such a novel approach allows us to effectively prune the singular values of unimportant updates, which is essentially to reduce their parameter budget but circumvent intensive exact SVD computations. We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA. Results demonstrate that AdaLoRA manifests notable improvement over baselines, especially in the low budget settings. Our code is publicly available at https://github.com/QingruZhang/AdaLoRA*.

## AdaLoraConfig[[peft.AdaLoraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AdaLoraConfig</name><anchor>peft.AdaLoraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adalora/config.py#L24</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 8"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "lora_alpha", "val": ": int = 8"}, {"name": "lora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "bias", "val": ": Literal['none', 'all', 'lora_only'] = 'none'"}, {"name": "use_rslora", "val": ": bool = False"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_lora_weights", "val": ": bool | Literal['gaussian', 'eva', 'olora', 'pissa', 'pissa_niter_[number of iters]', 'corda', 'loftq', 'orthogonal'] = True"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "rank_pattern", "val": ": typing.Optional[dict] = None"}, {"name": "alpha_pattern", "val": ": Optional[dict] = <factory>"}, {"name": "megatron_config", "val": ": Optional[dict] = None"}, {"name": "megatron_core", "val": ": Optional[str] = 'megatron.core'"}, {"name": "trainable_token_indices", "val": ": Optional[Union[list[int], dict[str, list[int]]]] = None"}, {"name": "loftq_config", "val": ": Union[LoftQConfig, dict] = <factory>"}, {"name": "eva_config", "val": ": Optional[EvaConfig] = None"}, {"name": "corda_config", "val": ": Optional[CordaConfig] = None"}, {"name": "use_dora", "val": ": bool = False"}, {"name": "alora_invocation_tokens", "val": ": Optional[list[int]] = None"}, {"name": "use_qalora", "val": ": bool = False"}, {"name": "qalora_group_size", "val": ": int = 16"}, {"name": "layer_replication", "val": ": Optional[list[tuple[int, int]]] = None"}, {"name": "runtime_config", "val": ": LoraRuntimeConfig = <factory>"}, {"name": "lora_bias", "val": ": bool = False"}, {"name": "target_parameters", "val": ": Optional[list[str]] = None"}, {"name": "arrow_config", "val": ": Optional[ArrowConfig] = None"}, {"name": "ensure_weight_tying", "val": ": bool = False"}, {"name": "target_r", "val": ": int = 8"}, {"name": "init_r", "val": ": int = 12"}, {"name": "tinit", "val": ": int = 0"}, {"name": "tfinal", "val": ": int = 0"}, {"name": "deltaT", "val": ": int = 1"}, {"name": "beta1", "val": ": float = 0.85"}, {"name": "beta2", "val": ": float = 0.85"}, {"name": "orth_reg_weight", "val": ": float = 0.5"}, {"name": "total_step", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **target_r** (`int`) -- The target average rank of incremental matrix.
- **init_r** (`int`) -- The initial rank for each incremental matrix.
- **tinit** (`int`) -- The steps of initial fine-tuning warmup.
- **tfinal** (`int`) -- The number of steps of final fine-tuning.
- **deltaT** (`int`) -- The time interval between two budget allocations.
- **beta1** (`float`) -- The hyperparameter of EMA for sensitivity smoothing.
- **beta2** (`float`) -- The hyperparameter of EMA for uncertainty quantification.
- **orth_reg_weight** (`float`) -- The coefficient of orthogonal regularization.
- **total_step** (`int`) -- The total training steps that should be specified before training.
- **rank_pattern** (`list`) -- The allocated rank for each weight matrix by RankAllocator.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of an `AdaLoraModel`.

AdaLoRA has three phases defined by `tinit`, `tfinal` and `total_step`.

The initial phase can be understood as a step for pre-training the adapters so that when reducing their rank, there
is already some information encoded that can be reduced instead of random matrices. This phase is defined by
supplying `tinit`.

After the initial phase is over (`tinit` steps have passed) and the final phase has not begun, AdaLoRA reduces the
budget of how much rank each layer is allowed to have with each step. This is where the reduction of rank is
happening. This goes on until `total_step - tfinal` steps are reached.

The last phase, beginning once `total_step - tfinal` steps are reached, does not change the layer ranks anymore but
fine-tunes the reduced-rank layers that resulted from the previous phase.

A practical example: `tinit` is 10, `tfinal` is 20, `total_step` is 100. We spend 10 steps doing pre-training
without rank reduction because our budget is constant (init phase), then we spend 70 (100 - 20 - 10) steps in the
reduction phase where our budget decreases step-wise and, finally, 20 steps in the final fine-tuning stage without
reduction.
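Expressed as a config, the schedule above could look like the following sketch (other arguments omitted):

```python
from peft import AdaLoraConfig

# 10 warmup steps, rank reduction until step 80 (100 - 20), then 20 final steps.
config = AdaLoraConfig(
    init_r=12,
    target_r=8,
    tinit=10,
    tfinal=20,
    deltaT=1,
    total_step=100,
)
```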




</div>

## AdaLoraModel[[peft.AdaLoraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.AdaLoraModel</name><anchor>peft.AdaLoraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adalora/model.py#L37</source><parameters>[{"name": "model", "val": ""}, {"name": "config", "val": ""}, {"name": "adapter_name", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** ([*transformers.PreTrainedModel*]) -- The model to be adapted.
- **config** ([*AdaLoraConfig*]) -- The configuration of the AdaLora model.
- **adapter_name** (*str*) -- The name of the adapter, defaults to *"default"*.
- **low_cpu_mem_usage** (*bool*, *optional*, defaults to *False*) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>*torch.nn.Module*</rettype><retdesc>The AdaLora model.</retdesc></docstring>

Creates AdaLoRA (Adaptive LoRA) model from a pretrained transformers model. Paper:
https://openreview.net/forum?id=lq62uWRJjiY







<ExampleCodeBlock anchor="peft.AdaLoraModel.example">

Example:

```python
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import AdaLoraConfig, AdaLoraModel

>>> config = AdaLoraConfig(
...     peft_type="ADALORA",
...     task_type="SEQ_2_SEQ_LM",
...     init_r=12,
...     lora_alpha=32,
...     target_modules=["q", "v"],
...     lora_dropout=0.01,
...     total_step=200,  # total number of training steps, required before training
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> model = AdaLoraModel(model, config, "default")
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([*transformers.PreTrainedModel*]) -- The model to be adapted.
- **peft_config** ([*AdaLoraConfig*]): The configuration of the AdaLora model.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_weighted_adapter</name><anchor>peft.AdaLoraModel.add_weighted_adapter</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adalora/model.py#L344</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
This method is not supported for AdaLoRA, use LoRA instead.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_and_allocate</name><anchor>peft.AdaLoraModel.update_and_allocate</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/adalora/model.py#L302</source><parameters>[{"name": "global_step", "val": ""}]</parameters><paramsdesc>- **global_step** (`int`) -- The current training step, it is used to calculate adalora budget.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method updates Adalora budget and mask.

This should be called in every training step after `loss.backward()` and before `zero_grad()`.

`tinit`, `tfinal` and `deltaT` are handled within the method.



<ExampleCodeBlock anchor="peft.AdaLoraModel.update_and_allocate.example">

Example:

```python
>>> loss = model(**input).loss
>>> loss.backward()
>>> optimizer.step()
>>> model.base_model.update_and_allocate(i_step)
>>> optimizer.zero_grad()
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/adalora.md" />

### Configuration
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/config.md

# Configuration

`PeftConfigMixin` is the base configuration class for storing the adapter configuration of a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel), and [PromptLearningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PromptLearningConfig) is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads.

## PeftConfigMixin[[peft.config.PeftConfigMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.config.PeftConfigMixin</name><anchor>peft.config.PeftConfigMixin</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L77</source><parameters>[{"name": "task_type", "val": ": Optional[TaskType] = None"}, {"name": "peft_type", "val": ": Optional[PeftType] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}]</parameters><paramsdesc>- **peft_type** (Union[`~peft.utils.config.PeftType`, `str`]) -- The type of Peft method to use.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all
PEFT adapter models. This class inherits from [PushToHubMixin](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.utils.PushToHubMixin) which contains the methods to
push your model to the Hub. The method `save_pretrained` will save the configuration of your adapter model in a
directory. The method `from_pretrained` will load the configuration of your adapter model from a directory.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>check_kwargs</name><anchor>peft.config.PeftConfigMixin.check_kwargs</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L328</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>
Check kwargs before initializing the config instance.

Subclasses can override this method to add specific checks.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_json_file</name><anchor>peft.config.PeftConfigMixin.from_json_file</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L266</source><parameters>[{"name": "path_json_file", "val": ": str"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **path_json_file** (`str`) --
  The path to the json file.</paramsdesc><paramgroups>0</paramgroups></docstring>

Loads a configuration file from a json file.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_peft_type</name><anchor>peft.config.PeftConfigMixin.from_peft_type</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L165</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **kwargs** (configuration keyword arguments) --
  Keyword arguments passed along to the configuration initialization.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method loads the configuration of your adapter model from a set of kwargs.

The appropriate configuration type is determined by the `peft_type` argument. If `peft_type` is not provided,
the calling class type is instantiated.
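For example, the following sketch (the kwargs are illustrative) dispatches to the LoRA configuration class because of the `peft_type` value:

```python
from peft import PeftConfig

# Dispatches on peft_type: this returns a LoraConfig instance, not a bare PeftConfig.
config = PeftConfig.from_peft_type(peft_type="LORA", r=16, lora_alpha=32)
```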




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>peft.config.PeftConfigMixin.from_pretrained</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L230</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": str"}, {"name": "subfolder", "val": ": Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str`) --
  The directory or the Hub repository id where the configuration is saved.
- **kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments passed along to the child class initialization.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method loads the configuration of your adapter model from a directory.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>peft.config.PeftConfigMixin.save_pretrained</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L132</source><parameters>[{"name": "save_directory", "val": ": str"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str`) --
  The directory where the configuration will be saved.
- **kwargs** (additional keyword arguments, *optional*) --
  Additional keyword arguments passed along to the [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.utils.PushToHubMixin.push_to_hub)
  method.</paramsdesc><paramgroups>0</paramgroups></docstring>

This method saves the configuration of your adapter model in a directory.
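A hedged round-trip sketch with `save_pretrained` and `from_pretrained` (the directory path is illustrative):

```python
from peft import LoraConfig, PeftConfig

config = LoraConfig(r=8, lora_alpha=16)
config.save_pretrained("./my-adapter-config")  # writes adapter_config.json

# Reload later; the concrete subclass (here LoraConfig) is resolved from the saved peft_type.
reloaded = PeftConfig.from_pretrained("./my-adapter-config")
```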




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>peft.config.PeftConfigMixin.to_dict</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L126</source><parameters>[]</parameters></docstring>

Returns the configuration for your adapter model as a dictionary.


</div></div>

## PeftConfig[[peft.PeftConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PeftConfig</name><anchor>peft.PeftConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L351</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}]</parameters><paramsdesc>- **peft_type** (Union[`~peft.utils.config.PeftType`, `str`]) -- The type of Peft method to use.
- **task_type** (Union[`~peft.utils.config.TaskType`, `str`]) -- The type of task to perform.
- **inference_mode** (`bool`, defaults to `False`) -- Whether to use the Peft model in inference mode.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the base configuration class to store the configuration of a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel).




</div>

## PromptLearningConfig[[peft.PromptLearningConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PromptLearningConfig</name><anchor>peft.PromptLearningConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/config.py#L371</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "num_virtual_tokens", "val": ": int = None"}, {"name": "token_dim", "val": ": int = None"}, {"name": "num_transformer_submodules", "val": ": Optional[int] = None"}, {"name": "num_attention_heads", "val": ": Optional[int] = None"}, {"name": "num_layers", "val": ": Optional[int] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **num_virtual_tokens** (`int`) -- The number of virtual tokens to use.
- **token_dim** (`int`) -- The hidden embedding dimension of the base transformer model.
- **num_transformer_submodules** (`int`) -- The number of transformer submodules in the base transformer model.
- **num_attention_heads** (`int`) -- The number of attention heads in the base transformer model.
- **num_layers** (`int`) -- The number of layers in the base transformer model.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the base configuration class to store the configuration of `PrefixTuning`, [PromptEncoder](/docs/peft/v0.18.0.rc0/en/package_reference/p_tuning#peft.PromptEncoder), or
`PromptTuning`.




</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/config.md" />

### Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/cpt.md

# Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods

[CPT](https://huggingface.co/papers/2410.17222) combines In-Context Learning (ICL), Prompt Tuning (PT), and adversarial optimization to improve few-shot learning by refining context embeddings. CPT updates the context tokens by optimizing both the context and the training examples, encapsulating them into a novel loss design that minimizes overfitting, enables more effective optimization, and drives significant improvements in classification tasks.


The abstract from the paper is:

> Large Language Models (LLMs) can perform few-shot learning using either optimization-based approaches or In-Context Learning (ICL). Optimization-based methods often suffer from overfitting, as they require updating a large number of parameters with limited data. In contrast, ICL avoids overfitting but typically underperforms compared to optimization-based methods and is highly sensitive to the selection, order, and format of demonstration examples. To overcome these challenges, we introduce Context-aware Prompt Tuning (CPT), a method inspired by ICL, Prompt Tuning (PT), and adversarial attacks. CPT builds on the ICL strategy of concatenating examples before the input, extending it by incorporating PT-like learning to refine the context embedding through iterative optimization, extracting deeper insights from the training examples. Our approach carefully modifies specific context tokens, considering the unique structure of the examples within the context. In addition to updating the context with PT-like optimization, CPT draws inspiration from adversarial attacks, adjusting the input based on the labels present in the context while preserving the inherent value of the user-provided data. To ensure robustness and stability during optimization, we employ a projected gradient descent algorithm, constraining token embeddings to remain close to their original values and safeguarding the quality of the context. Our method has demonstrated superior accuracy across multiple classification tasks using various LLM models, outperforming existing baselines and effectively addressing the overfitting challenge in few-shot learning.


Take a look at [Example](https://github.com/huggingface/peft/blob/main/examples/cpt_finetuning/README.md) for a step-by-step guide on how to train a model with CPT.


## CPTConfig[[peft.CPTConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.CPTConfig</name><anchor>peft.CPTConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/cpt/config.py#L23</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "num_virtual_tokens", "val": ": int = None"}, {"name": "token_dim", "val": ": int = None"}, {"name": "num_transformer_submodules", "val": ": Optional[int] = None"}, {"name": "num_attention_heads", "val": ": Optional[int] = None"}, {"name": "num_layers", "val": ": Optional[int] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "cpt_token_ids", "val": ": typing.Optional[list[int]] = None"}, {"name": "cpt_mask", "val": ": typing.Optional[list[int]] = None"}, {"name": "cpt_tokens_type_mask", "val": ": typing.Optional[list[int]] = None"}, {"name": "opt_weighted_loss_type", "val": ": typing.Optional[typing.Literal['none', 'decay']] = 'none'"}, {"name": "opt_loss_decay_factor", "val": ": typing.Optional[float] = 1.0"}, {"name": "opt_projection_epsilon", "val": ": typing.Optional[float] = 0.1"}, {"name": "opt_projection_format_epsilon", "val": ": typing.Optional[float] = 0.1"}, {"name": "tokenizer_name_or_path", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

CPT Configuration class extending PeftConfig for Context-aware Prompt Tuning (CPT).

This class introduces additional parameters required for CPT, such as:
- Token type masks
- Prompt tuning initialization
- Loss weighting
- Projection settings

For more details, see the paper: https://huggingface.co/papers/2410.17222
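A rough sketch of constructing a `CPTConfig` is shown below. The token id and mask lists are purely illustrative placeholders (in practice they are produced by tokenizing the few-shot context, as in the linked example), and the loss/projection values simply show the kinds of settings these fields accept:

```python
from peft import CPTConfig

# Illustrative placeholders: in practice these lists come from tokenizing the
# in-context examples; all three describe the same virtual-token sequence.
context_ids = [101, 2023, 2003, 102]

config = CPTConfig(
    cpt_token_ids=context_ids,
    cpt_mask=[1] * len(context_ids),
    cpt_tokens_type_mask=[1] * len(context_ids),
    opt_weighted_loss_type="decay",
    opt_loss_decay_factor=0.95,
    opt_projection_epsilon=0.1,
    opt_projection_format_epsilon=0.1,
)
```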


</div>

## CPTEmbedding[[peft.CPTEmbedding]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.CPTEmbedding</name><anchor>peft.CPTEmbedding</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/cpt/model.py#L23</source><parameters>[{"name": "config", "val": ""}, {"name": "word_embeddings", "val": ""}]</parameters></docstring>

CPTEmbedding is a custom embedding layer designed for Context-aware Prompt Tuning (CPT) in PEFT. It initializes
embeddings, applies prompt-specific projections, and computes loss using label masks.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>calculate_loss</name><anchor>peft.CPTEmbedding.calculate_loss</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/cpt/model.py#L141</source><parameters>[{"name": "base_model_output", "val": ""}, {"name": "labels", "val": ""}, {"name": "cpt_type_mask", "val": ""}, {"name": "config", "val": ""}]</parameters><paramsdesc>- **base_model_output** (ModelOutput) --
  Output from the base model containing logits.
- **labels** (torch.Tensor) --
  Ground-truth labels for the input tokens.
- **cpt_type_mask** (torch.Tensor) --
  Token type mask used for filtering valid loss terms.
- **config** (Namespace) --
  Configuration object containing loss-related hyperparameters.</paramsdesc><paramgroups>0</paramgroups><rettype>ModelOutput</rettype><retdesc>The base model output with computed loss.</retdesc></docstring>

Computes the loss for CPT models with optional exponential decay.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>peft.CPTEmbedding.forward</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/cpt/model.py#L63</source><parameters>[{"name": "indices", "val": ""}]</parameters><paramsdesc>- **indices** (torch.Tensor) --
  Indices of the tokens to be embedded.</paramsdesc><paramgroups>0</paramgroups><rettype>torch.Tensor</rettype><retdesc>Sum of prompt embeddings and delta embeddings.</retdesc></docstring>

Computes the prompt embeddings and applies delta adjustments.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_projection</name><anchor>peft.CPTEmbedding.get_projection</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/cpt/model.py#L123</source><parameters>[]</parameters></docstring>

Applies epsilon-based projection to the delta embeddings to control their norm.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_updated_tokens</name><anchor>peft.CPTEmbedding.set_updated_tokens</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/cpt/model.py#L84</source><parameters>[]</parameters></docstring>

Sets up a backward hook to selectively update token gradients based on the CPT token type mask.


</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/cpt.md" />

### LayerNorm Tuning
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/layernorm_tuning.md

# LayerNorm Tuning

LayerNorm Tuning ([LN Tuning](https://huggingface.co/papers/2312.11420)) is a PEFT method that only fine-tunes the parameters of the LayerNorm layers in a model.
The paper has tested the performance of this method on large language models and has shown that it can achieve strong performance with a significant reduction in the number of trainable parameters and GPU memory usage.
However, the method is not limited to language models and can be applied to any model that uses LayerNorm layers.
In this implementation, all LayerNorm layers in a model are fine-tuned by default, but the method can also be used to target other layer types, such as `MLP` or `Attention` layers, by specifying `target_modules` in the `LNTuningConfig`, as in the sketch below.
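A hedged sketch of targeting additional module types (the module names are illustrative and depend on the architecture):

```python
from peft import LNTuningConfig

# Besides the LayerNorm layers, also make the attention output projection trainable.
# Check `model.named_modules()` for the actual module names in your model.
config = LNTuningConfig(
    task_type="CAUSAL_LM",
    target_modules=["input_layernorm", "post_attention_layernorm", "o_proj"],
)
```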

The abstract from the paper is:

*This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full parameter finetuning or LoRA, its benefits on efficiency are substantial. For example, when compared to LoRA on a 13B model scale, performance can be enhanced by an average of over 20% across five multi-modal tasks, and meanwhile, results in a significant reduction of trainable parameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning only with conversational data can improve efficiency further. Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model.*

## LNTuningConfig[[peft.LNTuningConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LNTuningConfig</name><anchor>peft.LNTuningConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/ln_tuning/config.py#L24</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "modules_to_save", "val": ": Optional[Union[list[str], str]] = None"}]</parameters><paramsdesc>- **target_modules** (*Optional[Union[List[str], str]]*) --
  List of module names or regex expression of the module names to replace with LNTuning. For example,
  '.*decoder.*' or '.*encoder.*'. If this is not specified, modules will be chosen according to the model
  architecture. If the architecture is not known, an error will be raised -- in this case, you should specify
  the target modules manually.
- **exclude_modules** (*Optional[Union[List[str], str]]*) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **modules_to_save** (*Optional[Union[List[str], str]]*) --
  List of modules to be set as trainable and saved in the final checkpoint. For example, in Sequence
  Classification or Token Classification tasks, the final layer *classifier/score* are randomly initialized
  and as such need to be trainable and saved.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [LNTuningModel](/docs/peft/v0.18.0.rc0/en/package_reference/layernorm_tuning#peft.LNTuningModel).




</div>

## LNTuningModel[[peft.LNTuningModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.LNTuningModel</name><anchor>peft.LNTuningModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/ln_tuning/model.py#L28</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** (`torch.nn.Module`) -- The model to be adapted.
- **config** ([LNTuningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/layernorm_tuning#peft.LNTuningConfig)) -- The configuration of the LNTuning model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  This option has no effect on LN tuning but exists for consistency with other PEFT methods.</paramsdesc><paramgroups>0</paramgroups><rettype>'torch.nn.Module'</rettype><retdesc>The adapted model with LayerNorm tuned on.</retdesc></docstring>

Creates LayerNorm tuning from a pretrained transformer model.

The method is described in detail in https://huggingface.co/papers/2312.11420.







<ExampleCodeBlock anchor="peft.LNTuningModel.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import get_peft_model, TaskType, LNTuningConfig

>>> peft_config = LNTuningConfig(
...     task_type=TaskType.CAUSAL_LM,
... )

>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
>>> model = get_peft_model(model, peft_config)
>>> model.print_trainable_parameters()
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([LNTuningConfig](/docs/peft/v0.18.0.rc0/en/package_reference/layernorm_tuning#peft.LNTuningConfig)): The configuration of the LNTuning model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/layernorm_tuning.md" />

### Polytropon
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/poly.md

# Polytropon

[Polytropon](https://hf.co/papers/2202.13914) is a multitask model with a number of different LoRA adapters in its "inventory". The model learns the correct combination of adapters from the inventory with a routing function to choose the best subset of modules for a specific task. PEFT also supports [Multi-Head Adapter Routing (MHR)](https://hf.co/papers/2211.03831) for Polytropon which builds on and improves the routing function by combining the adapter heads more granularly. The adapter heads are separated into disjoint blocks and a different routing function is learned for each one, allowing for more expressivity.

<hfoptions id="paper">
<hfoption id="Combining Modular Skills in Multitask Learning">

The abstract from the paper is:

*A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks. In this work, we assume that each task is associated with a subset of latent discrete skills from a (potentially small) inventory. In turn, skills correspond to parameter-efficient (sparse / low-rank) model parameterisations. By jointly learning these and a task-skill allocation matrix, the network for each task is instantiated as the average of the parameters of active skills. To favour non-trivial soft partitions of skills across tasks, we experiment with a series of inductive biases, such as an Indian Buffet Process prior and a two-speed learning rate. We evaluate our latent-skill model on two main settings: 1) multitask reinforcement learning for grounded instruction following on 8 levels of the BabyAI platform; and 2) few-shot adaptation of pre-trained text-to-text generative models on CrossFit, a benchmark comprising 160 NLP tasks. We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning, compared to baselines with fully shared, task-specific, or conditionally generated parameters where knowledge is entangled across tasks. In addition, we show how discrete skills help interpretability, as they yield an explicit hierarchy of tasks.*

</hfoption>
<hfoption id="Multi-Head Adapter Routing for Cross-Task Generalization">

The abstract from the paper is:

*Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists in pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by only fine-tuning the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits higher gradient alignment between tasks than any other method. Since this implies that routing is only crucial during multi-task pre-training, we propose MHR-mu, which discards routing and fine-tunes the average of the pre-trained adapters during few-shot adaptation. This establishes MHR-mu as an effective method for single-adapter fine-tuning.*.

</hfoption>
</hfoptions>

## PolyConfig[[peft.PolyConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PolyConfig</name><anchor>peft.PolyConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/poly/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 8"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "poly_type", "val": ": Literal['poly'] = 'poly'"}, {"name": "n_tasks", "val": ": int = 1"}, {"name": "n_skills", "val": ": int = 4"}, {"name": "n_splits", "val": ": int = 1"}]</parameters><paramsdesc>- **r** (`int`) -- Attention dimension of each Lora in Poly.
- **target_modules** (`Union[List[str],str]`) -- The names of the modules to apply Poly to.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **modules_to_save** (`List[str]`) -- List of modules apart from Poly layers to be set as trainable
  and saved in the final checkpoint.
- **init_weights** (bool) -- Whether to perform initialization of Poly weights.
- **poly_type** (`Literal["poly"]`) -- The variant of the Poly module to use. Currently, only "poly"
  is supported.
- **n_tasks** (`int`) -- The number of tasks in a multitasking scenario.
- **n_skills** (`int`) -- The number of skills (LoRA) in each Poly layer.
- **n_splits** (`int`) -- The number of splits within each LoRA of a Poly layer. A value greater
  than 1 indicates the use of Multi-Head Routing (MHR).</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [PolyModel](/docs/peft/v0.18.0.rc0/en/package_reference/poly#peft.PolyModel).
- [Polytropon (Poly)](https://huggingface.co/papers/2202.13914)
- [Multi-Head Routing (MHR)](https://huggingface.co/papers/2211.03831)
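A minimal sketch, assuming a seq2seq base model; setting `n_splits > 1` switches the routing to MHR:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PolyConfig, get_peft_model

config = PolyConfig(
    task_type="SEQ_2_SEQ_LM",
    r=8,
    n_tasks=4,   # number of tasks routed over
    n_skills=8,  # LoRA modules in the inventory
    n_splits=4,  # > 1 enables Multi-Head Routing (MHR)
)

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = get_peft_model(model, config)
```

During training, Poly models additionally expect task indices in the forward pass (typically a `task_ids` tensor) so the router knows which task each example belongs to.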




</div>

## PolyModel[[peft.PolyModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.PolyModel</name><anchor>peft.PolyModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/poly/model.py#L28</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/poly.md" />

### OSF (Orthogonal Subspace Fine-tuning)
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/osf.md

# OSF (Orthogonal Subspace Fine-tuning)

Orthogonal Subspace Fine-tuning ([OSF](https://huggingface.co/papers/2504.07097)) is a PEFT method designed for continual learning that constrains parameter updates to be orthogonal to previously important directions. This approach enables full fine-tuning while preventing catastrophic forgetting without requiring additional parameters or storing previous gradients.

The abstract from the paper is:

*Continual learning in large language models (LLMs) is prone to catastrophic forgetting, where adapting to new tasks significantly degrades performance on previously learned ones. Existing methods typically rely on low-rank, parameter-efficient updates that limit the model's expressivity and introduce additional parameters per task, leading to scalability issues. To address these limitations, we propose a novel continual full fine-tuning approach leveraging adaptive singular value decomposition (SVD). Our method dynamically identifies task-specific low-rank parameter subspaces and constrains updates to be orthogonal to critical directions associated with prior tasks, thus effectively minimizing interference without additional parameter overhead or storing previous task gradients. We evaluate our approach extensively on standard continual learning benchmarks using both encoder-decoder (T5-Large) and decoder-only (LLaMA-2 7B) models, spanning diverse tasks including classification, generation, and reasoning. Empirically, our method achieves state-of-the-art results, up to 7% higher average accuracy than recent baselines like O-LoRA, and notably maintains the model's general linguistic capabilities, instruction-following accuracy, and safety throughout the continual learning process by reducing forgetting to near-negligible levels. Our adaptive SVD framework effectively balances model plasticity and knowledge retention, providing a practical, theoretically grounded, and computationally scalable solution for continual learning scenarios in large language models.*

## How OSF Works

OSF decomposes each weight matrix into high-rank (frozen) and low-rank (trainable) components using SVD:

```
W = U_high * S_high * V_high^T + U_low * S_low * V_low^T
```

Where:
- `U_high, S_high, V_high`: Preserve important directions from previous tasks (frozen)
- `U_low, S_low, V_low`: Allow adaptation to new tasks (trainable)

During training, gradients are projected to be orthogonal to the high-rank subspace, ensuring updates don't interfere with previously learned knowledge.
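As an illustration of the idea (not the library's internal implementation), projecting a gradient out of the preserved subspace spanned by the columns of `U_high` can be sketched as:

```python
import torch

def project_out_of_preserved_subspace(grad: torch.Tensor, u_high: torch.Tensor) -> torch.Tensor:
    """Remove the gradient components lying in span(U_high).

    Assumes u_high has orthonormal columns, as produced by an SVD.
    """
    return grad - u_high @ (u_high.T @ grad)
```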

## Basic Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import OSFConfig, get_peft_model

# Load base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Configure OSF
config = OSFConfig(
    target_modules=["c_attn", "c_proj"],  # Target attention layers
    effective_rank=8,                     # Default rank for decomposition
    rank_pattern={"c_attn": 16}          # Override rank for specific modules
)

# Apply OSF
model = get_peft_model(model, config)

# Train as usual
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer("Hello world", return_tensors="pt", padding=True)
loss = model(**inputs, labels=inputs.input_ids).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

## Configuration Options

### Target Modules

You can specify target modules in several ways:

```python
# Specific module names
config = OSFConfig(target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])

# All linear layers
config = OSFConfig(target_modules="all-linear")

# Model-specific defaults (automatically detected)
config = OSFConfig()  # Uses model-appropriate defaults
```

### Effective Rank Configuration

Control the preserved/trainable subspaces:

```python
# Global preserved rank (applies to all target modules)
config = OSFConfig(effective_rank=16)  # preserves top-16 singular directions; trains the rest

# Automatic preserved rank (50% of the smaller matrix dimension per target)
config = OSFConfig(effective_rank=None)

# Per-module preserved-rank overrides
config = OSFConfig(
    effective_rank=8,
    rank_pattern={
        "q_proj": 16,      # Higher rank for query projection
        "gate_proj": 4     # Lower rank for gate projection
    }
)
 
# Fractional preserved rank is supported (interpreted per-target as fraction * min_dim)
config = OSFConfig(effective_rank=0.8)  # preserve 80% of min_dim; train remaining 20%
config = OSFConfig(rank_pattern={"q_proj": 0.5})  # preserve 50% on q_proj, others use global/default
```

Note: OSF's `effective_rank` is the preserved (frozen) rank, not the trainable rank. The trainable rank equals `min(weight.shape) - effective_rank`. This differs from LoRA's `r`, which directly specifies the trainable rank.


## Training Advice for Continual Learning

### Sequential Task Learning

OSF is specifically designed for learning tasks sequentially. Between tasks, recompute the SVD so the preserved subspace reflects the latest weights. One simple way is to re-wrap the updated base model with OSF again:

```python
# Task 1: train on domain A with initial preserved subspace
r = 8  # initial effective rank to preserve
model = get_peft_model(base_model, OSFConfig(effective_rank=r))
train_task(model, task_1_data)

# Task 2: recompute SVD on updated weights and increase preserved subspace
base_model = model.unload()  # unwrap base model without assuming internals
r += 4  # grow preserved subspace to include Task 1 knowledge
model = get_peft_model(base_model, OSFConfig(effective_rank=r))
train_task(model, task_2_data)

# Task 3: recompute again and expand preserved subspace further
base_model = model.unload()
r += 4
model = get_peft_model(base_model, OSFConfig(effective_rank=r))
train_task(model, task_3_data)
```

### Budget Allocation for Task Sequences

When training on a known sequence of n tasks, one effective strategy is to progressively allocate model capacity to balance learning new tasks while preserving previous knowledge:

- **Task 1**: Use full capacity (train everything)
- **Task 2**: Freeze 1/n of model capacity, train remaining (n-1)/n capacity  
- **Task 3**: Freeze 2/n of model capacity, train remaining (n-2)/n capacity
- **Task n**: Freeze (n-1)/n of model capacity, use 1/n capacity for final task

This approach ensures each task gets adequate learning capacity while progressively preserving more knowledge from previous tasks.

```python
# Example: 4-task sequence with progressive budget allocation
n_tasks = 4
max_preserved_rank = 512  # Upper bound for preserved rank per target (heuristic)

for task_id in range(n_tasks):
    # Task k freezes (k-1)/n of the budget: nothing is preserved for the first
    # task, and the preserved subspace grows by 1/n with each subsequent task
    # (this assumes the config accepts effective_rank=0 for the first task)
    preserved_fraction = task_id / n_tasks
    preserved_rank = int(max_preserved_rank * preserved_fraction)

    config = OSFConfig(
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        effective_rank=preserved_rank,
    )

    print(
        f"Task {task_id + 1}: preserving rank {preserved_rank} "
        f"({preserved_fraction:.1%} of the max preserved rank {max_preserved_rank}); "
        f"trainable rank = min_dim - preserved_rank"
    )

    model = get_peft_model(base_model, config)
    train_task(model, task_data[task_id])
```

### Best Practices

1. **Effective Rank Selection**: Start with `effective_rank=None` (auto sets rank to 50% of the smaller weight dimension per target module) and adjust based on task complexity
2. **Learning Rate**: Use smaller learning rates (1e-5 to 1e-4) compared to standard fine-tuning
3. **Task Importance**: Use `rank_pattern` to allocate more capacity to critical modules
4. **Model Architecture**: OSF works best with transformer architectures having clear attention and MLP separations
5. **Capacity Planning**: For known task sequences, use progressive budget allocation (1/n, 2/n, ..., (n-1)/n freezing) to balance plasticity and stability

### Memory Considerations

OSF modifies weights in-place and doesn't add parameters, making it memory-efficient:

```python
# Memory usage remains close to base model
print(f"Base model parameters: {base_model.num_parameters():,}")
print(f"OSF model parameters: {osf_model.num_parameters():,}")  # Similar count
```

## Advanced Usage

### Custom Target Modules

For models with non-standard architectures:

```python
config = OSFConfig(
    target_modules=["dense", "intermediate.dense"],  # Custom layer names
    effective_rank=12,
    rank_pattern={"dense": 8, "intermediate.dense": 16}
)
```

### Integration with Other Methods

OSF can be combined with other techniques:

```python
# Use with gradient checkpointing for memory efficiency
model.gradient_checkpointing_enable()

# Apply weight decay selectively (regularizes low-rank factors to limit drift/overfitting in continual updates; keep small)
optimizer = torch.optim.AdamW([
    {"params": [p for n, p in model.named_parameters() if "U_low" in n], "weight_decay": 0.01},
    {"params": [p for n, p in model.named_parameters() if "S_low" in n], "weight_decay": 0.001},
    {"params": [p for n, p in model.named_parameters() if "V_low" in n], "weight_decay": 0.01},
], lr=1e-4)
```

## OSFConfig[[peft.OSFConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.OSFConfig</name><anchor>peft.OSFConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/osf/config.py#L11</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "effective_rank", "val": ": Optional[Union[int, float]] = None"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "rank_pattern", "val": ": Optional[dict[str, Union[int, float]]] = None"}, {"name": "init_weights", "val": ": Optional[bool] = None"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "target_svd_config", "val": ": Optional[dict[str, int]] = None"}]</parameters><paramsdesc>- **effective_rank** (*int* or *float*, *optional*) --
  Preserved SVD rank ("high" subspace). The top-`effective_rank` singular directions are frozen and
  retained across tasks; the remaining dimensions form the trainable low-rank subspace. If *None*, defaults
  to 50% of the smaller weight dimension per target module. Note: This differs from LoRA's *r* (trainable
  rank). In OSF, the trainable rank is *min(weight.shape) - effective_rank*.
- **target_modules** (*Union[list[str], str]*, *optional*) --
  The names of the modules to apply OSF to. Can be a list of module names or *"all-linear"*.
- **rank_pattern** (*dict[str, int|float]*, *optional*) --
  A dictionary of regex patterns to override *effective_rank* for specific modules. Values can be absolute
  integers or fractions in (0, 1], interpreted as a fraction of the smaller matrix dimension per target.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for Orthogonal Subspace Fine-tuning (OSF).




</div>

## OSFModel[[peft.OSFModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.OSFModel</name><anchor>peft.OSFModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/osf/model.py#L14</source><parameters>[{"name": "model", "val": ""}, {"name": "config", "val": ""}, {"name": "adapter_name", "val": ""}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": dict[str, torch.Tensor] | None = None"}]</parameters></docstring>
A minimal tuner implementing Orthogonal Subspace Fine-tuning.

</div>

## Utility Functions

### Weight Decomposition[[peft.tuners.osf.utils.decompose_weight_matrix]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.osf.utils.decompose_weight_matrix</name><anchor>peft.tuners.osf.utils.decompose_weight_matrix</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/osf/utils.py#L42</source><parameters>[{"name": "weight", "val": ": torch.Tensor"}, {"name": "top_k", "val": ": int"}]</parameters></docstring>
Perform an SVD of `weight` and split it into frozen and trainable parts.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.osf.utils.reconstruct_weight_matrix</name><anchor>peft.tuners.osf.utils.reconstruct_weight_matrix</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/osf/utils.py#L62</source><parameters>[{"name": "svd_dict", "val": ": dict[str, torch.Tensor]"}]</parameters></docstring>
Reconstruct a weight matrix from its SVD components.

</div>

### Gradient Projection[[peft.tuners.osf.utils.project_gradient_to_orthogonal_space]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>peft.tuners.osf.utils.project_gradient_to_orthogonal_space</name><anchor>peft.tuners.osf.utils.project_gradient_to_orthogonal_space</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/osf/utils.py#L84</source><parameters>[{"name": "svd_dict", "val": ": dict[str, Any]"}]</parameters></docstring>
Project gradients of `U_low` and `V_low` to be orthogonal to the high rank space.

</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/osf.md" />

### Sparse High Rank Adapters
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/shira.md

# Sparse High Rank Adapters

Sparse High Rank Adapters or [SHiRA](https://arxiv.org/abs/2406.13175) is an alternate type of adapter that has been found to have significant advantages over low-rank adapters. Specifically, SHiRA achieves better accuracy than LoRA for a variety of vision and language tasks. It also offers simpler and higher-quality multi-adapter fusion by significantly reducing concept loss, a common problem faced by low-rank adapters. SHiRA directly finetunes a small number of the base model's parameters on the adaptation task.

SHiRA currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

> Low Rank Adaptation (LoRA) has gained massive attention in the recent generative AI research. One of the main advantages of LoRA is its ability to be fused with pretrained models, adding no overhead during inference. However, from a mobile deployment standpoint, we can either avoid inference overhead in the fused mode but lose the ability to switch adapters rapidly, or suffer significant (up to 30% higher) inference latency while enabling rapid switching in the unfused mode. LoRA also exhibits concept-loss when multiple adapters are used concurrently. In this paper, we propose Sparse High Rank Adapters (SHiRA), a new paradigm which incurs no inference overhead, enables rapid switching, and significantly reduces concept-loss. Specifically, SHiRA can be trained by directly tuning only 1-2% of the base model weights while leaving others unchanged. This results in a highly sparse adapter which can be switched directly in the fused mode. We further provide theoretical and empirical insights on how high sparsity in SHiRA can aid multi-adapter fusion by reducing concept loss. Our extensive experiments on LVMs and LLMs demonstrate that finetuning only a small fraction of the parameters in the base model significantly outperforms LoRA while enabling both rapid switching and multi-adapter fusion. Finally, we provide a latency- and memory-efficient SHiRA implementation based on Parameter-Efficient Finetuning (PEFT) Library which trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, thus making SHiRA easy to adopt for practical use cases. To demonstrate rapid switching benefits during inference, we show that loading SHiRA on a base model can be 5x-16x faster than LoRA fusion on a CPU.

## ShiraConfig[[peft.ShiraConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.ShiraConfig</name><anchor>peft.ShiraConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/shira/config.py#L28</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 32"}, {"name": "mask_type", "val": ": Literal['random'] = 'random'"}, {"name": "random_seed", "val": ": Optional[int] = None"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "init_weights", "val": ": bool = True"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}]</parameters><paramsdesc>- **r** (`int`, *optional*, defaults to `32`) --
  For a given target module, the number of SHiRA parameters is computed as r(m+n), where the original tensor
  dimensions are m x n. This means the number of SHiRA parameters is the same as that for a LoRA adapter.
  SHiRA is a high rank adapter. Setting this r parameter does not restrict the rank to this value.
- **mask_type** (`str`, defaults to `random`) --
  Type of mask function. Defaults to a random sparse mask. An optional user-defined mask_fn to compute the
  mask value can also be supplied by instantiating `config = ShiraConfig(...)` and then setting
  `config.mask_fn = <your custom mask function>`. For a pretrained weight with shape m x n, the custom mask
  function must return only one mask (shape: m x n) which must be binary 0 or 1 with num_shira_parameters =
  r(m + n) for linear layers. Device and dtype of mask must be same as base layer's weight's device and
  dtype. Please see mask_functions.py for more details and to see the default random sparse mask
  implementation.
- **random_seed** (`int`, *optional*, defaults to `None`) --
  random seed for the torch generator for random_mask.
- **target_modules** (`Union[List[str], str]`) --
  List of module names or regex expression of the module names to replace with SHiRA. For example, ['q', 'v']
  or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$'. Only linear layers are supported.
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses
  `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
- **init_weights** (`bool`, defaults to `True`) --
  Initialize SHiRA weight to have zero values. If set to False, SHiRA weights are initialized to randn values
  instead of zeros and this is used only for testing.
- **modules_to_save** (`List[str]`) --
  List of modules apart from SHiRA layers to be set as trainable and saved in the final checkpoint.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [ShiraModel](/docs/peft/v0.18.0.rc0/en/package_reference/shira#peft.ShiraModel).




</div>
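
A custom mask function can be plugged in as described above. A hypothetical sketch (the `(base_layer, r, **kwargs)` signature mirrors the default random mask and is an assumption; see `mask_functions.py` for the exact contract):

```py
import torch
from peft import ShiraConfig

def top_magnitude_mask(base_layer, r, **kwargs):
    # Keep the r * (m + n) largest-magnitude weights trainable (one heuristic
    # among many); the mask must be binary and match the weight's dtype/device.
    weight = base_layer.weight
    m, n = weight.shape
    num_params = r * (m + n)
    idx = weight.abs().flatten().topk(num_params).indices
    mask = torch.zeros(weight.numel(), dtype=weight.dtype, device=weight.device)
    mask[idx] = 1.0
    return mask.view(m, n)

config = ShiraConfig(r=32)
config.mask_fn = top_magnitude_mask
```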

## ShiraModel[[peft.ShiraModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.ShiraModel</name><anchor>peft.ShiraModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/shira/model.py#L29</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **config** ([ShiraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/shira#peft.ShiraConfig)) -- The configuration of the SHiRA model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The SHiRA model.</retdesc></docstring>

Creates a Sparse High Rank Adapter (SHiRA) Model from a pretrained model.







<ExampleCodeBlock anchor="peft.ShiraModel.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import ShiraConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> config = ShiraConfig(r=32)
>>> model = get_peft_model(base_model, config)
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([ShiraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/shira#peft.ShiraConfig)): The configuration of the SHiRA model.


</div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/shira.md" />

### VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks
https://huggingface.co/docs/peft/v0.18.0.rc0/package_reference/vblora.md

# VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks

## Overview

[VB-LoRA](https://huggingface.co/papers/2405.15179) is a parameter-efficient fine-tuning technique that extends LoRA by learning a fine-grained parameter-sharing scheme at the sub-vector level, achieving significantly higher parameter efficiency. This makes VB-LoRA especially useful in scenarios where storage and transmission costs are critical. It works by decomposing low-rank matrices—from different layers and modules such as K, Q, V, and FFN—into sub-vectors, which are then globally shared through a vector bank.

The abstract from the paper is:

*As the adoption of large language models increases and the need for per-user or per-task model customization grows, the parameter-efficient fine-tuning (PEFT) methods, such as low-rank adaptation (LoRA) and its variants, incur substantial storage and transmission costs. To further reduce stored parameters, we introduce a "divide-and-share" paradigm that breaks the barriers of low-rank decomposition across matrix dimensions, modules and layers by sharing parameters globally via a vector bank. As an instantiation of the paradigm to LoRA, our proposed VB-LoRA composites all the low-rank matrices of LoRA from a shared vector bank with a differentiable top-k admixture module. VB-LoRA achieves extreme parameter efficiency while maintaining comparable or better performance compared to state-of-the-art PEFT methods. Extensive experiments demonstrate the effectiveness of VB-LoRA on natural language understanding, natural language generation, and instruction tuning tasks. When fine-tuning the Llama2-13B model, VB-LoRA only uses 0.4% of LoRA's stored parameters, yet achieves superior results.*

## Usage Tips

- VB-LoRA utilizes a sparse top-k module to learn the sharing mechanism. When saving adapter parameters, you can either save only the top-k weights and their indices by setting `save_only_topk_weights = True` in `VBLoRAConfig`, or save all the trainable logits by setting it to `False`. Enabling `save_only_topk_weights = True` significantly reduces storage space; for instance, in Llama2-7B, the storage file size decreases from 308MB to 2.5MB. Note that models saved with `save_only_topk_weights = True` are intended for merging or inference only and cannot be used to resume training.

- VB-LoRA has two sets of training parameters: vector bank parameters and logit parameters. In practice, we found that logit parameters require a higher learning rate, while vector bank parameters require a lower learning rate. When using the AdamW optimizer, typical learning rates are 0.01 for logits and 0.001 for vector bank parameters.
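A minimal sketch of this two-learning-rate setup. The parameter name substrings below (`"vblora_logits"`, `"vblora_vector_bank"`) are assumptions about how the implementation names its tensors; adjust them if your parameter names differ:

```py
import torch
from transformers import AutoModelForCausalLM
from peft import VBLoRAConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = VBLoRAConfig(target_modules=["q_proj", "v_proj"], num_vectors=60, vector_length=256)
model = get_peft_model(base_model, config)

# Split the trainable parameters into logits and vector-bank groups.
logits_params = [p for n, p in model.named_parameters() if "vblora_logits" in n]
bank_params = [p for n, p in model.named_parameters() if "vblora_vector_bank" in n]

optimizer = torch.optim.AdamW([
    {"params": logits_params, "lr": 1e-2},  # higher learning rate for the logits
    {"params": bank_params, "lr": 1e-3},    # lower learning rate for the vector bank
])
```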

## VBLoRAConfig[[peft.VBLoRAConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.VBLoRAConfig</name><anchor>peft.VBLoRAConfig</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/vblora/config.py#L25</source><parameters>[{"name": "task_type", "val": ": Optional[Union[str, TaskType]] = None"}, {"name": "peft_type", "val": ": Optional[Union[str, PeftType]] = None"}, {"name": "auto_mapping", "val": ": Optional[dict] = None"}, {"name": "peft_version", "val": ": Optional[str] = None"}, {"name": "base_model_name_or_path", "val": ": Optional[str] = None"}, {"name": "revision", "val": ": Optional[str] = None"}, {"name": "inference_mode", "val": ": bool = False"}, {"name": "r", "val": ": int = 4"}, {"name": "num_vectors", "val": ": int = 256"}, {"name": "vector_length", "val": ": int = 256"}, {"name": "topk", "val": ": int = 2"}, {"name": "target_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "exclude_modules", "val": ": Optional[Union[list[str], str]] = None"}, {"name": "save_only_topk_weights", "val": ": bool = False"}, {"name": "vblora_dropout", "val": ": float = 0.0"}, {"name": "fan_in_fan_out", "val": ": bool = False"}, {"name": "bias", "val": ": str = 'none'"}, {"name": "modules_to_save", "val": ": Optional[list[str]] = None"}, {"name": "init_vector_bank_bound", "val": ": float = 0.02"}, {"name": "init_logits_std", "val": ": float = 0.1"}, {"name": "layers_to_transform", "val": ": Optional[Union[list[int], int]] = None"}, {"name": "layers_pattern", "val": ": Optional[Union[list[str], str]] = None"}]</parameters><paramsdesc>- **r** (`int`) --
  The rank of incremental matrices.
- **num_vectors** (`int`) --
  Number of vectors in the vector bank. Use higher values when the model size increases.
- **vector_length** (`int`) --
  The length of the vectors in the vector bank. The length of the vectors should be divisible by the hidden
  dimension of the model.
- **topk** (`int`) --
  The K value for top-K selection. A larger value of K increases the size of the saved model. In practice,
  setting K=2 typically provides the best performance and parameter efficiency. For more details, refer to
  the discussion in the paper.
- **target_modules** (`Union[List[str], str]`) --
  The names of the modules to apply the adapter to. If this is specified, only the modules with the specified
  names will be replaced. When passing a string, a regex match will be performed. When passing a list of
  strings, either an exact match will be performed or it is checked if the name of the module ends with any
  of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen,
  excluding the output layer. If this is not specified, modules will be chosen according to the model
  architecture. If the architecture is not known, an error will be raised -- in this case, you should specify
  the target modules manually.
- **exclude_modules** (`Optional[Union[List[str], str]]`) --
  The names of the modules to not apply the adapter. When passing a string, a regex match will be performed.
  When passing a list of strings, either an exact match will be performed or it is checked if the name of the
  module ends with any of the passed strings.
- **save_only_topk_weights** (`bool`) --
  Whether to only save the topk weights. Setting `save_only_topk_weights = True` significantly reduces
  storage space. However, models saved in this mode can be used for merging or inference only, not for
  resuming training.
- **vblora_dropout** (`float`) --
  The dropout probability for VBLoRA layers.
- **fan_in_fan_out** (`bool`) --
  Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses
  `Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
- **bias** (`str`) --
  Bias type for VBLoRA. Can be 'none', 'all' or 'vblora_only'. If 'all' or 'vblora_only', the corresponding
  biases will be updated during training. Be aware that this means that, even when disabling the adapters,
  the model will not produce the same output as the base model would have without adaptation.
- **modules_to_save** (`List[str]`) --
  List of modules apart from VBLoRA layers to be set as trainable and saved in the final checkpoint.
- **init_vector_bank_bound** (`float`) --
  The vector bank is initialized with a uniform distribution between -init_vector_bank_bound and
  init_vector_bank_bound. Avoid initializing the vector bank with all zeros to prevent zero gradients. A
  small value, such as 0.02, is typically effective. Initializing with a large value may cause training
  instability.
- **init_logits_std** (`float`) --
  The logits are initialized with a normal distribution with a standard deviation of init_logits_std. Default
  is 0.1.
- **layers_to_transform** (`Union[List[int],int]`) --
  The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices
  that are specified in this list. If a single integer is passed, it will apply the transformations on the
  layer at this index.
- **layers_pattern** (`Optional[Union[List[str], str]]`) --
  The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
  `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is the configuration class to store the configuration of a [VBLoRAModel](/docs/peft/v0.18.0.rc0/en/package_reference/vblora#peft.VBLoRAModel).

Paper: https://huggingface.co/papers/2405.15179




</div>

## VBLoRAModel[[peft.VBLoRAModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class peft.VBLoRAModel</name><anchor>peft.VBLoRAModel</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/vblora/model.py#L29</source><parameters>[{"name": "model", "val": ""}, {"name": "peft_config", "val": ": Union[PeftConfig, dict[str, PeftConfig]]"}, {"name": "adapter_name", "val": ": str"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "state_dict", "val": ": Optional[dict[str, torch.Tensor]] = None"}]</parameters><paramsdesc>- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **config** ([VBLoRAConfig](/docs/peft/v0.18.0.rc0/en/package_reference/vblora#peft.VBLoRAConfig)) -- The configuration of the VBLoRA model.
- **adapter_name** (`str`) -- The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, `optional`, defaults to `False`) --
  Create empty adapter weights on meta device. Useful to speed up the loading process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.nn.Module`</rettype><retdesc>The VBLoRA model.</retdesc></docstring>

Creates VBLoRA model from a pretrained transformers model.

The method is described in detail in https://huggingface.co/papers/2405.15179.







<ExampleCodeBlock anchor="peft.VBLoRAModel.example">

Example:

```py
>>> from transformers import AutoModelForCausalLM
>>> from peft import VBLoRAConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> config = VBLoRAConfig(
...     task_type="SEQ_CLS",
...     r=4,
...     target_modules=["fc1", "fc2", "k_proj", "out_proj", "q_proj", "v_proj"],
...     num_vectors=60,
...     vector_length=256,
...     save_only_topk_weights=True,
... )
>>> model = get_peft_model(base_model, config)
```

</ExampleCodeBlock>

**Attributes**:
- **model** ([PreTrainedModel](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel)) -- The model to be adapted.
- **peft_config** ([VBLoRAConfig](/docs/peft/v0.18.0.rc0/en/package_reference/vblora#peft.VBLoRAConfig)): The configuration of the VBLoRA model.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_nb_savable_parameters</name><anchor>peft.VBLoRAModel.get_nb_savable_parameters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/vblora/model.py#L165</source><parameters>[{"name": "adapter", "val": " = 'default'"}]</parameters></docstring>

Returns the number of savable VB-LoRA parameters and other savable parameters.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>print_savable_parameters</name><anchor>peft.VBLoRAModel.print_savable_parameters</anchor><source>https://github.com/huggingface/peft/blob/v0.18.0.rc0/src/peft/tuners/vblora/model.py#L201</source><parameters>[]</parameters></docstring>

Prints the number of savable VB-LoRA parameters and total savable parameters.


</div></div>

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/package_reference/vblora.md" />

### IA3
https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/ia3.md

# IA3 

This conceptual guide gives a brief overview of [IA3](https://huggingface.co/papers/2205.05638), a parameter-efficient fine tuning technique that is 
intended to improve over [LoRA](./lora).

To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) 
rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules 
in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original 
weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA)
keeps the number of trainable parameters much smaller. 

Being similar to LoRA, IA3 carries many of the same advantages: 

* IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%)
* The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
* Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
* IA3 does not add any inference latency because adapter weights can be merged with the base model.

In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable
parameters. Following the authors' implementation, IA3 weights are added to the key, value and feedforward layers
of a Transformer model. To be specific, for transformer models, IA3 weights are added to the outputs of key and value layers, and to the input of the second feedforward layer
in each transformer block.

Given the target layers for injecting IA3 parameters, the number of trainable parameters
can be determined based on the size of the weight matrices.
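As a toy sketch of the rescaling (shapes are illustrative; this is not the PEFT implementation):

```py
import torch

d_in, d_out = 768, 768
W_v = torch.randn(d_out, d_in)               # frozen value projection
l_v = torch.ones(d_out, requires_grad=True)  # learned IA3 vector, the only trainable part
x = torch.randn(d_in)
h = l_v * (W_v @ x)  # inhibit or amplify the frozen layer's output activations
```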


## Common IA3 parameters in PEFT

As with other methods supported by PEFT, to fine-tune a model using IA3, you need to:

1. Instantiate a base model.
2. Create a configuration (`IA3Config`) where you define IA3-specific parameters.
3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
4. Train the `PeftModel` as you normally would train the base model.

`IA3Config` allows you to control how IA3 is applied to the base model through the following parameters:

- `target_modules`: The modules (for example, attention blocks) to apply the IA3 vectors.
- `feedforward_modules`: The list of modules to be treated as feedforward layers in `target_modules`. While learned vectors are multiplied with
the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers. Note that `feedforward_modules` must be a subset of `target_modules`.
- `modules_to_save`: List of modules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.

## Example Usage

For the task of sequence classification, one can initialize the IA3 config for a Llama model as follows:

```py
from peft import IA3Config, TaskType

peft_config = IA3Config(
    task_type=TaskType.SEQ_CLS, target_modules=["k_proj", "v_proj", "down_proj"], feedforward_modules=["down_proj"]
)
```

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/conceptual_guides/ia3.md" />

### Soft prompts
https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/prompting.md

# Soft prompts

Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as *prompting*. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully training a separate model for each downstream task, and use the same frozen pretrained model instead. This is a lot easier because you can use the same model for several different tasks, and it is significantly more efficient to train and store a smaller set of prompt parameters than to train all the model's parameters.

There are two categories of prompting methods:

- hard prompts are manually handcrafted text prompts with discrete input tokens; the downside is that it requires a lot of effort to create a good prompt
- soft prompts are learnable tensors concatenated with the input embeddings that can be optimized to a dataset; the downside is that they aren't human readable because you aren't matching these "virtual tokens" to the embeddings of a real word

This conceptual guide provides a brief overview of the soft prompt methods included in 🤗 PEFT: prompt tuning, prefix tuning, P-tuning, and multitask prompt tuning.

## Prompt tuning

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prompt-tuning.png"/>
</div>
<small>Only train and store a significantly smaller set of task-specific prompt parameters <a href="https://hf.co/papers/2104.08691">(image source)</a>.</small>

[Prompt tuning](https://hf.co/papers/2104.08691) was developed for text classification tasks on T5 models, and all downstream tasks are cast as a text generation task. For example, sequence classification usually assigns a single class label to a sequence of text. By casting it as a text generation task, the tokens that make up the class label are *generated*. Prompts are added to the input as a series of tokens. Typically, the model parameters are frozen, which means the prompt token embeddings are also fixed by the model parameters.

The key idea behind prompt tuning is that prompt tokens have their own parameters that are updated independently. This means you can keep the pretrained model's parameters frozen, and only update the gradients of the prompt token embeddings. The results are comparable to the traditional method of training the entire model, and prompt tuning performance scales as model size increases.
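For instance, a minimal prompt tuning setup in PEFT might look like the following sketch (model choice and token count are illustrative):

```py
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

# 20 trainable virtual tokens are prepended to the input embeddings; the rest
# of the model stays frozen.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```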

Take a look at [Prompt tuning for causal language modeling](../task_guides/clm-prompt-tuning) for a step-by-step guide on how to train a model with prompt tuning.

## Prefix tuning

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prefix-tuning.png"/>
</div>
<small>Optimize the prefix parameters for each task <a href="https://hf.co/papers/2101.00190">(image source)</a>.</small>

[Prefix tuning](https://hf.co/papers/2101.00190) was designed for natural language generation (NLG) tasks on GPT models. It is very similar to prompt tuning; prefix tuning also prepends a sequence of task-specific vectors to the input that can be trained and updated while keeping the rest of the pretrained model's parameters frozen. 

The main difference is that the prefix parameters are inserted in **all** of the model layers, whereas prompt tuning only adds the prompt parameters to the model input embeddings. The prefix parameters are also optimized by a separate feed-forward network (FFN) instead of being trained directly, because training directly on the soft prompts causes instability and hurts performance. The FFN is discarded after updating the soft prompts.

As a result, the authors found that prefix tuning demonstrates comparable performance to fully finetuning a model, despite having 1000x fewer parameters, and it performs even better in low-data settings.
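As a sketch, a PEFT prefix tuning configuration needs little more than the number of virtual tokens (values here are illustrative):

```py
from peft import PrefixTuningConfig, TaskType

# prefix_projection=True enables the FFN reparameterization described above;
# the FFN is dropped once training is done.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=30, prefix_projection=True
)
```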

Take a look at [Prefix tuning for conditional generation](../task_guides/seq2seq-prefix-tuning) for a step-by-step guide on how to train a model with prefix tuning.

## P-tuning

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/p-tuning.png"/>
</div>
<small>Prompt tokens can be inserted anywhere in the input sequence, and they are optimized by a prompt encoder <a href="https://hf.co/papers/2103.10385">(image source)</a>.</small>

[P-tuning](https://hf.co/papers/2103.10385) is designed for natural language understanding (NLU) tasks and all language models. 
It is another variation of a soft prompt method; P-tuning also adds a trainable embedding tensor that can be optimized to find better prompts, and it uses a prompt encoder (a bidirectional long-short term memory network or LSTM) to optimize the prompt parameters. Unlike prefix tuning though:

- the prompt tokens can be inserted anywhere in the input sequence, and it isn't restricted to only the beginning
- the prompt tokens are only added to the input instead of adding them to every layer of the model
- introducing *anchor* tokens can improve performance because they indicate characteristics of a component in the input sequence

The results suggest that P-tuning is more efficient than manually crafting prompts, and it enables GPT-like models to compete with BERT-like models on NLU tasks.
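A corresponding PEFT configuration sketch (the LSTM encoder follows the paper; values are illustrative):

```py
from peft import PromptEncoderConfig, TaskType

config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    encoder_reparameterization_type="LSTM",  # the prompt encoder from the paper
    encoder_hidden_size=128,
)
```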

Take a look at [P-tuning for sequence classification](../task_guides/ptuning-seq-classification) for a step-by-step guide on how to train a model with P-tuning.

## Multitask prompt tuning

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt.png"/>
</div>
<small><a href="https://hf.co/papers/2303.02861">Multitask prompt tuning enables parameter-efficient transfer learning</a>.</small>

[Multitask prompt tuning (MPT)](https://hf.co/papers/2303.02861) learns a single prompt from data for multiple task types that can be shared for different target tasks. Other existing approaches learn a separate soft prompt for each task, which needs to be retrieved or aggregated for adaptation to target tasks. MPT consists of two stages:

1. source training - for each task, its soft prompt is decomposed into task-specific vectors. The task-specific vectors are multiplied together to form another matrix W, and the Hadamard product is used between W and a shared prompt matrix P to generate a task-specific prompt matrix. The task-specific prompts are distilled into a single prompt matrix that is shared across all tasks. This prompt is trained with multitask training.
2. target adaptation - to adapt the single prompt for a target task, a target prompt is initialized and expressed as the Hadamard product of the shared prompt matrix and the task-specific low-rank prompt matrix.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt-decomposition.png"/>
</div>
<small><a href="https://hf.co/papers/2303.02861">Prompt decomposition</a></small>
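In code, the decomposition shown above amounts to a Hadamard product between a shared prompt and a rank-one task-specific matrix; a toy sketch (shapes are illustrative, not the PEFT implementation):

```py
import torch

num_tokens, embed_dim = 20, 768
P = torch.randn(num_tokens, embed_dim)  # shared prompt matrix
u = torch.randn(num_tokens, 1)          # task-specific column vector
v = torch.randn(1, embed_dim)           # task-specific row vector
task_prompt = P * (u @ v)               # Hadamard product with the rank-one task matrix
```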


## Context-Aware Prompt Tuning (CPT)

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/cpt.png"/>
</div>
<small>CPT optimizing only specific token embeddings while keeping the rest of the model frozen <a href="https://huggingface.co/papers/2410.17222">(image source)</a>.</small>

[Context-Aware Prompt Tuning (CPT)](https://huggingface.co/papers/2410.17222) is designed to enhance few-shot classification by refining only context embeddings. 
This approach combines ideas from In-Context Learning (ICL), Prompt Tuning (PT), and adversarial optimization, focusing on making model adaptation both parameter-efficient and effective.
In CPT, only specific context token embeddings are optimized, while the rest of the model remains frozen. 
To prevent overfitting and maintain stability, CPT uses controlled perturbations to limit the allowed changes to context embeddings within a defined range. 
Additionally, to address the phenomenon of recency bias—where examples near the end of the context tend to be prioritized over earlier ones—CPT applies a decay loss factor.

Take a look at [Example](https://github.com/huggingface/peft/blob/main/examples/cpt_finetuning/README.md) for a step-by-step guide on how to train a model with CPT.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/conceptual_guides/prompting.md" />

### Orthogonal Finetuning (OFT and BOFT)
https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/oft.md

# Orthogonal Finetuning (OFT and BOFT) 

This conceptual guide gives a brief overview of [OFT](https://huggingface.co/papers/2306.07280), [OFTv2](https://www.arxiv.org/abs/2506.19847) and [BOFT](https://huggingface.co/papers/2311.06243), parameter-efficient fine-tuning techniques that use an orthogonal matrix to multiplicatively transform the pretrained weight matrices.

To achieve efficient fine-tuning, OFT represents the weight updates with an orthogonal transformation: an orthogonal matrix that multiplies the pretrained weight matrix. This matrix can be trained to adapt to the new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn't receive any further adjustments. To produce the final results, the original and the adapted weights are multiplied together.

Orthogonal Butterfly (BOFT) generalizes OFT with butterfly factorization and further improves its parameter efficiency and finetuning flexibility. In short, OFT can be viewed as a special case of BOFT. Unlike LoRA, which uses additive low-rank weight updates, BOFT uses multiplicative orthogonal weight updates. The comparison is shown below.

<div class="flex justify-center">
    <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/BOFT_comparison.png"/>
</div>


BOFT has some advantages compared to LoRA: 

* BOFT proposes a simple yet generic way to finetune pretrained models to downstream tasks, yielding a better preservation of pretraining knowledge and a better parameter efficiency.
* Through the orthogonality, BOFT introduces a structural constraint, i.e., keeping the [hyperspherical energy](https://huggingface.co/papers/1805.09298) unchanged during finetuning. This can effectively reduce the forgetting of pretraining knowledge.
* BOFT uses the butterfly factorization to efficiently parameterize the orthogonal matrix, which yields a compact yet expressive learning space (i.e., hypothesis class).
* The sparse matrix decomposition in BOFT brings in additional inductive biases that are beneficial to generalization.

In principle, BOFT can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Given the target layers for injecting BOFT parameters, the number of trainable parameters can be determined based on the size of the weight matrices.

## Merge OFT/BOFT weights into the base model

Similar to LoRA, the weights learned by OFT/BOFT can be integrated into the pretrained weight matrices using the merge_and_unload() function. This function merges the adapter weights with the base model which allows you to effectively use the newly merged model as a standalone model.

<div class="flex justify-center">
    <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/boft_merge.png"/>
</div>

This works because during training, the orthogonal weight matrix (R in the diagram above) and the pretrained weight matrices are separate. But once training is complete, these weights can actually be merged (multiplied) into a new weight matrix that is equivalent.

## Utils for OFT / BOFT

### Common OFT / BOFT parameters in PEFT

As with other methods supported by PEFT, to fine-tune a model using OFT or BOFT, you need to:

1. Instantiate a base model.
2. Create a configuration (`OFTConfig` or `BOFTConfig`) where you define OFT/BOFT-specific parameters.
3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
4. Train the `PeftModel` as you normally would train the base model.


### OFT-specific parameters

`OFTConfig` allows you to control how OFT is applied to the base model through the following parameters:

- `r`: OFT rank, the number of OFT blocks per injected layer. A **bigger** `r` results in sparser update matrices with **fewer** trainable parameters. **Note**: You can only specify either `r` or `oft_block_size`, not both, because `r` × `oft_block_size` = layer dimension. For simplicity, you specify one of the two and the other is inferred. The default is `r = 0`; you are advised to set `oft_block_size` instead for better clarity.
- `oft_block_size`: OFT block size across different layers. A **bigger** `oft_block_size` results in denser update matrices with **more** trainable parameters. **Note**: Choose an `oft_block_size` that divides the layer's input dimension (`in_features`), e.g., 4, 8, 16. You can only specify either `r` or `oft_block_size`, not both, because `r` × `oft_block_size` = layer dimension. For simplicity, you specify one of the two and the other is inferred. Defaults to `oft_block_size = 32`.
- `use_cayley_neumann`: Specifies whether to use the Cayley-Neumann parameterization (efficient but approximate) or the vanilla Cayley parameterization (exact but computationally expensive because of the matrix inverse). We recommend setting it to `True` for better efficiency, though performance may be slightly worse because of the approximation error. Please test both settings (`True` and `False`) depending on your needs. Defaults to `False`.
- `module_dropout`: The multiplicative dropout probability, by setting OFT blocks to identity during training, similar to the dropout layer in LoRA.
- `bias`: specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"oft_only"`.
- `target_modules`: The modules (for example, attention blocks) to inject the OFT matrices.
- `modules_to_save`: List of modules apart from OFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.

### BOFT-specific parameters

`BOFTConfig` allows you to control how BOFT is applied to the base model through the following parameters:

- `boft_block_size`: the BOFT matrix block size across different layers, expressed in `int`. A **bigger** `boft_block_size` results in denser update matrices with **more** trainable parameters. **Note**: Choose a `boft_block_size` that divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Specify either `boft_block_size` or `boft_block_num`, but not both, and don't leave both at 0, because `boft_block_size` × `boft_block_num` must equal the layer's input dimension.
- `boft_block_num`: the number of BOFT matrix blocks across different layers, expressed in `int`. A **bigger** `boft_block_num` results in sparser update matrices with **fewer** trainable parameters. **Note**: Choose a `boft_block_num` that divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Specify either `boft_block_size` or `boft_block_num`, but not both, and don't leave both at 0, because `boft_block_size` × `boft_block_num` must equal the layer's input dimension.
- `boft_n_butterfly_factor`: the number of butterfly factors. **Note**: for `boft_n_butterfly_factor=1`, BOFT is the same as vanilla OFT; for `boft_n_butterfly_factor=2`, the effective block size of OFT becomes twice as big and the number of blocks becomes half.
- `bias`: specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"boft_only"`.
- `boft_dropout`: specify the probability of multiplicative dropout.
- `target_modules`: The modules (for example, attention blocks) to inject the OFT/BOFT matrices.
- `modules_to_save`: List of modules apart from OFT/BOFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.



## OFT Example Usage

To use OFT for quantized finetuning with [TRL](https://github.com/huggingface/trl) for `SFT`, `PPO`, or `DPO` fine-tuning, follow this outline:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTTrainer
from peft import OFTConfig

use_quantization = True  # set to False to finetune without 4-bit quantization
bnb_config = None
if use_quantization:
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_storage=torch.bfloat16,
    )

model = AutoModelForCausalLM.from_pretrained(
    "model_name", 
    quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained("model_name")

# Configure OFT
peft_config = OFTConfig(
    oft_block_size=32,
    use_cayley_neumann=True,
    target_modules="all-linear",
    bias="none",
    task_type="CAUSAL_LM"
)

# ds, training_arguments, and collator are assumed to be defined beforehand
trainer = SFTTrainer(
    model=model,
    train_dataset=ds['train'],
    peft_config=peft_config,
    processing_class=tokenizer,
    args=training_arguments,
    data_collator=collator,
)

trainer.train()
```


## BOFT Example Usage

For an example of the BOFT method application to various downstream tasks, please refer to the following guides:

Take a look at the following step-by-step guides on how to finetune a model with BOFT:
- [Dreambooth finetuning with BOFT](https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md)
- [Controllable generation finetuning with BOFT (ControlNet)](https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md)

For the task of image classification, one can initialize the BOFT config for a DinoV2 model as follows:

```py
import transformers
from peft import BOFTConfig, get_peft_model

config = BOFTConfig(
    boft_block_size=4,
    boft_n_butterfly_factor=2,
    target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
    boft_dropout=0.1,
    bias="boft_only",
    modules_to_save=["classifier"],
)

model = transformers.Dinov2ForImageClassification.from_pretrained(
    "facebook/dinov2-large",
    num_labels=100,
)

boft_model = get_peft_model(model, config)
```


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/conceptual_guides/oft.md" />

### Adapters
https://huggingface.co/docs/peft/v0.18.0.rc0/conceptual_guides/adapter.md

# Adapters

Adapter-based methods add extra trainable parameters after the attention and fully-connected layers of a frozen pretrained model to reduce memory usage and speed up training. The method varies depending on the adapter: it could simply be an extra added layer, or it could express the weight updates ∆W as a low-rank decomposition of the weight matrix. Either way, the adapters are typically small but demonstrate comparable performance to a fully finetuned model, enabling the training of larger models with fewer resources.

This guide will give you a brief overview of the adapter methods supported by PEFT (if you're interested in learning more details about a specific method, take a look at the linked paper).

## Low-Rank Adaptation (LoRA)

> [!TIP]
> LoRA is one of the most popular PEFT methods and a good starting point if you're just getting started with PEFT. It was originally developed for large language models but it is a tremendously popular training method for diffusion models because of its efficiency and effectiveness.

As mentioned briefly earlier, [LoRA](https://hf.co/papers/2106.09685) is a technique that accelerates finetuning large models while consuming less memory.

LoRA represents the weight updates ∆W with two smaller matrices (called *update matrices*) through low-rank decomposition. These new matrices can be trained to adapt to the new data while keeping the overall number of parameters low. The original weight matrix remains frozen and doesn't receive any further updates. To produce the final results, the original and extra adapted weights are combined. You could also merge the adapter weights with the base model to eliminate inference latency.
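As a toy sketch of this idea (dimensions and rank are illustrative, not the library's implementation):

```py
import torch

d, r = 1024, 8
W = torch.randn(d, d)         # frozen pretrained weight
A = torch.randn(r, d) * 0.01  # trainable update matrix
B = torch.zeros(d, r)         # trainable, zero-initialized so ∆W starts at zero
delta_W = B @ A               # low-rank weight update
W_merged = W + delta_W        # merged weight, which eliminates inference latency
```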

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_animated.gif"/>
</div>

This approach has a number of advantages:

* LoRA makes finetuning more efficient by drastically reducing the number of trainable parameters.
* The original pretrained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
* LoRA is orthogonal to other parameter-efficient methods and can be combined with many of them.
* Performance of models finetuned using LoRA is comparable to the performance of fully finetuned models.

In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. However, for simplicity and further parameter efficiency, LoRA is typically only applied to the attention blocks in Transformer models. The resulting number of trainable parameters in a LoRA model depends on the size of the update matrices, which is determined mainly by the rank `r` and the shape of the original weight matrix.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora.png"/>
</div>
<small><a href="https://hf.co/papers/2309.14859">Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation</a></small>

## Mixture of LoRA Experts (X-LoRA)

[X-LoRA](https://huggingface.co/papers/2402.07148) is a mixture of experts method for LoRA which works by using dense or sparse gating to dynamically activate LoRA experts. The LoRA experts as well as the base model are frozen during training, resulting in a low parameter count as only the gating layers must be trained. In particular, the gating layers output scalings which (depending on config) are granular on the layer and token level. Additionally, during inference, X-LoRA dynamically activates LoRA adapters to recall knowledge and effectively mix them:

The below graphic demonstrates how the scalings change for different prompts for each token. This highlights the activation of different adapters as the generation progresses and the sequence creates new context.

![Token-by-token scalings](https://github.com/EricLBuehler/xlora/raw/master/res/token_by_token_scalings.gif)

For each step, X-LoRA requires the base model to be run twice: first, to get hidden states without any LoRA adapters; second, the hidden states are used to calculate scalings that are applied to the LoRA adapters, and the model is run again. The output of the second run is the result of the model step.

Ultimately, X-LoRA allows the model to reflect upon its knowledge because of the dual forward pass scheme, and dynamically reconfigure the architecture.

## Low-Rank Hadamard Product (LoHa)

Low-rank decomposition can impact performance because the weight updates are limited to the low-rank space, which can constrain a model's expressiveness. However, you don't necessarily want to use a larger rank because it increases the number of trainable parameters. To address this, [LoHa](https://huggingface.co/papers/2108.06098) (a method originally developed for computer vision) was applied to diffusion models where the ability to generate diverse images is an important consideration. LoHa should also work with general model types, but the embedding layers aren't currently implemented in PEFT.

LoHa uses the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (element-wise product) instead of the matrix product. ∆W is represented by four smaller matrices instead of two - like in LoRA - and each pair of these low-rank matrices are combined with the Hadamard product. As a result, ∆W can have the same number of trainable parameters but a higher rank and expressivity.
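A toy sketch of the update (not the PEFT implementation): ∆W is the Hadamard product of two low-rank products, so it can reach rank up to r × r with the same parameter count as a rank-2r LoRA.

```py
import torch

d, r = 512, 4
B1, A1 = torch.randn(d, r), torch.randn(r, d)
B2, A2 = torch.randn(d, r), torch.randn(r, d)
delta_W = (B1 @ A1) * (B2 @ A2)  # element-wise (Hadamard) product
```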

## Low-Rank Kronecker Product (LoKr)

[LoKr](https://hf.co/papers/2309.14859) is very similar to LoRA and LoHa, and it is also mainly applied to diffusion models, though you could also use it with other model types. LoKr replaces the matrix product with the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) instead. The Kronecker product decomposition creates a block matrix which preserves the rank of the original weight matrix. Another benefit of the Kronecker product is that it can be vectorized by stacking the matrix columns. This can speed up the process because you're avoiding fully reconstructing ∆W.
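A toy sketch of a Kronecker-factored update (not the PEFT implementation), building a large block matrix from two much smaller factors:

```py
import torch

C = torch.randn(16, 16)
D = torch.randn(64, 64)
delta_W = torch.kron(C, D)  # shape (1024, 1024) from only 16*16 + 64*64 parameters
```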

## Orthogonal Finetuning (OFT)

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/oft.png"/>
</div>
<small><a href="https://hf.co/papers/2306.07280">Controlling Text-to-Image Diffusion by Orthogonal Finetuning</a></small>

[OFT](https://hf.co/papers/2306.07280) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means OFT is more capable of preserving the subject, and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)).

OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure.
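
Here is a rough sketch of the idea (not PEFT's implementation): small orthogonal blocks are arranged on the diagonal, and the resulting matrix rotates the pretrained weight.

```python
import torch

out_features, in_features = 64, 64
W = torch.randn(out_features, in_features)  # stands in for the pretrained weight

num_blocks = 4
block_size = out_features // num_blocks

# one small orthogonal matrix per block, here obtained from a QR decomposition
blocks = [torch.linalg.qr(torch.randn(block_size, block_size))[0] for _ in range(num_blocks)]
R = torch.block_diag(*blocks)  # sparse block-diagonal, orthogonal matrix

W_finetuned = R @ W            # orthogonal transformation of the neurons
print(torch.allclose(R @ R.T, torch.eye(out_features), atol=1e-5))  # True
```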

## Orthogonal Butterfly (BOFT)

[BOFT](https://hf.co/papers/2311.06243) is an improved orthogonal finetuning method that focuses on preserving a pretrained model's generative capabilities while being significantly more parameter-efficient than standard OFT. Like OFT, BOFT maintains the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer by applying an orthogonal transformation to the pretrained weight matrix, ensuring the semantic relationships among neurons are preserved.

Instead of using a block-diagonal orthogonal matrix, BOFT factorizes the orthogonal transformation into a product of **sparse butterfly matrices** (originally introduced in the [Cooley–Tukey FFT](https://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm)). Unlike OFT's block-diagonal rotations, which only mix inputs within each block, the butterfly structure guarantees that every input can influence every output, producing a **dense connectivity** with just `O(d log d)` parameters. This factorization preserves expressivity while drastically reducing the parameter count compared to OFT (at the expense of computation time).

In practice, BOFT multiplies each pretrained weight matrix by a sequence of butterfly-structured orthogonal factors, enabling efficient and expressive neuron rotations. This makes BOFT well-suited for controllable generation and tasks where maintaining the pretrained model's subject representation is critical, while also scaling to larger models with lower memory and compute overhead.

## Adaptive Low-Rank Adaptation (AdaLoRA)

[AdaLoRA](https://hf.co/papers/2303.10512) manages the parameter budget introduced from LoRA by allocating more parameters - in other words, a higher rank `r` - for important weight matrices that are better adapted for a task and pruning less important ones. The rank is controlled by a method similar to singular value decomposition (SVD). The ∆W is parameterized with two orthogonal matrices and a diagonal matrix which contains singular values. This parametrization method avoids iteratively applying SVD which is computationally expensive. Based on this method, the rank of ∆W is adjusted according to an importance score. ∆W is divided into triplets and each triplet is scored according to its contribution to model performance. Triplets with low importance scores are pruned and triplets with high importance scores are kept for finetuning.
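
A rough sketch of the SVD-like parametrization and pruning (illustrative only; the real importance score is computed from the sensitivity of the loss, not from the magnitude used here):

```python
import torch

out_features, in_features, r = 64, 64, 8
P = torch.randn(out_features, r)  # "left singular vectors", kept roughly orthogonal via a regularizer
lam = torch.randn(r)              # learnable "singular values"
Q = torch.randn(r, in_features)   # "right singular vectors"

delta_W = P @ torch.diag(lam) @ Q

# pruning a triplet (P[:, i], lam[i], Q[i, :]) amounts to zeroing its singular value
importance = lam.abs()            # stand-in for the sensitivity-based importance score
lam = torch.where(importance >= importance.median(), lam, torch.zeros_like(lam))
```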

Training with AdaLoRA has three phases: the initial phase, the budgeting phase, and the final phase. In the initial phase, no budgeting is applied, so the ranks are not touched. During the budgeting phase, the process described above is applied and the rank is redistributed according to a budget, aiming to give more important adapters more rank and less important ones less. In the final phase, budgeting has ended and the ranks have been redistributed, but training may continue for a while with the redistributed ranks to further improve performance.

## Llama-Adapter

[Llama-Adapter](https://hf.co/papers/2303.16199) is a method for adapting Llama into an instruction-following model. To help adapt the model for instruction-following, the adapter is trained with a 52K instruction-output dataset.

A set of learnable adaption prompts are prefixed to the input instruction tokens. These are inserted into the upper layers of the model because it is better to learn with the higher-level semantics of the pretrained model. The instruction-output tokens prefixed to the input guide the adaption prompt to generate a contextual response.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/llama-adapter.png"/>
</div>
<small><a href="https://hf.co/papers/2303.16199">LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention</a></small>

To avoid adding noise to the tokens, the adapter uses zero-initialized attention. On top of this, the adapter adds a learnable gating factor (initialized with zeros) to progressively add information to the model during training. This prevents overwhelming the model's pretrained knowledge with the newly learned instructions.

## Householder Reflection Adaptation (HRA)

[HRA](https://huggingface.co/papers/2405.17484) provides a new perspective connecting LoRA to OFT, which means it can harness the advantages of both strategies: it reduces parameters and computation costs while penalizing the loss of pre-training knowledge.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/hra.png"/>
</div>
<small><a href="https://huggingface.co/papers/2405.17484">Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation</a></small>

HRA constructs a chain of `r` trainable Householder reflections (HRs). Because the Householder reflection matrix is an orthogonal matrix and the product of orthogonal matrices is also an orthogonal matrix, HRA satisfies the theoretical guarantee of Orthogonal Finetuning (OFT). Meanwhile, HRA can also be viewed as a low-rank fine-tuning adapter by rewriting the formula.

The higher the `r`, the more trainable parameters there are, resulting in a larger model capacity and better performance. Besides, due to the chain structure, the orthogonality of the HR planes impacts the capacity and regularity of HRA. To achieve a trade-off between model capacity and regularity, an orthogonality regularizer of the HR planes is added to the loss function. The weight \\(\lambda\\) controls the strength of the regularizer.
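
A rough sketch of the reflection chain (not PEFT's implementation; whether the chain multiplies the weight from the left or the right is an implementation detail):

```python
import torch

d, r = 64, 8
W = torch.randn(d, d)        # stands in for the pretrained weight
U = torch.randn(d, r)        # one trainable vector per Householder reflection

H = torch.eye(d)
for i in range(r):
    u = U[:, i : i + 1]      # column vector of shape (d, 1)
    H = H @ (torch.eye(d) - 2 * (u @ u.T) / (u.T @ u))

W_finetuned = H @ W          # orthogonal update, as in OFT
print(torch.allclose(H @ H.T, torch.eye(d), atol=1e-4))  # True
```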

## Bone
[MiSS](https://huggingface.co/papers/2409.15371) is the new version of the Bone method, described in the paper *MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing*.
If you already have a Bone checkpoint, you can use `/scripts/convert-bone-to-miss.py` to convert it into a MiSS checkpoint and proceed with training using MiSS.

## MiSS
[MiSS](https://huggingface.co/papers/2409.15371) (Matrix Shard Sharing) is a novel Parameter-Efficient Fine-Tuning (PEFT) method designed to address the trade-off between adaptability and efficiency in Large Language Models. The core of MiSS is a simple shard-sharing mechanism: it achieves low-rank adaptation by decomposing a weight matrix into multiple shards and then using a shared, trainable "common shard". The final low-rank update matrix is constructed by replicating this shared, partitioned shard. MiSS adopts a low-rank structure, requires only a single trainable matrix, and introduces an update mechanism distinct from LoRA, achieving a good balance between performance and efficiency.

<small><a href="https://huggingface.co/papers/2409.15371">MiSS: Balancing LoRA Performance and Efficiency with Simple Shard Sharing</a></small>

Intuitively, the single trainable matrix in MiSS has the same shape as `lora_B`, so for the same `r`, MiSS has `in_features * r` fewer trainable parameters than LoRA.

Note: Bat's `r` (b) is special and requires that the weight `W` satisfies the conditions `in_features % r == 0` and `out_features % r == 0`. Additionally, when `in_features == out_features` and the MiSS `r` equals the LoRA `r`, MiSS has only half as many trainable parameters as LoRA.
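
A quick back-of-the-envelope comparison for a square weight matrix (the numbers are arbitrary and only serve to illustrate the factor of two):

```python
in_features = out_features = 4096
r = 64

lora_params = in_features * r + out_features * r  # lora_A plus lora_B
miss_params = out_features * r                    # a single matrix shaped like lora_B

print(lora_params, miss_params)   # 524288 262144
print(miss_params / lora_params)  # 0.5
```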

Although Bat's nonlinear updates bring some performance improvements, they also increase computational overhead; their main purpose is to offer researchers a direction for further improvement. Therefore, we recommend fine-tuning with the standard MiSS method instead.

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/conceptual_guides/adapter.md" />

### Adapter injection
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/low_level_api.md

# Adapter injection

With PEFT, you can inject trainable adapters into any `torch` module which allows you to use adapter methods without relying on the modeling classes in PEFT. This works for all adapters except for those based on prompt learning (e.g. prefix tuning or p-tuning).

Check the table below to see when you should inject adapters.

| Pros | Cons |
|---|---|
| the model is modified in place, keeping all the original attributes and methods | you have to manually write the Hugging Face `from_pretrained` and `save_pretrained` utility functions to save and load adapters |
| works for any `torch` module and modality | doesn't work with any of the utility methods provided by `PeftModel` such as disabling and merging adapters |

## Creating a new PEFT model

To perform the adapter injection, use the [inject_adapter_in_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.inject_adapter_in_model) method. This method takes three arguments: the PEFT config, the model, and an optional adapter name. You can also attach multiple adapters to the model if you call [inject_adapter_in_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.inject_adapter_in_model) multiple times with different adapter names.

For example, to inject LoRA adapters into the `linear` submodule of the `DummyModel` module:

```python
import torch
from peft import inject_adapter_in_model, LoraConfig

class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.Embedding(10, 10)
        self.linear = torch.nn.Linear(10, 10)
        self.lm_head = torch.nn.Linear(10, 10)

    def forward(self, input_ids):
        x = self.embedding(input_ids)
        x = self.linear(x)
        x = self.lm_head(x)
        return x


lora_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    target_modules=["linear"],
)

model = DummyModel()
model = inject_adapter_in_model(lora_config, model)

dummy_inputs = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]])
dummy_outputs = model(dummy_inputs)
```

Print the model to see that the adapters have been correctly injected.

```bash
DummyModel(
  (embedding): Embedding(10, 10)
  (linear): Linear(
    in_features=10, out_features=10, bias=True
    (lora_dropout): ModuleDict(
      (default): Dropout(p=0.1, inplace=False)
    )
    (lora_A): ModuleDict(
      (default): Linear(in_features=10, out_features=64, bias=False)
    )
    (lora_B): ModuleDict(
      (default): Linear(in_features=64, out_features=10, bias=False)
    )
    (lora_embedding_A): ParameterDict()
    (lora_embedding_B): ParameterDict()
  )
  (lm_head): Linear(in_features=10, out_features=10, bias=True)
)
```
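
As mentioned above, you can attach additional adapters by calling [inject_adapter_in_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.inject_adapter_in_model) again with a different adapter name. Continuing the example above (the second config is only illustrative):

```python
other_config = LoraConfig(
    r=8,
    target_modules=["linear"],
)
model = inject_adapter_in_model(other_config, model, adapter_name="other")
# the lora_A/lora_B ModuleDicts of `model.linear` now contain both "default" and "other"
```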

### Injection based on a `state_dict`

Sometimes, it is possible that there is a PEFT adapter checkpoint but the corresponding PEFT config is not known for whatever reason. To inject the PEFT layers for this checkpoint, you would usually have to reverse-engineer the corresponding PEFT config, most notably the `target_modules` argument, based on the `state_dict` from the checkpoint. This can be cumbersome and error prone. To avoid this, it is also possible to call [inject_adapter_in_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.inject_adapter_in_model) and pass the loaded `state_dict` as an argument:

```python
from safetensors.torch import load_file

model = ...
state_dict = load_file(<path-to-safetensors-file>)
lora_config = LoraConfig(...)
model = inject_adapter_in_model(lora_config, model, state_dict=state_dict)
```

In this case, PEFT will use the `state_dict` as the reference for which layers to target instead of using the PEFT config. As a user, you don't have to set the exact `target_modules` of the PEFT config for this to work. However, you should still pass a PEFT config of the right type (`LoraConfig` in this example); you can leave `target_modules` as `None`.

Be aware that this still only creates the uninitialized PEFT layers, the values from the `state_dict` are not used to populate the model weights. To populate the weights, proceed with calling [set_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/functional#peft.set_peft_model_state_dict) as described below.

⚠️ Note that if there is a mismatch between what is configured in the PEFT config and what is found in the `state_dict`, PEFT will warn you about this. You can ignore the warning if you know that the PEFT config is not correctly specified.

> [!WARNING]
> If the original PEFT adapter was using `target_parameters` instead of `target_modules`, injecting from a `state_dict` will not work correctly. In this case, it is mandatory to use the correct PEFT config for injection.

## Saving the model

To only save the adapter, use the [get_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model_state_dict) function:

```python
from peft import get_peft_model_state_dict

peft_state_dict = get_peft_model_state_dict(model)
print(peft_state_dict)
```

Otherwise, `model.state_dict()` returns the full state dict of the model.
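
For example, the adapter state dict can be written to disk with `safetensors` (the file name here is arbitrary):

```python
from safetensors.torch import save_file

save_file(peft_state_dict, "dummy_model_adapter.safetensors")
```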

## Loading the model

After loading the saved `state_dict`, it can be applied using the [set_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/functional#peft.set_peft_model_state_dict) function:

```python
from peft import set_peft_model_state_dict

model = DummyModel()
model = inject_adapter_in_model(lora_config, model)
outcome = set_peft_model_state_dict(model, peft_state_dict)
# check that there were no wrong keys
print(outcome.unexpected_keys)
```

If injecting the adapter is slow or you need to load a large number of adapters, you may use an optimization that creates an "empty" adapter on the meta device and only fills in the real weights when [set_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/functional#peft.set_peft_model_state_dict) is called. To do this, pass `low_cpu_mem_usage=True` to both [inject_adapter_in_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.inject_adapter_in_model) and [set_peft_model_state_dict()](/docs/peft/v0.18.0.rc0/en/package_reference/functional#peft.set_peft_model_state_dict).

```python
model = DummyModel()
model = inject_adapter_in_model(lora_config, model, low_cpu_mem_usage=True)

print(model.linear.lora_A["default"].weight.device.type == "meta")  # should be True
set_peft_model_state_dict(model, peft_state_dict, low_cpu_mem_usage=True)
print(model.linear.lora_A["default"].weight.device.type == "cpu")  # should be True
```


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/low_level_api.md" />

### Custom models
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/custom_models.md

# Custom models

Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is
assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like
[LoRA](../conceptual_guides/lora) - are not restricted to specific model types.

In this guide, we will see how LoRA can be applied to a multilayer perceptron, a computer vision model from the [timm](https://huggingface.co/docs/timm/index) library, or a new 🤗 Transformers architecture.

## Multilayer perceptron

Let's assume that we want to fine-tune a multilayer perceptron with LoRA. Here is the definition:

```python
from torch import nn


class MLP(nn.Module):
    def __init__(self, num_units_hidden=2000):
        super().__init__()
        self.seq = nn.Sequential(
            nn.Linear(20, num_units_hidden),
            nn.ReLU(),
            nn.Linear(num_units_hidden, num_units_hidden),
            nn.ReLU(),
            nn.Linear(num_units_hidden, 2),
            nn.LogSoftmax(dim=-1),
        )

    def forward(self, X):
        return self.seq(X)
```

This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an output layer.

> [!TIP]
> For this toy example, we choose an exceedingly large number of hidden units to highlight the efficiency gains
> from PEFT, but those gains are in line with more realistic examples.

There are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers
models, PEFT will know which layers to apply LoRA to, but in this case, it is up to us as a user to choose the layers.
To determine the names of the layers to tune:

```python
print([(n, type(m)) for n, m in MLP().named_modules()])
```

This should print:

```
[('', __main__.MLP),
 ('seq', torch.nn.modules.container.Sequential),
 ('seq.0', torch.nn.modules.linear.Linear),
 ('seq.1', torch.nn.modules.activation.ReLU),
 ('seq.2', torch.nn.modules.linear.Linear),
 ('seq.3', torch.nn.modules.activation.ReLU),
 ('seq.4', torch.nn.modules.linear.Linear),
 ('seq.5', torch.nn.modules.activation.LogSoftmax)]
```

Let's say we want to apply LoRA to the input layer and to the hidden layer, those are `'seq.0'` and `'seq.2'`. Moreover,
let's assume we want to update the output layer without LoRA, that would be `'seq.4'`. The corresponding config would
be:

```python
from peft import LoraConfig

config = LoraConfig(
    target_modules=["seq.0", "seq.2"],
    modules_to_save=["seq.4"],
)
```

With that, we can create our PEFT model and check the fraction of parameters trained:

```python
from peft import get_peft_model

model = MLP()
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
# prints trainable params: 56,164 || all params: 4,100,164 || trainable%: 1.369798866581922
```

Finally, we can use any training framework we like, or write our own fit loop, to train the `peft_model`.
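
For instance, a bare-bones PyTorch loop with random dummy data could look like this (purely illustrative; any real training setup works just as well):

```python
import torch

X = torch.rand(64, 20)          # dummy inputs matching the MLP's 20 input features
y = torch.randint(0, 2, (64,))  # dummy binary labels

optimizer = torch.optim.AdamW(
    [p for p in peft_model.parameters() if p.requires_grad], lr=2e-3
)
criterion = torch.nn.NLLLoss()  # the MLP ends with LogSoftmax

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(peft_model(X), y)
    loss.backward()
    optimizer.step()
```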

For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/multilayer_perceptron/multilayer_perceptron_lora.ipynb).

## timm models

The [timm](https://huggingface.co/docs/timm/index) library contains a large number of pretrained computer vision models.
Those can also be fine-tuned with PEFT. Let's check out how this works in practice.

To start, ensure that timm is installed in the Python environment:

```bash
python -m pip install -U timm
```

Next we load a timm model for an image classification task:

```python
import timm

num_classes = ...
model_id = "timm/poolformer_m36.sail_in1k"
model = timm.create_model(model_id, pretrained=True, num_classes=num_classes)
```

Again, we need to make a decision about what layers to apply LoRA to. Since LoRA supports 2D conv layers, and since
those are a major building block of this model, we should apply LoRA to the 2D conv layers. To identify the names of
those layers, let's look at all the layer names:

```python
print([(n, type(m)) for n, m in model.named_modules()])
```

This will print a very long list; we'll only show the first few entries:

```
[('', timm.models.metaformer.MetaFormer),
 ('stem', timm.models.metaformer.Stem),
 ('stem.conv', torch.nn.modules.conv.Conv2d),
 ('stem.norm', torch.nn.modules.linear.Identity),
 ('stages', torch.nn.modules.container.Sequential),
 ('stages.0', timm.models.metaformer.MetaFormerStage),
 ('stages.0.downsample', torch.nn.modules.linear.Identity),
 ('stages.0.blocks', torch.nn.modules.container.Sequential),
 ('stages.0.blocks.0', timm.models.metaformer.MetaFormerBlock),
 ('stages.0.blocks.0.norm1', timm.layers.norm.GroupNorm1),
 ('stages.0.blocks.0.token_mixer', timm.models.metaformer.Pooling),
 ('stages.0.blocks.0.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
 ('stages.0.blocks.0.drop_path1', torch.nn.modules.linear.Identity),
 ('stages.0.blocks.0.layer_scale1', timm.models.metaformer.Scale),
 ('stages.0.blocks.0.res_scale1', torch.nn.modules.linear.Identity),
 ('stages.0.blocks.0.norm2', timm.layers.norm.GroupNorm1),
 ('stages.0.blocks.0.mlp', timm.layers.mlp.Mlp),
 ('stages.0.blocks.0.mlp.fc1', torch.nn.modules.conv.Conv2d),
 ('stages.0.blocks.0.mlp.act', torch.nn.modules.activation.GELU),
 ('stages.0.blocks.0.mlp.drop1', torch.nn.modules.dropout.Dropout),
 ('stages.0.blocks.0.mlp.norm', torch.nn.modules.linear.Identity),
 ('stages.0.blocks.0.mlp.fc2', torch.nn.modules.conv.Conv2d),
 ('stages.0.blocks.0.mlp.drop2', torch.nn.modules.dropout.Dropout),
 ('stages.0.blocks.0.drop_path2', torch.nn.modules.linear.Identity),
 ('stages.0.blocks.0.layer_scale2', timm.models.metaformer.Scale),
 ('stages.0.blocks.0.res_scale2', torch.nn.modules.linear.Identity),
 ('stages.0.blocks.1', timm.models.metaformer.MetaFormerBlock),
 ('stages.0.blocks.1.norm1', timm.layers.norm.GroupNorm1),
 ('stages.0.blocks.1.token_mixer', timm.models.metaformer.Pooling),
 ('stages.0.blocks.1.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
 ...
 ('head.global_pool.flatten', torch.nn.modules.linear.Identity),
 ('head.norm', timm.layers.norm.LayerNorm2d),
 ('head.flatten', torch.nn.modules.flatten.Flatten),
 ('head.drop', torch.nn.modules.linear.Identity),
 ('head.fc', torch.nn.modules.linear.Linear)]
```

Upon closer inspection, we see that the 2D conv layers have names such as `"stages.0.blocks.0.mlp.fc1"` and
`"stages.0.blocks.0.mlp.fc2"`. How can we match those layer names specifically? You can write a [regular
expressions](https://docs.python.org/3/library/re.html) to match the layer names. For our case, the regex
`r".*\.mlp\.fc\d"` should do the job.

Furthermore, as in the first example, we should ensure that the output layer, in this case the classification head, is
also updated. Looking at the end of the list printed above, we can see that it's named `'head.fc'`. With that in mind,
here is our LoRA config:

```python
config = LoraConfig(target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"])
```

Then we only need to create the PEFT model by passing our base model and the config to `get_peft_model`:

```python
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
# prints trainable params: 1,064,454 || all params: 56,467,974 || trainable%: 1.88505789139876
```

This shows us that we only need to train less than 2% of all parameters, which is a huge efficiency gain.

For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/image_classification/image_classification_timm_peft_lora.ipynb).

## New transformers architectures

When new popular transformers architectures are released, we do our best to quickly add them to PEFT. If you come across a transformers model that is not supported out of the box, don't worry, it will most likely still work if the config is set correctly. Specifically, you have to identify the layers that should be adapted and set them correctly when initializing the corresponding config class, e.g. `LoraConfig`. Here are some tips to help with this.

As a first step, it is a good idea to check the existing models for inspiration. You can find them inside of [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) in the PEFT repository. Often, you'll find a similar architecture that uses the same names. For example, if the new model architecture is a variation of the "mistral" model and you want to apply LoRA, you can see that the entry for "mistral" in `TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING` contains `["q_proj", "v_proj"]`. This tells you that for "mistral" models, the `target_modules` for LoRA should be `["q_proj", "v_proj"]`:

```python
from peft import LoraConfig, get_peft_model

my_mistral_model = ...
config = LoraConfig(
    target_modules=["q_proj", "v_proj"],
    ...,  # other LoRA arguments
)
peft_model = get_peft_model(my_mistral_model, config)
```

If that doesn't help, check the existing modules in your model architecture with the `named_modules` method and try to identify the attention layers, especially the key, query, and value layers. Those will often have names such as `c_attn`, `query`, `q_proj`, etc. The key layer is not always adapted, and ideally, you should check whether including it results in better performance.
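
For example, a quick way to surface candidate layer names (the substrings below are only a heuristic, and `my_model` is a placeholder for your model):

```python
candidates = ("q_proj", "k_proj", "v_proj", "c_attn", "query", "key", "value")
attention_like = sorted(
    {name for name, _ in my_model.named_modules() if name.split(".")[-1] in candidates}
)
print(attention_like)
```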

Additionally, linear layers are common targets to be adapted (e.g. in the [QLoRA paper](https://huggingface.co/papers/2305.14314), the authors suggest adapting them as well). Their names will often contain the strings `fc` or `dense`.

If you want to add a new model to PEFT, please create an entry in [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) and open a pull request on the [repository](https://github.com/huggingface/peft/pulls). Don't forget to update the [README](https://github.com/huggingface/peft#models-support-matrix) as well.

## Verify parameters and layers

You can verify whether you've correctly applied a PEFT method to your model in a few ways.

* Check the fraction of parameters that are trainable with the [print_trainable_parameters()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.print_trainable_parameters) method. If this number is lower or higher than expected, check the model `repr` by printing the model. This shows the names of all the layer types in the model. Ensure that only the intended target layers are replaced by the adapter layers. For example, if LoRA is applied to `nn.Linear` layers, then you should only see `lora.Linear` layers being used.

```py
peft_model.print_trainable_parameters()
```

* Another way you can view the adapted layers is to use the `targeted_module_names` attribute to list the name of each module that was adapted.

```python
print(peft_model.targeted_module_names)
```

## Unsupported module types

Methods like LoRA only work if the target modules are supported by PEFT. For example, it's possible to apply LoRA to `nn.Linear` and `nn.Conv2d` layers, but not, for instance, to `nn.LSTM`. If you find a layer class you want to apply PEFT to is not supported, you can:

- define a custom mapping to dynamically dispatch custom modules in LoRA
- open an [issue](https://github.com/huggingface/peft/issues) and request the feature; if demand for this module type is sufficiently high, the maintainers will implement it or guide you on how to implement it yourself

### Experimental support for dynamic dispatch of custom modules in LoRA

> [!WARNING]
> This feature is experimental and subject to change, depending on its reception by the community. We will introduce a public and stable API if there is significant demand for it.

PEFT supports an experimental API for custom module types for LoRA. Let's assume you have a LoRA implementation for LSTMs. Normally, you would not be able to tell PEFT to use it, even if it would theoretically work with PEFT. However, this is possible with dynamic dispatch of custom layers.

The experimental API currently looks like this:

```python
class MyLoraLSTMLayer:
    ...

base_model = ...  # load the base model that uses LSTMs

# add the LSTM layer names to target_modules
config = LoraConfig(..., target_modules=["lstm"])
# define a mapping from base layer type to LoRA layer type
custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
# register the new mapping
config._register_custom_module(custom_module_mapping)
# after registration, create the PEFT model
peft_model = get_peft_model(base_model, config)
# do training
```

> [!TIP]
> When you call [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model), you will see a warning because PEFT does not recognize the targeted module type. In this case, you can ignore this warning.

By supplying a custom mapping, PEFT first checks the base model's layers against the custom mapping and dispatches to the custom LoRA layer type if there is a match. If there is no match, PEFT checks the built-in LoRA layer types for a match.

Therefore, this feature can also be used to override existing dispatch logic, e.g. if you want to use your own LoRA layer for `nn.Linear` instead of using the one provided by PEFT.

When creating your custom LoRA module, please follow the same rules as the [existing LoRA modules](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py). Some important constraints to consider:

- The custom module should inherit from `nn.Module` and `peft.tuners.lora.layer.LoraLayer`.
- The `__init__` method of the custom module should have the positional arguments `base_layer` and `adapter_name`. After this, there are additional `**kwargs` that you are free to use or ignore.
- The learnable parameters should be stored in an `nn.ModuleDict` or `nn.ParameterDict`, where the key corresponds to the name of the specific adapter (remember that a model can have more than one adapter at a time).
- The name of these learnable parameter attributes should start with `"lora_"`, e.g. `self.lora_new_param = ...`.
- Some methods are optional, e.g. you only need to implement `merge` and `unmerge` if you want to support weight merging.

Currently, the information about the custom module does not persist when you save the model. When loading the model, you have to register the custom modules again.

```python
# saving works as always and includes the parameters of the custom modules
peft_model.save_pretrained(<model-path>)

# loading the model later:
base_model = ...
# load the LoRA config that you saved earlier
config = LoraConfig.from_pretrained(<model-path>)
# register the custom module again, the same way as the first time
custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
config._register_custom_module(custom_module_mapping)
# pass the config instance to from_pretrained:
peft_model = PeftModel.from_pretrained(base_model, <model-path>, config=config)
```

If you use this feature and find it useful, or if you encounter problems, let us know by creating an issue or a discussion on GitHub. This allows us to estimate the demand for this feature and add a public API if it is sufficiently high.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/custom_models.md" />

### Troubleshooting
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/troubleshooting.md

# Troubleshooting

If you encounter any issue when using PEFT, please check the following list of common issues and their solutions.

## Examples don't work

Examples often rely on the most recent package versions, so please ensure they're up-to-date. In particular, check the following package versions:

- `peft`
- `transformers`
- `accelerate`
- `torch`

In general, you can update the package version by running this command inside your Python environment:

```bash
python -m pip install -U <package_name>
```

Installing PEFT from source is useful for keeping up with the latest developments:

```bash
python -m pip install git+https://github.com/huggingface/peft
```

## Dtype-related issues

### ValueError: Attempting to unscale FP16 gradients

This error probably occurred because the model was loaded with `dtype=torch.float16` and then used in an automatic mixed precision (AMP) context, e.g. by setting `fp16=True` in the [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer) class from 🤗 Transformers. The reason is that when using AMP, trainable weights should never use fp16. To make this work without loading the whole model in fp32, add the following to your code:

```python
peft_model = get_peft_model(...)

# add this:
for param in peft_model.parameters():
    if param.requires_grad:
        param.data = param.data.float()

# proceed as usual
trainer = Trainer(model=peft_model, fp16=True, ...)
trainer.train()
```

Alternatively, you can use the [cast_mixed_precision_params()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.cast_mixed_precision_params) function to correctly cast the weights:

```python
from peft import cast_mixed_precision_params

peft_model = get_peft_model(...)
cast_mixed_precision_params(peft_model, dtype=torch.float16)

# proceed as usual
trainer = Trainer(model=peft_model, fp16=True, ...)
trainer.train()
```

> [!TIP]
> Starting from PEFT version v0.12.0, PEFT automatically promotes the dtype of adapter weights from `torch.float16` and `torch.bfloat16` to `torch.float32` where appropriate. To _prevent_ this behavior, you can pass `autocast_adapter_dtype=False` to [~get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model), to [from_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.from_pretrained), and to [load_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.load_adapter).

### Selecting the dtype of the adapter

Most PEFT methods, like LoRA, work by adding trainable adapter weights. By default, those weights are stored in float32 dtype (fp32), i.e. at a relatively high precision. Therefore, even if the base model is loaded in float16 (fp16) or bfloat16 (bf16), the adapter weights are float32. When the adapter results are calculated during the forward pass, the input will typically be in the dtype of the base model, thus it will be upcast to float32 if necessary, then cast back to the original dtype.

If you prefer to have the adapter weights in the lower precision of the base model, i.e. in float16 or bfloat16, you can pass `autocast_adapter_dtype=False` when creating the model ([~get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model)) or loading the model ([from_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.from_pretrained)). There are some advantages and disadvantages to this:

Advantages of half precision adapter:
- computation slightly faster
- slightly less memory
- smaller file size of checkpoint (half the size)

Disadvantages of half precision adapter:
- slightly worse loss
- higher risk of overflow or underflow

Note that for most use cases, overall runtime and memory cost will be determined by the size of the base model and by the dataset, while the dtype of the PEFT adapter will only have a small impact.
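
For example, to keep the LoRA weights in the base model's bfloat16 precision (a minimal sketch; the model id is only an example):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", dtype=torch.bfloat16)
peft_model = get_peft_model(base_model, LoraConfig(), autocast_adapter_dtype=False)
# the adapter weights now stay in bfloat16 instead of being upcast to float32
```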

## Bad results from a loaded PEFT model

There can be several reasons for getting a poor result from a loaded PEFT model which are listed below. If you're still unable to troubleshoot the problem, see if anyone else had a similar [issue](https://github.com/huggingface/peft/issues) on GitHub, and if you can't find any, open a new issue.

When opening an issue, it helps a lot if you provide a minimal code example that reproduces the issue. Also, please report if the loaded model performs at the same level as the model did before fine-tuning, if it performs at a random level, or if it is only slightly worse than expected. This information helps us identify the problem more quickly.

### Random deviations

If your model outputs are not exactly the same as previous runs, there could be an issue with random elements. For example:

1. please ensure the model is in `.eval()` mode, which is important, for instance, if the model uses dropout
2. if you use [generate](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) on a language model, there could be random sampling, so obtaining the same result requires setting a random seed
3. if you used quantization and merged the weights, small deviations are expected due to rounding errors

### Incorrectly loaded model

Please ensure that you load the model correctly. A common error is trying to load a _trained_ model with [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) which is incorrect. Instead, the loading code should look like this:

```python
from peft import PeftModel, PeftConfig

base_model = ...  # to load the base model, use the same code as when you trained it
config = PeftConfig.from_pretrained(peft_model_id)
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)
```

### Randomly initialized layers

For some tasks, it is important to correctly configure `modules_to_save` in the config to account for randomly initialized layers. 

As an example, this is necessary if you use LoRA to fine-tune a language model for sequence classification because 🤗 Transformers adds a randomly initialized classification head on top of the model. If you do not add this layer to `modules_to_save`, the classification head won't be saved. The next time you load the model, you'll get a _different_ randomly initialized classification head, resulting in completely different results.

PEFT tries to correctly guess the `modules_to_save` if you provide the `task_type` argument in the config. This should work for transformers models that follow the standard naming scheme. It is always a good idea to double check though because we can't guarantee all models follow the naming scheme.

When you load a transformers model that has randomly initialized layers, you should see a warning along the lines of:

```
Some weights of <MODEL> were not initialized from the model checkpoint at <ID> and are newly initialized: [<LAYER_NAMES>].
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

The mentioned layers should be added to `modules_to_save` in the config to avoid the described problem.

> [!TIP]
> As an example, when loading a model that is using the DeBERTa architecture for sequence classification, you'll see a warning that the following weights are newly initialized: `['classifier.bias', 'classifier.weight', 'pooler.dense.bias', 'pooler.dense.weight']`. From this, it follows that the `classifier` and `pooler` layers should be added to: `modules_to_save=["classifier", "pooler"]`.

### Extending the vocabulary

For many language fine-tuning tasks, extending the model's vocabulary is necessary since new tokens are being introduced. This requires extending the embedding layer to account for the new tokens and, depending on the fine-tuning method, also storing the embedding layer in addition to the adapter weights when saving the adapter. There are a few ways of achieving this ordered by parameter effectiveness:

- [trainable tokens](../package_reference/trainable_tokens), train only the specified tokens, optionally store only the updated values
- training an adapter on the embedding matrix, optionally store only the updated values
- full-finetuning of the embedding layer

#### Using trainable tokens

Let's start with trainable tokens, in this case its [LoRA integration](../developer_guides/lora#efficiently-train-tokens-alongside-lora).  If you're interested in only training the new embeddings and nothing else, refer to the [standalone documentation](../package_reference/trainable_tokens).

To enable selective token training of the embedding layer, you'll need to supply the token ids of your newly added tokens via the `trainable_token_indices` parameter.  Optionally you can specify which layer to target if there is more than one embedding layer. For a Mistral model this could look like this:

```python
new_tokens = ['<think>', '</think>']
tokenizer.add_tokens(new_tokens)
base_model.resize_token_embeddings(len(tokenizer))

lora_config = LoraConfig(
    ...,
    trainable_token_indices={'embed_tokens': tokenizer.convert_tokens_to_ids(new_tokens)},
)
```

If your model uses tied weights (such as the `lm_head`), trainable tokens will try to resolve those and keep them updated as well, so in that case there should be no need for adding `modules_to_save=["lm_head"]`. This only works if the model uses the Transformers convention for tying weights.

Saving the model with `model.save_pretrained` may save the full embedding matrix instead of
only the difference as a precaution because the embedding matrix was resized. To save space, you can disable this behavior by setting `save_embedding_layers=False` when calling `save_pretrained`. This is safe to do as long as you don't modify the embedding matrix through other means as well, since such changes will not be tracked by trainable tokens.

#### Using an adapter, e.g. LoRA

Prepare the embedding layer by adding it to the `target_modules` of your adapter config. For example, the Mistral config could look like this:

```python
config = LoraConfig(..., target_modules=["embed_tokens", "lm_head", "q_proj", "v_proj"])
```

Once added to `target_modules`, PEFT automatically stores the embedding layer when saving the adapter if the model has the `get_input_embeddings` and `get_output_embeddings` methods. This is generally the case for Transformers models.

If the model's embedding layer doesn't follow the Transformers naming scheme but nevertheless implements `get_input_embeddings`, you can still save it by manually passing `save_embedding_layers=True` when saving the adapter:

```python
model = get_peft_model(...)
# train the model
model.save_pretrained("my_adapter", save_embedding_layers=True)
```

For inference, load the base model first and resize it the same way you did before you trained the model. After you've resized the base model, you can load the PEFT checkpoint.
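
A minimal sketch of the inference side, assuming you saved the extended tokenizer alongside the adapter (the paths and model id are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("path/to/my_adapter")  # contains the added tokens
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
base_model.resize_token_embeddings(len(tokenizer))               # same resize as before training
model = PeftModel.from_pretrained(base_model, "path/to/my_adapter")
```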

For a complete example, please check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb).

#### Full fine-tuning

Full fine-tuning is more costly in terms of VRAM or storage space but if all else fails, you can fall back to this and see if it works for you. Achieve it by adding the name of the embedding layer to `modules_to_save`. Note that you need to add tied layers as well, e.g. `lm_head`. Example for a Mistral model with LoRA:

```python
config = LoraConfig(..., modules_to_save=["embed_tokens", "lm_head"], target_modules=["q_proj", "v_proj"])
```

### Getting a warning about "weights not being initialized from the model checkpoint"

When you load your PEFT model which has been trained on a task (for example, classification), you may get a warning like:

> Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-3.2-1B and are newly initialized: ['score.weight']. You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Although this looks scary, it is most likely nothing to worry about. This warning comes from Transformers, and it isn't a PEFT specific warning. It lets you know that a randomly initialized classification head (`score`) is attached to the base model, and the head must be trained to produce sensible predictions.

When you get this warning _before_ training the model, PEFT automatically takes care of making the classification head trainable if you correctly passed the `task_type` argument to the PEFT config.

```python
from peft import LoraConfig, TaskType

lora_config = LoraConfig(..., task_type=TaskType.SEQ_CLS)
```

If your classification head does not follow the usual naming conventions from Transformers (which is rare), you have to explicitly tell PEFT the name of the head in `modules_to_save`.

```python
lora_config = LoraConfig(..., modules_to_save=["name-of-classification-head"])
```

To check the name of the classification head, print the model and it should be the last module.

If you get this warning from your inference code, i.e. _after_ training the model, remember that when you load the PEFT model, you always have to load the Transformers model first. Since Transformers does not know that you will load PEFT weights afterwards, it still gives the warning.

As always, it is best practice to ensure the model works correctly for inference by running some validation on it.

### Check layer and model status

Sometimes a PEFT model can end up in a bad state, especially when handling multiple adapters. There can be some confusion around what adapters exist, which one is active, which one is merged, etc. To help investigate this issue, call the [get_layer_status()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.get_layer_status) and the [get_model_status()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.get_model_status) methods. 

The [get_layer_status()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.get_layer_status) method gives you a detailed overview of each targeted layer's active, merged, and available adapters.

```python
>>> from transformers import AutoModel
>>> from peft import get_peft_model, LoraConfig

>>> model_id = "google/flan-t5-small"
>>> model = AutoModel.from_pretrained(model_id)
>>> model = get_peft_model(model, LoraConfig())

>>> model.get_layer_status()
[TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.q',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['default'],
                  merged_adapters=[],
                  requires_grad={'default': True},
                  available_adapters=['default']),
 TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.v',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['default'],
                  merged_adapters=[],
                  requires_grad={'default': True},
                  available_adapters=['default']),
...]

>>> model.get_model_status()
TunerModelStatus(
    base_model_type='T5Model',
    adapter_model_type='LoraModel',
    peft_types={'default': 'LORA'},
    trainable_params=344064,
    total_params=60855680,
    num_adapter_layers=48,
    enabled=True,
    active_adapters=['default'],
    merged_adapters=[],
    requires_grad={'default': True},
    available_adapters=['default'],
)
```

In the model state output, you should look out for entries that say `"irregular"`. This means PEFT detected an inconsistent state in the model. For instance, if `merged_adapters="irregular"`, it means that for at least one adapter, it was merged on some target modules but not on others. The inference results will most likely be incorrect as a result.

The best way to resolve this issue is to reload the whole model and adapter checkpoint(s). Ensure that you don't perform any incorrect operations on the model, e.g. manually merging adapters on some modules but not others.

Convert the layer status into a pandas `DataFrame` for an easier visual inspection.

```python
from dataclasses import asdict
import pandas as pd

df = pd.DataFrame(asdict(layer) for layer in model.get_layer_status())
```

It is possible to get this information for non-PEFT models if they are using PEFT layers under the hood, but some information like the `base_model_type` or the `peft_types` cannot be determined in that case. As an example, you can call this on a [diffusers](https://huggingface.co/docs/diffusers/index) model like so:

```python
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> from peft import get_model_status, get_layer_status

>>> path = "runwayml/stable-diffusion-v1-5"
>>> lora_id = "takuma104/lora-test-text-encoder-lora-target"
>>> pipe = StableDiffusionPipeline.from_pretrained(path, dtype=torch.float16)
>>> pipe.load_lora_weights(lora_id, adapter_name="adapter-1")
>>> pipe.load_lora_weights(lora_id, adapter_name="adapter-2")
>>> pipe.set_lora_device(["adapter-2"], "cuda")
>>> get_layer_status(pipe.text_encoder)
[TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.k_proj',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['adapter-2'],
                  merged_adapters=[],
                  requires_grad={'adapter-1': False, 'adapter-2': True},
                  available_adapters=['adapter-1', 'adapter-2'],
                  devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
 TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.v_proj',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['adapter-2'],
                  merged_adapters=[],
                  requires_grad={'adapter-1': False, 'adapter-2': True},
                  devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
...]

>>> get_model_status(pipe.unet)
TunerModelStatus(
    base_model_type='other',
    adapter_model_type='None',
    peft_types={},
    trainable_params=797184,
    total_params=861115332,
    num_adapter_layers=128,
    enabled=True,
    active_adapters=['adapter-2'],
    merged_adapters=[],
    requires_grad={'adapter-1': False, 'adapter-2': True},
    available_adapters=['adapter-1', 'adapter-2'],
    devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']},
)
```

## Speed

### Loading adapter weights is slow

Loading adapters like LoRA weights should generally be fast compared to loading the base model. However, there can be use cases where the adapter weights are quite large or where users need to load a large number of adapters -- the loading time can add up in this case. The reason for this is that the adapter weights are first initialized and then overridden by the loaded weights, which is wasteful. To speed up the loading time, you can pass the `low_cpu_mem_usage=True` argument to [from_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.from_pretrained) and [load_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.load_adapter).

> [!TIP]
> If this option works well across different use cases, it may become the default for adapter loading in the future.


## Reproducibility

### Models using batch norm

When loading a trained PEFT model where the base model uses batch norm (e.g. `torch.nn.BatchNorm1d` or `torch.nn.BatchNorm2d`), you may find that you cannot reproduce the exact same outputs. This is because the batch norm layers keep track of running stats during training, but these stats are not part of the PEFT checkpoint. Therefore, when you load the PEFT model, the running stats of the base model will be used (i.e. from before training with PEFT).

Depending on your use case, this may not be a big deal. If, however, you need your outputs to be 100% reproducible, you can achieve this by adding the batch norm layers to `modules_to_save`. Below is an example of this using resnet and LoRA. Notice that we set `modules_to_save=["classifier", "normalization"]`. We need the `"classifier"` argument because our task is image classification, and we add the `"normalization"` argument to ensure that the batch norm layers are saved in the PEFT checkpoint.

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

model_id = "microsoft/resnet-18"
base_model = AutoModelForImageClassification.from_pretrained(model_id)
config = LoraConfig(
    target_modules=["convolution"],
    modules_to_save=["classifier", "normalization"],
)
```

Depending on the type of model you use, the batch norm layers could have different names than `"normalization"`, so please ensure that the name matches your model architecture.

## Version mismatch

### Error while loading the config because of an unexpected keyword argument

When you encounter an error like the one shown below, it means the adapter you're trying to load was trained with a more recent version of PEFT than the version you have installed on your system.

```
TypeError: LoraConfig.__init__() got an unexpected keyword argument <argument-name>
```

The best way to resolve this issue is to install the latest PEFT version:

```sh
python -m pip install -U peft
```

If the adapter was trained from a source install of PEFT (an unreleased version of PEFT), then you also need to install PEFT from source.

```sh
python -m pip install -U git+https://github.com/huggingface/peft.git
```

If it is not possible for you to upgrade PEFT, there is a workaround you can try.

Assume the error message says that the unknown keyword argument is named `foobar`. Search inside the `adapter_config.json` of this PEFT adapter for the `foobar` entry and delete it from the file. Then save the file and try loading the model again.

This solution works most of the time. As long as it is the default value for `foobar`, it can be ignored. However, when it is set to some other value, you will get incorrect results. Upgrading PEFT is the recommended solution.
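
If you prefer to script this workaround instead of editing the file by hand, here is a minimal sketch (with `foobar` standing in for the unknown argument and the path being a placeholder):

```python
import json

path = "path/to/adapter/adapter_config.json"
with open(path) as f:
    config = json.load(f)

config.pop("foobar", None)  # drop the entry that your PEFT version doesn't recognize

with open(path, "w") as f:
    json.dump(config, f, indent=2)
```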

## Adapter handling

### Using multiple adapters at the same time

PEFT allows you to create more than one adapter on the same model. This can be useful in many situations. For example, for inference, you may want to serve two fine-tuned models from the same base model instead of loading the base model once for each fine-tuned model, which would cost more memory. Moreover, multiple adapters can be activated at the same time, which means the model can leverage what all of those adapters learned at once. As an example, if you have a diffusion model, you may want to use one LoRA adapter to change the style and a different one to change the subject.

Activating multiple adapters at the same time is generally possible on all PEFT methods (LoRA, LoHa, IA³, etc.) except for prompt learning methods (p-tuning, prefix tuning, etc.). The following example illustrates how to achieve this:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

model_id = ...
base_model = AutoModelForCausalLM.from_pretrained(model_id)
model = PeftModel.from_pretrained(base_model, lora_path_0)  # default adapter_name is 'default'
model.load_adapter(lora_path_1, adapter_name="other")
# the 'other' adapter was loaded but it's not active yet, so to activate both adapters:
model.base_model.set_adapter(["default", "other"])
```

> [!TIP]
> In the example above, you can see that we need to call `model.base_model.set_adapter(["default", "other"])`. Why can we not call `model.set_adapter(["default", "other"])`? This is unfortunately not possible because, as explained earlier, some PEFT methods don't support activating more than one adapter at a time.

It is also possible to train two adapters at the same time, but you should be careful to ensure that the weights of both adapters are known to the optimizer. Otherwise, only one adapter will receive updates.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = ...
base_model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config_0 = LoraConfig(...)
lora_config_1 = LoraConfig(...)
model = get_peft_model(base_model, lora_config_0)
model.add_adapter(adapter_name="other", peft_config=lora_config_1)
```

If we were now to call:

```python
from transformers import Trainer

trainer = Trainer(model=model,  ...)
trainer.train()
```

or

```python
optimizer = torch.optim.AdamW([param for param in model.parameters() if param.requires_grad], ...)
```

then the second LoRA adapter (`"other"`) would not be trained. This is because it is inactive at this moment, which means the `requires_grad` attribute on its parameters is set to `False` and the optimizer will ignore it. Therefore, make sure to activate all adapters that should be trained _before_ initializing the optimizer:

```python
# activate all adapters
model.base_model.set_adapter(["default", "other"])
trainer = Trainer(model=model,  ...)
trainer.train()
```

> [!TIP]
> This section deals with using multiple adapters _of the same type_ on the same model, for example, using multiple LoRA adapters at the same time. It does not apply to using _different types_ of adapters on the same model, for example one LoRA adapter and one LoHa adapter. For this, please check [`PeftMixedModel`](https://huggingface.co/docs/peft/developer_guides/mixed_models).


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/troubleshooting.md" />

### Mixed adapter types
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/mixed_models.md

# Mixed adapter types

Normally, it isn't possible to mix different adapter types in 🤗 PEFT. You can create a PEFT model with two different LoRA adapters (which can have different config options), but it is not possible to combine a LoRA and LoHa adapter. With [PeftMixedModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftMixedModel) however, this works as long as the adapter types are compatible. The main purpose of allowing mixed adapter types is to combine trained adapters for inference. While it is possible to train a mixed adapter model, this has not been tested and is not recommended.

To load different adapter types into a PEFT model, use [PeftMixedModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftMixedModel) instead of [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel):

```py
from peft import PeftMixedModel

base_model = ...  # load the base model, e.g. from transformers
# load first adapter, which will be called "default"
peft_model = PeftMixedModel.from_pretrained(base_model, <path_to_adapter1>)
peft_model.load_adapter(<path_to_adapter2>, adapter_name="other")
peft_model.set_adapter(["default", "other"])
```

The [set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftMixedModel.set_adapter) method is necessary to activate both adapters, otherwise only the first adapter would be active. You can keep adding more adapters by calling [add_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.add_adapter) repeatedly.
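
For example, a third adapter of yet another compatible type could be added like this (a minimal sketch; the `LoHaConfig` options are elided and depend on your use case):

```py
from peft import LoHaConfig

loha_config = LoHaConfig(...)
peft_model.add_adapter("third", loha_config)
# activate all three adapters together
peft_model.set_adapter(["default", "other", "third"])
```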

[PeftMixedModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftMixedModel) does not support saving and loading mixed adapters. The adapters should already be trained, and loading the model requires a script to be run each time.

## Tips

- Not all adapter types can be combined. See [`peft.tuners.mixed.COMPATIBLE_TUNER_TYPES`](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/src/peft/tuners/mixed/model.py#L35) for a list of compatible types. An error will be raised if you try to combine incompatible adapter types.
- It is possible to mix multiple adapters of the same type which can be useful for combining adapters with very different configs.
- If you want to combine a lot of different adapters, the most performant way to do it is to consecutively add the same adapter types. For example, add LoRA1, LoRA2, LoHa1, LoHa2 in this order, instead of LoRA1, LoHa1, LoRA2, and LoHa2. While the order can affect the output, there is no inherently *best* order, so it is best to choose the fastest one.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/mixed_models.md" />

### Contribute to PEFT
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/contributing.md

# Contribute to PEFT

We are happy to accept contributions to PEFT. If you plan to contribute, please read this to make the process as smooth as possible.

## Installation

For code contributions to PEFT, you should choose the ["source"](../install#source) installation method.

If you are new to creating a pull request, follow the [Creating a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) guide by GitHub.

## Tests and code quality checks

Regardless of the contribution type (unless it’s only about the docs), you should run tests and code quality checks before creating a PR to ensure your contribution doesn’t break anything and follows the project standards.

We provide a Makefile to execute the necessary tests. Run the code below for the unit test:

```sh
make test
```

Run one of the following to either only check or check and fix code quality and style:

```sh
make quality  # just check
make style  # check and fix
```

You can also set up [`pre-commit`](https://pre-commit.com/) to run these fixes
automatically as Git commit hooks.

```bash
$ pip install pre-commit
$ pre-commit install
```

Running all the tests can take a while, so during development it can be more efficient to only [run tests specific to your change](https://docs.pytest.org/en/6.2.x/usage.html#specifying-tests-selecting-tests), e.g. via:

```sh
pytest tests/<test-file-name> -k <name-of-test>
```

This should finish much quicker and allow for faster iteration.

If your change is specific to a hardware setting (e.g., it requires CUDA), take a look at [tests/test_gpu_examples.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_gpu_examples.py) and [tests/test_common_gpu.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_common_gpu.py) to see if it makes sense to add tests there. If your change could have an effect on saving and loading models, please run the tests with the `--regression` flag to trigger regression tests.
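
For instance, a run that also triggers the regression tests could look like this (a sketch; narrow the test selection down to the files relevant to your change):

```sh
pytest tests/ --regression
```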

It can happen that while you’re working on your PR, the underlying code base changes due to other changes being merged. If that happens – especially when there is a merge conflict – please update your branch with the latest changes. This can be a merge or a rebase, and we'll squash and merge the PR once it’s ready. If possible, avoid force pushes to make reviews easier.

## PR description

When opening a PR, please provide a nice description of the change you're proposing. If it relates to other issues or PRs, please reference them. Providing a good description not only helps the reviewers review your code better and faster, it can also be used later (as a basis) for the commit message which helps with long term maintenance of the project.

If your code makes some non-trivial changes, it may also be a good idea to add comments to the code to explain those changes. For example, if you had to iterate on your implementation multiple times because the most obvious way didn’t work, it’s a good indication that a code comment is needed.

## Bugfixes

Please give a description of the circumstances that led to the bug. If there is an existing issue, please link to it (e.g., “Resolves #12345”).

Ideally when a bugfix is provided, it should be accompanied by a test for the bug. The test should fail with the current code and pass with the bugfix. Add a comment to the test that references the issue or PR. Without a test, it is more difficult to prevent regressions in the future.

## Add a new fine-tuning method

New parameter-efficient fine-tuning methods are developed all the time. If you would like to add a new and promising method to PEFT, please follow these steps.

1. Before you start to implement the new method, please open a [GitHub issue](https://github.com/huggingface/peft/issues) with your proposal. This way, the maintainers can give you some early feedback.
2. Please add a link to the source (usually a paper) of the method. The paper should be in a final state to avoid changing requirements during development (e.g. due to reviewer feedback).
3. When implementing the method, it makes sense to look for existing implementations that already exist as a guide. Moreover, when you structure your code, please take inspiration from the other PEFT methods. For example, if your method is similar to LoRA, it makes sense to structure your code similarly or even reuse some functions or classes where it makes sense (some code duplication is okay, but don’t overdo it).
4. Ideally, in addition to the implementation of the new method, there should also be
   - [examples](https://github.com/huggingface/peft/tree/main/examples) (notebooks, scripts)
   - [documentation](https://github.com/huggingface/peft/tree/main/docs/source)
   - [extensive test suite](https://github.com/huggingface/peft/tree/main/tests) that proves the method correctly integrates with PEFT
   - [experimental setup](https://github.com/huggingface/peft/tree/main/method_comparison#creating-new-experiments) to run benchmarks
5. Once you have something that seems to be working, don’t hesitate to create a draft PR even if it’s not in a mergeable state yet. The maintainers are happy to give you feedback and guidance along the way.

## Add other features

It is best if you first open an issue on GitHub with a proposal to add the new feature. This way, you can discuss with the maintainers if it makes sense to add the feature before spending too much time on implementing it.

New features should generally be accompanied by tests and documentation or examples. Without the latter, users will have a hard time discovering your cool new feature.

Changes to the code should be implemented in a backward-compatible way. For example, existing code should continue to work the same way after the feature is merged.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/contributing.md" />

### PEFT checkpoint format
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/checkpoint.md

# PEFT checkpoint format

This document describes how PEFT's checkpoint files are structured and how to convert between the PEFT format and other formats.

## PEFT files

PEFT (parameter-efficient fine-tuning) methods only update a small subset of a model's parameters rather than all of them. This is nice because checkpoint files can generally be much smaller than the original model files and are easier to store and share. However, this also means that to load a PEFT model, you need to have the original model available as well.

When you call [save_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.save_pretrained) on a PEFT model, the PEFT model saves three files, described below:

1. `adapter_model.safetensors` or `adapter_model.bin`

By default, the model is saved in the `safetensors` format, a secure alternative to the `bin` format, which is known to be susceptible to [security vulnerabilities](https://huggingface.co/docs/hub/security-pickle) because it uses the pickle utility under the hood. Both formats store the same `state_dict` though, and are interchangeable.

The `state_dict` only contains the parameters of the adapter module, not the base model. To illustrate the difference in size, a normal BERT model requires ~420MB of disk space, whereas an IA³ adapter on top of this BERT model only requires ~260KB.
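
If you want to verify this yourself, the adapter checkpoint can be inspected directly with the `safetensors` library (a sketch; `my_ia3_adapter` is a hypothetical directory created by `save_pretrained()`):

```python
from safetensors.torch import load_file

state_dict = load_file("my_ia3_adapter/adapter_model.safetensors")
print(len(state_dict))                              # number of adapter tensors
print(sum(t.numel() for t in state_dict.values()))  # total number of adapter parameters
```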

2. `adapter_config.json`

The `adapter_config.json` file contains the configuration of the adapter module, which is necessary to load the model. Below is an example of an `adapter_config.json` for an IA³ adapter with standard settings applied to a BERT model:

```json
{
  "auto_mapping": {
    "base_model_class": "BertModel",
    "parent_library": "transformers.models.bert.modeling_bert"
  },
  "base_model_name_or_path": "bert-base-uncased",
  "fan_in_fan_out": false,
  "feedforward_modules": [
    "output.dense"
  ],
  "inference_mode": true,
  "init_ia3_weights": true,
  "modules_to_save": null,
  "peft_type": "IA3",
  "revision": null,
  "target_modules": [
    "key",
    "value",
    "output.dense"
  ],
  "task_type": null
}
```

The configuration file contains:

- the adapter module type stored, `"peft_type": "IA3"`
- information about the base model like `"base_model_name_or_path": "bert-base-uncased"`
- the revision of the model (if any), `"revision": null`

If the base model is not a pretrained Transformers model, the latter two entries will be `null`. Other than that, the settings are all related to the specific IA³ adapter that was used to fine-tune the model.

3. `README.md`

The generated `README.md` is the model card of a PEFT model and contains a few pre-filled entries. The intent of this is to make it easier to share the model with others and to provide some basic information about the model. This file is not needed to load the model.

## Convert to PEFT format

When converting from another format to the PEFT format, we require both the `adapter_model.safetensors` (or `adapter_model.bin`) file and the `adapter_config.json` file.

### adapter_model

For the model weights, it is important to use the correct mapping from parameter name to value for PEFT to load the file. Getting this mapping right is an exercise in checking the implementation details, as there is no generally agreed upon format for PEFT adapters.

Fortunately, figuring out this mapping is not overly complicated for common base cases. Let's look at a concrete example, the [`LoraLayer`](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py):

```python
# showing only part of the code

class LoraLayer(BaseTunerLayer):
    # All names of layers that may contain (trainable) adapter weights
    adapter_layer_names = ("lora_A", "lora_B", "lora_embedding_A", "lora_embedding_B")
    # All names of other parameters that may contain adapter-related parameters
    other_param_names = ("r", "lora_alpha", "scaling", "lora_dropout")

    def __init__(self, base_layer: nn.Module, **kwargs) -> None:
        self.base_layer = base_layer
        self.r = {}
        self.lora_alpha = {}
        self.scaling = {}
        self.lora_dropout = nn.ModuleDict({})
        self.lora_A = nn.ModuleDict({})
        self.lora_B = nn.ModuleDict({})
        # For Embedding layer
        self.lora_embedding_A = nn.ParameterDict({})
        self.lora_embedding_B = nn.ParameterDict({})
        # Mark the weight as unmerged
        self._disable_adapters = False
        self.merged_adapters = []
        self.use_dora: dict[str, bool] = {}
        self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None  # for DoRA
        self._caches: dict[str, Any] = {}
        self.kwargs = kwargs
```

In the `__init__` code used by all `LoraLayer` classes in PEFT, there are a bunch of parameters used to initialize the model, but only a few are relevant for the checkpoint file: `lora_A`, `lora_B`, `lora_embedding_A`, and `lora_embedding_B`. These parameters are listed in the class attribute `adapter_layer_names` and contain the learnable parameters, so they must be included in the checkpoint file. All the other parameters, like the rank `r`, are derived from the `adapter_config.json` and must be included there (unless the default value is used).

Let's check the `state_dict` of a PEFT LoRA model applied to BERT. When printing the first five keys using the default LoRA settings (the remaining keys are the same, just with different layer numbers), we get:

- `base_model.model.encoder.layer.0.attention.self.query.lora_A.weight` 
- `base_model.model.encoder.layer.0.attention.self.query.lora_B.weight` 
- `base_model.model.encoder.layer.0.attention.self.value.lora_A.weight` 
- `base_model.model.encoder.layer.0.attention.self.value.lora_B.weight` 
- `base_model.model.encoder.layer.1.attention.self.query.lora_A.weight`
- etc.

Let's break this down:

- By default, for BERT models, LoRA is applied to the `query` and `value` layers of the attention module. This is why you see `attention.self.query` and `attention.self.value` in the key names for each layer.
- LoRA decomposes the weights into two low-rank matrices, `lora_A` and `lora_B`. This is where `lora_A` and `lora_B` come from in the key names.
- These LoRA matrices are implemented as `nn.Linear` layers, so the parameters are stored in the `.weight` attribute (`lora_A.weight`, `lora_B.weight`).
- By default, LoRA isn't applied to BERT's embedding layer, so there are _no entries_ for `lora_embedding_A` and `lora_embedding_B`.
- The keys of the `state_dict` always start with `"base_model.model."`. The reason is that, in PEFT, we wrap the base model inside a tuner-specific model (`LoraModel` in this case), which itself is wrapped in a general PEFT model (`PeftModel`). For this reason, these two prefixes are added to the keys. When converting to the PEFT format, it is required to add these prefixes.

> [!TIP]
> This last point is not true for prompt learning techniques like prompt tuning. There, the extra embeddings are directly stored in the `state_dict` without any prefixes added to the keys.

When inspecting the parameter names in the loaded model, you might be surprised to find that they look a bit different, e.g. `base_model.model.encoder.layer.0.attention.self.query.lora_A.default.weight`. The difference is the *`.default`* part in the second to last segment. This part exists because PEFT generally allows the addition of multiple adapters at once (using an `nn.ModuleDict` or `nn.ParameterDict` to store them). For example, if you add another adapter called "other", the key for that adapter would be `base_model.model.encoder.layer.0.attention.self.query.lora_A.other.weight`.
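
A quick way to see this for yourself is to list the LoRA parameter names on the loaded model (a sketch; `peft_model` is assumed to be the LoRA-wrapped BERT model from above):

```python
lora_param_names = [name for name, _ in peft_model.named_parameters() if "lora_" in name]
print(lora_param_names[0])
# base_model.model.encoder.layer.0.attention.self.query.lora_A.default.weight
```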

When you call [save_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.save_pretrained), the adapter name is stripped from the keys. The reason is that the adapter name is not an important part of the model architecture; it is just an arbitrary name. When loading the adapter, you could choose a totally different name, and the model would still work the same way. This is why the adapter name is not stored in the checkpoint file.

> [!TIP]
> If you call `save_pretrained("some/path")` and the adapter name is not `"default"`, the adapter is stored in a sub-directory with the same name as the adapter. So if the name is "other", it would be stored inside of `some/path/other`.

In some circumstances, deciding which values to add to the checkpoint file can become a bit more complicated. For example, in PEFT, DoRA is implemented as a special case of LoRA. If you want to convert a DoRA model to PEFT, you should create a LoRA checkpoint with extra entries for DoRA. You can see this in the `__init__` of the previous `LoraLayer` code:

```python
self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None  # for DoRA
```

This indicates that there is an optional extra parameter per layer for DoRA.

### adapter_config

All the other information needed to load a PEFT model is contained in the `adapter_config.json` file. Let's check this file for a LoRA model applied to BERT:

```json
{
  "alpha_pattern": {},
  "auto_mapping": {
    "base_model_class": "BertModel",
    "parent_library": "transformers.models.bert.modeling_bert"
  },
  "base_model_name_or_path": "bert-base-uncased",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 8,
  "lora_dropout": 0.0,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 8,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "query",
    "value"
  ],
  "task_type": null,
  "use_dora": false,
  "use_rslora": false
}
```

This contains a lot of entries, and at first glance, it could feel overwhelming to figure out all the right values to put in there. However, most of the entries are not necessary to load the model. This is either because they use the default values and don't need to be added or because they only affect the initialization of the LoRA weights, which is irrelevant when it comes to loading the model. If you find that you don't know what a specific parameter does, e.g. `"use_rslora"`, don't add it, and you should be fine. Also note that as more options are added, this file will get more entries in the future, but it should be backward compatible.

At the minimum, you should include the following entries:

```json
{
  "target_modules": ["query", "value"],
  "peft_type": "LORA"
}
```

However, adding as many entries as possible, like the rank `r` or the `base_model_name_or_path` (if it's a Transformers model) is recommended. This information can help others understand the model better and share it more easily. To check which keys and values are expected, check out the [config.py](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/config.py) file (as an example, this is the config file for LoRA) in the PEFT source code.
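
Instead of writing the JSON by hand, you can also let PEFT generate a valid `adapter_config.json` for you by instantiating the corresponding config class and saving it (a sketch; the directory name and the exact config values are only examples):

```python
from peft import LoraConfig

config = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["query", "value"],
    base_model_name_or_path="bert-base-uncased",
)
config.save_pretrained("converted_adapter")  # writes converted_adapter/adapter_config.json
```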

## Model storage

In some circumstances, you might want to store the whole PEFT model, including the base weights. This can be necessary if, for instance, the base model is not available to the users trying to load the PEFT model. You can either merge the weights first or convert the PEFT model into a Transformers model.

### Merge the weights

The most straightforward way to store the whole PEFT model is to merge the adapter weights into the base weights:

```python
merged_model = model.merge_and_unload()
merged_model.save_pretrained(...)
```

There are some disadvantages to this approach, though:

- Once [merge_and_unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.merge_and_unload) is called, you get a basic model without any PEFT-specific functionality. This means you can't use any of the PEFT-specific methods anymore.
- You cannot unmerge the weights, load multiple adapters at once, disable the adapter, etc.
- Not all PEFT methods support merging weights.
- Some PEFT methods may generally allow merging, but not with specific settings (e.g. when using certain quantization techniques).
- The whole model will be much larger than the PEFT model, as it will contain all the base weights as well.

But inference with a merged model should be a bit faster.

### Convert to a Transformers model

Another way to save the whole model, assuming the base model is a Transformers model, is this hacky approach that directly inserts the PEFT weights into the base model and saves it. It only works if you "trick" Transformers into believing the PEFT model is not a PEFT model, and it only works with LoRA because other adapters are not implemented in Transformers.

```python
model = ...  # the PEFT model
...
# after you finish training the model, save it in a temporary location
model.save_pretrained(<temp_location>)
# now load this model directly into a transformers model, without the PEFT wrapper
# the PEFT weights are directly injected into the base model
model_loaded = AutoModel.from_pretrained(<temp_location>)
# now make the loaded model believe that it is _not_ a PEFT model
model_loaded._hf_peft_config_loaded = False
# now when we save it, it will save the whole model
model_loaded.save_pretrained(<final_location>)
# or upload to Hugging Face Hub
model_loaded.push_to_hub(<final_location>)
```



<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/checkpoint.md" />

### Quantization
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/quantization.md

# Quantization

Quantization represents data with fewer bits, making it a useful technique for reducing memory usage and accelerating inference, especially when it comes to large language models (LLMs). There are several ways to quantize a model, including:

* optimizing which model weights are quantized with the [AWQ](https://hf.co/papers/2306.00978) algorithm
* independently quantizing each row of a weight matrix with the [GPTQ](https://hf.co/papers/2210.17323) algorithm
* quantizing to 8-bit and 4-bit precision with the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library
* quantizing to as low as 2-bit precision with the [AQLM](https://huggingface.co/papers/2401.06118) algorithm

However, after a model is quantized it isn't typically further trained for downstream tasks because training can be unstable due to the lower precision of the weights and activations. But since PEFT methods only add *extra* trainable parameters, this allows you to train a quantized model with a PEFT adapter on top! Combining quantization with PEFT can be a good strategy for training even the largest models on a single GPU. For example, [QLoRA](https://hf.co/papers/2305.14314) is a method that quantizes a model to 4-bits and then trains it with LoRA. This method allows you to finetune a 65B parameter model on a single 48GB GPU!

In this guide, you'll see how to quantize a model to 4-bits and train it with LoRA.

## Quantize a model

[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is a quantization library with a Transformers integration. With this integration, you can quantize a model to 8 or 4-bits and enable many other options by configuring the [BitsAndBytesConfig](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/quantization#transformers.BitsAndBytesConfig) class. For example, you can:

* set `load_in_4bit=True` to quantize the model to 4-bits when you load it
* set `bnb_4bit_quant_type="nf4"` to use a special 4-bit data type for weights initialized from a normal distribution
* set `bnb_4bit_use_double_quant=True` to use a nested quantization scheme to quantize the already quantized weights
* set `bnb_4bit_compute_dtype=torch.bfloat16` to use bfloat16 for faster computation

```py
import torch
from transformers import BitsAndBytesConfig

config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

Pass the `config` to the [from_pretrained](https://huggingface.co/docs/transformers/v4.57.1/en/model_doc/auto#transformers.AutoModelForCausalLM.from_pretrained) method.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
```

Next, you should call the [prepare_model_for_kbit_training()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.prepare_model_for_kbit_training) function to preprocess the quantized model for training.

```py
from peft import prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
```

Now that the quantized model is ready, let's set up a configuration.

## LoraConfig

Create a [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) with the following parameters (or choose your own):

```py
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
```

Then use the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) from the quantized model and configuration.

```py
from peft import get_peft_model

model = get_peft_model(model, config)
```

You're all set for training with whichever training method you prefer!

### LoftQ initialization

[LoftQ](https://hf.co/papers/2310.08659) initializes LoRA weights such that the quantization error is minimized, and it can improve performance when training quantized models. To get started, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).

In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as quant type in your quantization config when using 4bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
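
Putting both recommendations together, a quantization config and LoRA target selection that follow them might look like this (a sketch; adjust the remaining options to your model and task):

```py
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
lora_config = LoraConfig(target_modules="all-linear", task_type="CAUSAL_LM")
```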

### QLoRA-style training

QLoRA adds trainable weights to all the linear layers in the transformer architecture. Since the attribute names for these linear layers can vary across architectures, set `target_modules` to `"all-linear"` to add LoRA to all the linear layers:

```py
config = LoraConfig(target_modules="all-linear", ...)
```

## GPTQ quantization

You can learn more about GPTQ-based `[2, 3, 4, 8]`-bit quantization at [GPTQModel](https://github.com/ModelCloud/GPTQModel) and in the Transformers [GPTQ](https://huggingface.co/docs/transformers/quantization/gptq) doc. For post-quantization training, PEFT can use both the [GPTQModel](https://github.com/ModelCloud/GPTQModel) and [AutoGPTQ](https://github.com/autogptq/autogptq) libraries, but we recommend GPTQModel because AutoGPTQ will be deprecated in a future release.

```bash
# gptqmodel install
pip install gptqmodel --no-build-isolation
```

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(bits=4, group_size=128, dataset="wikitext2", tokenizer=tokenizer)

quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)

# save quantized model
quantized_model.save_pretrained("./opt-125m-gptq")
tokenizer.save_pretrained("./opt-125m-gptq")
```

Once quantized, you can post-train GPTQ models with PEFT APIs.
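
For example, you could load the quantized checkpoint saved above and add a LoRA adapter on top of it (a sketch; the LoRA hyperparameters and target modules are only examples):

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("./opt-125m-gptq", device_map="auto")
peft_config = LoraConfig(r=16, lora_alpha=8, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```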

## AQLM quantization

Additive Quantization of Language Models ([AQLM](https://huggingface.co/papers/2401.06118)) is a compression method for large language models. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes. This allows it to compress models down to as low as 2-bit with considerably low accuracy losses.

Since the AQLM quantization process is computationally expensive, the use of prequantized models is recommended. A partial list of available models can be found in the official aqlm [repository](https://github.com/Vahe1994/AQLM).

The models support LoRA adapter tuning. To tune the quantized model you'll need to install the `aqlm` inference library: `pip install aqlm>=1.0.2`. Finetuned LoRA adapters must be saved separately, as merging them with AQLM quantized weights is not possible.

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

quantized_model = AutoModelForCausalLM.from_pretrained(
    "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf-test-dispatch",
    dtype="auto", device_map="auto", low_cpu_mem_usage=True,
)

peft_config = LoraConfig(...)

quantized_model = get_peft_model(quantized_model, peft_config)
```

You can refer to the [Google Colab](https://colab.research.google.com/drive/12GTp1FCj5_0SnnNQH18h_2XFh9vS_guX?usp=sharing) example for an overview of AQLM+LoRA finetuning.

## EETQ quantization

You can also perform LoRA fine-tuning on EETQ quantized models. The [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a Transformers version that is compatible with EETQ (e.g. by installing it from the latest PyPI release or from source).

```py
import torch
from transformers import EetqConfig

config = EetqConfig("int8")
```

Pass the `config` to the [from_pretrained](https://huggingface.co/docs/transformers/v4.57.1/en/model_doc/auto#transformers.AutoModelForCausalLM.from_pretrained) method.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
```

and create a `LoraConfig` and pass it to `get_peft_model`:

```py
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)
```

## HQQ quantization

The models that are quantized using Half-Quadratic Quantization of Large Machine Learning Models ([HQQ](https://mobiusml.github.io/hqq_blog/)) support LoRA adapter tuning. To tune the quantized model, you'll need to install the `hqq` library with: `pip install hqq`.

```python
import torch
from hqq.engine.hf import HQQModelForCausalLM
from peft import LoraConfig, get_peft_model

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"

quantized_model = HQQModelForCausalLM.from_quantized(save_dir_or_hfhub, device=device)
peft_config = LoraConfig(...)
quantized_model = get_peft_model(quantized_model, peft_config)
```

Alternatively, use a Transformers version that is compatible with HQQ (e.g. by installing it from the latest PyPI release or from source):

```python
from transformers import HqqConfig, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

quant_config = HqqConfig(nbits=4, group_size=64)
quantized_model = AutoModelForCausalLM.from_pretrained(save_dir_or_hfhub, device_map=device_map, quantization_config=quant_config)
peft_config = LoraConfig(...)
quantized_model = get_peft_model(quantized_model, peft_config)
```

## torchao (PyTorch Architecture Optimization)

PEFT supports models quantized with [torchao](https://github.com/pytorch/ao) ("ao") for int8 quantization.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TorchAoConfig

model_id = ...
quantization_config = TorchAoConfig(quant_type="int8_weight_only")
base_model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
peft_config = LoraConfig(...)
model = get_peft_model(base_model, peft_config)
```

### Caveats:

- Use the most recent versions of torchao (>= v0.4.0) and transformers (> 4.42).
- Only linear layers are currently supported.
- `quant_type = "int4_weight_only"` is currently not supported.
- `NF4` is not implemented in transformers as of yet and is thus also not supported.
- DoRA only works with `quant_type = "int8_weight_only"` at the moment.
- There is explicit support for torchao when used with LoRA. However, when torchao quantizes a layer, its class does not change, only the type of the underlying tensor. For this reason, PEFT methods other than LoRA will generally also work with torchao, even if not explicitly supported. Be aware, however, that **merging only works correctly with LoRA and with `quant_type = "int8_weight_only"`**. If you use a different PEFT method or dtype, merging will likely result in an error, and even if it doesn't, the results will still be incorrect.

## INC quantization

Intel Neural Compressor ([INC](https://github.com/intel/neural-compressor)) enables model quantization for various devices,
including Intel Gaudi accelerators (also known as HPU devices). You can perform LoRA fine-tuning on models that have been
quantized using INC. To use INC with PyTorch models, install the library with: `pip install neural-compressor[pt]`.
Quantizing a model to FP8 precision for HPU devices can be done with the following single-step quantization workflow:

```python
import torch
from neural_compressor.torch.quantization import FP8Config, convert, finalize_calibration, prepare
quant_configs = {
    ...
}
config = FP8Config(**quant_configs)
```

Pass the config to the `prepare` method, run inference to gather calibration stats, and call `finalize_calibration`
and `convert` methods to quantize model to FP8 precision:

```python
model = prepare(model, config)
# Run inference to collect calibration statistics
...
# Finalize calibration and convert the model to FP8 precision
finalize_calibration(model)
model = convert(model)
# Load PEFT LoRA adapter as usual
...
```

An example demonstrating how to load a PEFT LoRA adapter into an INC-quantized FLUX text-to-image model for HPU
devices is provided [here](https://github.com/huggingface/peft/blob/main/examples/stable_diffusion/inc_flux_lora_hpu.py).


### Caveats:

- `merge()` and `unmerge()` methods are currently not supported for INC-quantized models.
- Currently, only **Linear** INC-quantized layers are supported when loading PEFT adapters.

## Other Supported PEFT Methods

Besides LoRA, the following PEFT methods also support quantization:

- **VeRA** (supports bitsandbytes quantization)
- **AdaLoRA** (supports both bitsandbytes and GPTQ quantization)
- **(IA)³** (supports bitsandbytes quantization)
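
For example, swapping the `LoraConfig` for an `IA3Config` on top of a bitsandbytes-quantized model follows the same pattern (a sketch; the target and feedforward modules shown here are assumptions for a Mistral-style architecture):

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import IA3Config, get_peft_model

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=bnb_config)
ia3_config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"],
)
model = get_peft_model(model, ia3_config)
```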

## Next steps

If you're interested in learning more about quantization, the following may be helpful:

* Learn more details about QLoRA and check out some benchmarks on its impact in the [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes) blog post.
* Read more about different quantization schemes in the Transformers [Quantization](https://hf.co/docs/transformers/main/quantization) guide.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/quantization.md" />

### Model merging
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/model_merging.md

# Model merging

Training a model for each task can be costly, take up storage space, and the models aren't able to learn new information to improve their performance. Multitask learning can overcome some of these limitations by training a model to learn several tasks, but it is expensive to train and designing a dataset for it is challenging. *Model merging* offers a solution to these challenges by combining multiple pretrained models into one model, giving it the combined abilities of each individual model without any additional training.

PEFT provides several methods for merging models like a linear or SVD combination. This guide focuses on two methods that are more efficient for merging LoRA adapters by eliminating redundant parameters:

* [TIES](https://hf.co/papers/2306.01708) - TrIm, Elect, and Merge (TIES) is a three-step method for merging models. First, redundant parameters are trimmed, then conflicting signs are resolved into an aggregated vector, and finally the parameters whose signs are the same as the aggregate sign are averaged. This method takes into account that some values (redundant and sign disagreement) can degrade performance in the merged model.
* [DARE](https://hf.co/papers/2311.03099) - Drop And REscale is a method that can be used to prepare for other model merging methods like TIES. It works by randomly dropping parameters according to a drop rate and rescaling the remaining parameters. This helps to reduce the number of redundant and potentially interfering parameters among multiple models.

Models are merged with the [add_weighted_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraModel.add_weighted_adapter) method, and the specific model merging method is specified in the `combination_type` parameter.

## Merge method

With TIES and DARE, merging is enabled by setting `combination_type` and by setting `density` to the fraction of weights to keep from the individual models. For example, let's merge three finetuned [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) models: [tinyllama_lora_nobots](https://huggingface.co/smangrul/tinyllama_lora_norobots), [tinyllama_lora_sql](https://huggingface.co/smangrul/tinyllama_lora_sql), and [tinyllama_lora_adcopy](https://huggingface.co/smangrul/tinyllama_lora_adcopy).

<Tip warning={true}>

When you're attempting to merge fully trained models with TIES, you should be aware of any special tokens each model may have added to the embedding layer which are not a part of the original checkpoint's vocabulary. This may cause an issue because each model may have added a special token to the same embedding position. If this is the case, you should use the [resize_token_embeddings](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings) method to avoid merging the special tokens at the same embedding index.

<br>

This shouldn't be an issue if you're only merging LoRA adapters trained from the same base model.

</Tip>

Load a base model and use the [load_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.load_adapter) method to load and assign each adapter a name:

```py
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

config = PeftConfig.from_pretrained("smangrul/tinyllama_lora_norobots")
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit=True, device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained("smangrul/tinyllama_lora_norobots")

model.config.vocab_size = 32005
model.resize_token_embeddings(32005)

model = PeftModel.from_pretrained(model, "smangrul/tinyllama_lora_norobots", adapter_name="norobots")
_ = model.load_adapter("smangrul/tinyllama_lora_sql", adapter_name="sql")
_ = model.load_adapter("smangrul/tinyllama_lora_adcopy", adapter_name="adcopy")
```

Set the adapters, weights, `adapter_name`, `combination_type`, and `density` with the [add_weighted_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraModel.add_weighted_adapter) method.

<hfoptions id="merge-method">
<hfoption id="TIES">

Weight values greater than `1.0` typically produce better results because they preserve the correct scale. A good default starting value for the weights is to set all values to `1.0`.

```py
adapters = ["norobots", "adcopy", "sql"]
weights = [2.0, 1.0, 1.0]
adapter_name = "merge"
density = 0.2
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="ties", density=density)
```

</hfoption>
<hfoption id="DARE">

```py
adapters = ["norobots", "adcopy", "sql"]
weights = [2.0, 0.3, 0.7]
adapter_name = "merge"
density = 0.2
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="dare_ties", density=density)
```

</hfoption>
</hfoptions>

Set the newly merged model as the active model with the [set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.set_adapter) method.

```py
model.set_adapter("merge")
```

Now you can use the merged model as an instruction-tuned model to write ad copy or SQL queries!

<hfoptions id="ties">
<hfoption id="instruct">

```py
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
messages = [
    {"role": "user", "content": "Write an essay about Generative AI."},
]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```

</hfoption>
<hfoption id="ad copy">

```py
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
messages = [
    {"role": "system", "content": "Create a text ad given the following product and description."},
    {"role": "user", "content": "Product: Sony PS5 PlayStation Console\nDescription: The PS5 console unleashes new gaming possibilities that you never anticipated."},
]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```

</hfoption>
<hfoption id="SQL">

```py
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"

text = """Table: 2-11365528-2
Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location']
Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic?
SQL Query:"""

inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1, eos_token_id=tokenizer("</s>").input_ids[-1])
print(tokenizer.decode(outputs[0]))
```

</hfoption>
</hfoptions>


## Merging (IA)³ Models

The (IA)³ models facilitate linear merging of adapters. To merge adapters in an (IA)³ model, utilize the `add_weighted_adapter` method from the `IA3Model` class. This method is analogous to the `add_weighted_adapter` method used in `LoraModel`, with the key difference being the absence of the `combination_type` parameter. For example, to merge three (IA)³ adapters into a PEFT model, you would proceed as follows:

```py
adapters = ["adapter1", "adapter2", "adapter3"]
weights = [0.4, 0.3, 0.3]
adapter_name = "merge"
model.add_weighted_adapter(adapters, weights, adapter_name)
```

It is recommended that the weights sum to 1.0 to preserve the scale of the model. The merged model can then be set as the active model using the `set_adapter` method:

```py
model.set_adapter("merge")
```


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md" />

### LoRA
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/lora.md

# LoRA

LoRA is a low-rank decomposition method that reduces the number of trainable parameters, which speeds up finetuning large models and uses less memory. In PEFT, using LoRA is as easy as setting up a [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) and wrapping it with [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) to create a trainable [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel).

This guide explores in more detail other options and features for using LoRA.

## Initialization

The initialization of LoRA weights is controlled by the parameter `init_lora_weights` in [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig). By default, PEFT initializes LoRA weights with Kaiming-uniform for weight A and zeros for weight B, resulting in an identity transform (same as the reference [implementation](https://github.com/microsoft/LoRA)).

It is also possible to pass `init_lora_weights="gaussian"`. As the name suggests, this initializes weight A with a Gaussian distribution and zeros for weight B (this is how [Diffusers](https://huggingface.co/docs/diffusers/index) initializes LoRA weights).

```py
from peft import LoraConfig

config = LoraConfig(init_lora_weights="gaussian", ...)
```

There is also an option to set `init_lora_weights=False` which is useful for debugging and testing. This should be the only time you use this option. When choosing this option, the LoRA weights are initialized such that they do *not* result in an identity transform.

```py
from peft import LoraConfig

config = LoraConfig(init_lora_weights=False, ...)
```

### PiSSA
[PiSSA](https://huggingface.co/papers/2404.02948) initializes the LoRA adapter using the principal singular values and singular vectors. This straightforward modification allows PiSSA to converge more rapidly than LoRA and ultimately attain superior performance. Moreover, PiSSA reduces the quantization error compared to QLoRA, leading to further enhancements.

Configure the initialization method to "pissa", which may take several minutes to execute SVD on the pre-trained model:
```python
from peft import LoraConfig
config = LoraConfig(init_lora_weights="pissa", ...)
```
Alternatively, execute fast SVD, which takes only a few seconds. The number of iterations determines the trade-off between the error and computation time:
```python
lora_config = LoraConfig(init_lora_weights="pissa_niter_[number of iters]", ...)
```
For detailed instruction on using PiSSA, please follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/pissa_finetuning).

### CorDA

[CorDA](https://huggingface.co/papers/2406.05223) builds task-aware LoRA adapters from weight decomposition oriented by the context of downstream task to learn (instruction-previewed mode, IPM) or world knowledge to maintain (knowledge-preserved mode, KPM).
The KPM not only achieves better performance than LoRA on fine-tuning tasks, but also mitigates the catastrophic forgetting of pre-trained world knowledge.
When preserving pre-trained knowledge is not a concern,
the IPM is favored because it can further accelerate convergence and enhance the fine-tuning performance.

You need to configure the initialization method to "corda", and specify the mode of IPM or KPM and the dataset to collect covariance matrices.

```py
@torch.no_grad()
def run_model():
    # Assume `model` and `dataset` are in context...
    model.eval()
    for batch in dataset:
        model(**batch)


corda_config = CordaConfig(
    corda_method="kpm",
)
lora_config = LoraConfig(
    init_lora_weights="corda",
    corda_config=corda_config,
)
preprocess_corda(model, lora_config, run_model=run_model)
peft_model = get_peft_model(model, lora_config)
```

For detailed instruction on using CorDA, please follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/corda_finetuning).

### OLoRA
[OLoRA](https://huggingface.co/papers/2406.01775) utilizes QR decomposition to initialize the LoRA adapters. OLoRA translates the base weights of the model by a factor of their QR decompositions, i.e., it mutates the weights before performing any training on them. This approach significantly improves stability, accelerates convergence speed, and ultimately achieves superior performance.

You just need to pass a single additional option to use OLoRA:
```python
from peft import LoraConfig
config = LoraConfig(init_lora_weights="olora", ...)
```
For more advanced usage, please refer to our [documentation](https://github.com/huggingface/peft/tree/main/examples/olora_finetuning).

### EVA
[EVA](https://huggingface.co/papers/2410.07170) performs SVD on the input activations of each layer and uses the right-singular vectors to initialize LoRA weights. It is therefore a data-driven initialization scheme. Furthermore EVA adaptively allocates ranks across layers based on their "explained variance ratio" - a metric derived from the SVD analysis.

You can use EVA by setting `init_lora_weights="eva"` and defining [EvaConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.EvaConfig) in [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig):
```python
from peft import LoraConfig, EvaConfig
peft_config = LoraConfig(
    init_lora_weights = "eva",
    eva_config = EvaConfig(rho = 2.0),
    ...
)
```
The parameter `rho` (≥ 1.0) determines how much redistribution is allowed. When `rho=1.0` and `r=16`, LoRA adapters are limited to exactly 16 ranks, preventing any redistribution from occurring. A recommended value for EVA with redistribution is 2.0, meaning the maximum rank allowed for a layer is 2r.

It is recommended to perform EVA initialization on an accelerator (e.g. CUDA GPU, Intel XPU) as it is much faster. To optimize the amount of available memory for EVA, you can use the `low_cpu_mem_usage` flag in [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model):
```python
peft_model = get_peft_model(model, peft_config, low_cpu_mem_usage=True)
```
Then, call [initialize_lora_eva_weights()](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.initialize_lora_eva_weights) to initialize the EVA weights (in most cases the dataloader used for eva initialization can be the same as the one used for finetuning):
```python
initialize_lora_eva_weights(peft_model, dataloader)
```
EVA works out of the box with bitsandbytes. Simply initialize the model with `quantization_config` and call [initialize_lora_eva_weights()](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.initialize_lora_eva_weights) as usual.
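
A minimal sketch of that workflow, assuming a generic causal LM checkpoint (`model_id`) and an existing `dataloader`, might look like this:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import EvaConfig, LoraConfig, get_peft_model, initialize_lora_eva_weights

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

peft_config = LoraConfig(init_lora_weights="eva", eva_config=EvaConfig(rho=2.0), target_modules=["q_proj", "v_proj"])
peft_model = get_peft_model(model, peft_config, low_cpu_mem_usage=True)
initialize_lora_eva_weights(peft_model, dataloader)
```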

> [!TIP]
> For further instructions on using EVA, please refer to our [documentation](https://github.com/huggingface/peft/tree/main/examples/eva_finetuning).

### LoftQ

#### Standard approach

When quantizing the base model for QLoRA training, consider using the [LoftQ initialization](https://huggingface.co/papers/2310.08659), which has been shown to improve performance when training quantized models. The idea is that the LoRA weights are initialized such that the quantization error is minimized. To use LoftQ, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).

In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as quant type in your quantization config when using 4bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.

#### A more convenient way

An easier but more limited way to apply LoftQ initialization is to use the convenience function `replace_lora_weights_loftq`. This takes the quantized PEFT model as input and replaces the LoRA weights in-place with their LoftQ-initialized counterparts.

```python
from peft import LoraConfig, get_peft_model, replace_lora_weights_loftq
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, ...)
base_model = AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
# note: don't pass init_lora_weights="loftq" or loftq_config!
lora_config = LoraConfig(task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
replace_lora_weights_loftq(peft_model)
```

`replace_lora_weights_loftq` also allows you to pass a `callback` argument to give you more control over which layers should be modified or not, which empirically can improve the results quite a lot. To see a more elaborate example of this, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/loftq_finetuning/LoftQ_weight_replacement.ipynb).

`replace_lora_weights_loftq` implements only one iteration step of LoftQ. This means that only the LoRA weights are updated, instead of iteratively updating LoRA weights and quantized base model weights. This may lead to lower performance but has the advantage that we can use the original quantized weights derived from the base model, instead of having to keep an extra copy of modified quantized weights. Whether this tradeoff is worthwhile depends on the use case.

At the moment, `replace_lora_weights_loftq` has these additional limitations:

- Model files must be stored as a `safetensors` file.
- Only bitsandbytes 4bit quantization is supported.

> [!TIP]
> Learn more about how PEFT works with quantization in the [Quantization](quantization) guide.

### Rank-stabilized LoRA

Another way to initialize [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) is with the [rank-stabilized LoRA (rsLoRA)](https://huggingface.co/papers/2312.03732) method. The LoRA architecture scales each adapter during every forward pass by a fixed scalar which is set at initialization and depends on the rank `r`. The scalar is given by `lora_alpha/r` in the original implementation, but rsLoRA uses `lora_alpha/math.sqrt(r)` which stabilizes the adapters and increases the performance potential from using a higher `r`.

```py
from peft import LoraConfig

config = LoraConfig(use_rslora=True, ...)
```
### Activated LoRA (aLoRA)

Activated LoRA (aLoRA) is a low rank adapter architecture for Causal LMs that allows for reusing existing base model KV cache for more efficient inference. This approach is best suited for inference pipelines which rely on the base model for most tasks/generations, but use aLoRA adapter(s) to perform specialized task(s) within the chain. For example, checking or correcting generated outputs of the base model. In these settings, inference times can be sped up by an order of magnitude or more. For more information on aLoRA and many example use cases, see https://huggingface.co/papers/2504.12397.

This technique scans for the last occurrence of an invocation sequence (`alora_invocation_tokens`) in each input (this can be as short as 1 token), and activates the adapter weights on tokens starting from the beginning of the invocation sequence (any inputs after the invocation sequence are also adapted, and all generated tokens will use the adapted weights). Weights on prior tokens are left un-adapted -- making the cache for those tokens interchangeable with base model cache due to the causal attention mask in Causal LMs. Usage is very similar to standard LoRA, with the key difference that this invocation sequence must be specified when the adapter is created:

```py
from peft import LoraConfig

config = LoraConfig(alora_invocation_tokens=alora_invocation_tokens, task_type="CAUSAL_LM", ...)
```

where `alora_invocation_tokens` is a list of integer token ids. Given a desired invocation string, this can be obtained as
```py
invocation_string = "placeholder"
alora_invocation_tokens = tokenizer.encode(invocation_string, add_special_tokens=False)
```
where the tokenizer is the tokenizer of the base model. Note that we pass `add_special_tokens=False` to avoid adding SOS/EOS tokens to the search string (which would most likely cause the invocation sequence not to be found).

**Notes**
* aLoRA is only supported for `task_type=CAUSAL_LM` tasks due to its focus on cache reuse.
* Since the weights are adapted on fewer tokens, often (not always) aLoRA requires higher rank (`r`) than LoRA. `r=32` can be a good starting point.
* aLoRA weights cannot be merged into the base model by definition, since the adapter weights are selectively applied to a subset of tokens. Attempts to merge will throw errors.
* Beam search is not yet supported.
* It is generally not recommended to add new tokens to the tokenizer that are not present in the base model, as this can complicate the target use case of both the base model and adapter model operating on overlapping context. That said, there is a possible workaround by first efficiently adding [trainable tokens](https://huggingface.co/docs/peft/en/package_reference/trainable_tokens) to the base model prior to training the adapter.

#### Choice of invocation sequence and SFT design 

Each input must contain the `alora_invocation_tokens` sequence; it is not added automatically. To maximize model performance without compromising cache reuse, it is recommended to activate the adapter weights early, i.e. at the start of any adapter-specific prompting, but after any long inputs such as prior generations or documents. As with any model, formatting should be consistent between train and test.

Consider the following example, where the base model has a chat template,
and the goal is to train the adapter to generate a desired output. 

* Option 1: If there is no task-specific prompt, i.e. the input is a chat history ending with the `assistant` prompt, then the chat template's `assistant` prompt (e.g. `<|start_of_role|>assistant<|end_of_role|>`) is a natural choice for the invocation string. See the model's chat template to find the prompt for the model.
* Option 2: If there is a task-specific prompt for the adapter that describes the task the adapter is learning, and that prompt is put as a `user` turn immediately prior to the generation, then the chat template's `user` prompt (e.g. `<|start_of_role|>user<|end_of_role|>`) is a natural choice for the invocation string.

Once you have decided on an invocation string, get the tokenizer of the base model and obtain `alora_invocation_tokens` as
```py
alora_invocation_tokens = tokenizer.encode(invocation_string, add_special_tokens=False)
```

An example inference setup is at [alora finetuning](https://github.com/huggingface/peft/blob/main/examples/alora_finetuning/alora_finetuning.py).

**Note:** If using a custom invocation string, make sure that it starts and ends with special tokens to avoid issues with tokenization at the boundaries.

To see why, imagine that 'a', 'b', 'c', and 'ab' are tokens in your tokenizer (with ids 1, 2, 3, and 4, respectively). Suppose that `alora_invocation_tokens = [2, 3]`. Now imagine your input string is "abc". Because "ab" is a token, this gets tokenized as `[4, 3]`, so the `alora_invocation_tokens` will not be found, even though the string "bc" is present in the input. If the start and end of the invocation string are special tokens, however, this failure case can never happen, since special tokens are never merged into the same token as other characters.
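If you want to guard against this, a simple sanity check (a sketch, not part of PEFT; `tokenizer`, `full_prompt`, and `alora_invocation_tokens` are assumed from the surrounding context) is to verify that the invocation sequence survives tokenization of the full prompt:

```py
def contains_subsequence(ids, sub):
    # return True if `sub` appears as a contiguous subsequence of `ids`
    return any(ids[i:i + len(sub)] == sub for i in range(len(ids) - len(sub) + 1))

prompt_ids = tokenizer.encode(full_prompt, add_special_tokens=False)
assert contains_subsequence(prompt_ids, alora_invocation_tokens), "invocation sequence not found"
```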

#### Using (and reusing) cache for generation
The main purpose of Activated LoRA is to make KV cache interchangeable between the base model and aLoRA adapter models **prior to the invocation sequence**, since base and adapted KV values are not compatible from that point on. Specifically, keys and values stored during one model generation can be reused in subsequent generations to avoid expensive prefill operations for context tokens. When sharing cache between the base model and aLoRA adapters, there are 2 main patterns:
1. The base model has generated something, and an aLoRA adapter is then called to do a followup generation. Example: the base model answers a question, and an aLoRA trained to detect hallucinations checks the base model response.
2. An aLoRA adapter has generated something, and the base model or a different aLoRA adapter is called to do a followup generation where there is partial context overlap with the original aLoRA. Example: The user provides a query, and an aLoRA rewrites the query to be more self-contained and improve retrieval in a RAG system. Then, documents are retrieved and loaded into context, an aLoRA checks if these documents are indeed relevant to the question, and then the base model generates an answer.


To demonstrate the above behaviors when using caching, we're using [DynamicCache](https://huggingface.co/docs/transformers/en/kv_cache) from `transformers`. Care must be taken to ensure that adapted cache values are not mixed with base cache values. In particular, an extra step is required for sharing the cache when there is partial context overlap (pattern 2).

**Pattern 1: Base model followed by aLoRA** Here, the entire input and generation from the base model is input into the aLoRA adapter, along with the invocation sequence:
```py
from transformers import DynamicCache
...
cache = DynamicCache()
inputs_base = tokenizer(prompt_base, return_tensors="pt").to(device)

# Generate from the base model and save the cache
with model_alora.disable_adapter():
    output = model_alora.generate(
        **inputs_base,
        past_key_values=cache,
        return_dict_in_generate=True,
    )
output_text_base = tokenizer.decode(output.sequences[0])
cache = output.past_key_values

# Generate with the aLoRA adapter from the cache
prompt_alora = output_text_base + INVOCATION_STRING
inputs_alora = tokenizer(prompt_alora, return_tensors="pt").to(device)
output = model_alora.generate(**inputs_alora, past_key_values=cache)
output_text_alora = tokenizer.decode(output[0])

# Note: the cache is now tainted with adapter KV values and cannot be used with the base model from here on!
```

**Pattern 2: aLoRA generation followed by base model (or another aLoRA) with partial context overlap** Here, we prefill the shared context using the base model, and then generate.

```py
from transformers import DynamicCache
import copy
import torch
...
cache = DynamicCache()
inputs_shared = tokenizer(prompt_shared, return_tensors="pt").to(device)

# Prefill from base model and save cache
with model_alora.disable_adapter():
    with torch.no_grad():
        model_alora(**inputs_shared, past_key_values=cache)
cache_copy = copy.deepcopy(cache)

# Generate from aLoRA using prefilled cache
prompt_alora = prompt_shared + INVOCATION_STRING
inputs_alora = tokenizer(prompt_alora, return_tensors="pt").to(device)
output = model_alora.generate(**inputs_alora, past_key_values=cache)
output_text_alora = tokenizer.decode(output[0])

# Generate from base model using saved cache not tainted by aLoRA KV values
prompt_base = prompt_shared
inputs_base = tokenizer(prompt_base, return_tensors="pt").to(device)
with model_alora.disable_adapter(): 
    output = model_alora.generate(**inputs_base, past_key_values=cache_copy)
output_text_base = tokenizer.decode(output[0])
```

### Weight-Decomposed Low-Rank Adaptation (DoRA)

This technique decomposes the weight updates into two parts, magnitude and direction. The direction is handled by a normal LoRA, whereas the magnitude is handled by a separate learnable parameter. This can improve the performance of LoRA, especially at low ranks. For more information on DoRA, see https://huggingface.co/papers/2402.09353.

```py
from peft import LoraConfig

config = LoraConfig(use_dora=True, ...)
```

If parts of the model or the DoRA adapter are offloaded to CPU you can get a significant speedup at the cost of some temporary (ephemeral) VRAM overhead by using `ephemeral_gpu_offload=True` in `config.runtime_config`.

```py
from peft import LoraConfig, LoraRuntimeConfig

config = LoraConfig(use_dora=True, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=True), ...)
```

A `PeftModel` with a DoRA adapter can also be loaded with the `ephemeral_gpu_offload=True` flag using the `from_pretrained` method as well as the `load_adapter` method.

```py
from peft import PeftModel

model = PeftModel.from_pretrained(base_model, peft_model_id, ephemeral_gpu_offload=True)
```

DoRA is optimized (it computes faster and takes less memory) for models in evaluation mode, or when dropout is set to 0, because the base result can be reused in those cases. Running [dora finetuning](https://github.com/huggingface/peft/blob/main/examples/dora_finetuning/dora_finetuning.py)
with `CUDA_VISIBLE_DEVICES=0 ZE_AFFINITY_MASK=0 time python examples/dora_finetuning/dora_finetuning.py --quantize --lora_dropout 0 --batch_size 16 --eval_step 2 --use_dora`
on a 4090 with gradient accumulation set to 2 and max steps set to 20 resulted in the following observations:

| | Without Optimization | With Optimization |
| :--: | :--: | :--: |
| train_runtime | 359.7298 | **279.2676** |
| train_samples_per_second | 1.779 | **2.292** |
| train_steps_per_second | 0.056 | **0.072** |

#### Caveats

- DoRA only supports embedding, linear, and Conv2d layers at the moment.
- DoRA introduces a bigger overhead than pure LoRA, so it is recommended to merge weights for inference, see [LoraModel.merge_and_unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.merge_and_unload).
- DoRA should work with weights quantized with bitsandbytes ("QDoRA"). However, issues have been reported when using QDoRA with DeepSpeed Zero2.

### QLoRA-style training

The default LoRA settings in PEFT add trainable weights to the query and value layers of each attention block. But [QLoRA](https://hf.co/papers/2305.14314), which adds trainable weights to all the linear layers of a transformer model, can provide performance equal to a fully finetuned model. To apply LoRA to all the linear layers, like in QLoRA, set `target_modules="all-linear"` (easier than specifying individual modules by name which can vary depending on the architecture).

```py
config = LoraConfig(target_modules="all-linear", ...)
```

### Memory efficient Layer Replication with LoRA

An approach used to improve the performance of models is to expand a model by duplicating layers to build a larger model from a pretrained model of a given size, for example increasing a 7B model to a 10B model as described in the [SOLAR](https://huggingface.co/papers/2312.15166) paper. PEFT LoRA supports this kind of expansion in a memory-efficient manner, allowing further fine-tuning with LoRA adapters attached to the layers after replication. The replicated layers do not take additional memory as they share the underlying weights, so the only additional memory required is the memory for the adapter weights. To use this feature, create a config with the `layer_replication` argument.

```py
config = LoraConfig(layer_replication=[[0,4], [2,5]], ...)
```

Assuming the original model had 5 layers `[0, 1, 2, 3, 4]`, this would create a model with 7 layers arranged as `[0, 1, 2, 3, 2, 3, 4]`. This follows the [mergekit](https://github.com/arcee-ai/mergekit) passthrough merge convention, where sequences of layers specified as start-inclusive, end-exclusive tuples are stacked to build the final model. Each layer in the final model gets its own distinct set of LoRA adapters.
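To see which layer stack a given `layer_replication` value produces, you can compute it directly; this small sketch reproduces the example above:

```py
layer_replication = [[0, 4], [2, 5]]
# each [start, end) tuple contributes the layers start..end-1, stacked in order
final_layers = [i for start, end in layer_replication for i in range(start, end)]
print(final_layers)  # [0, 1, 2, 3, 2, 3, 4]
```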

[Fewshot-Metamath-OrcaVicuna-Mistral-10B](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B) is an example of a model trained using this method on Mistral-7B expanded to 10B. The
[adapter_config.json](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B/blob/main/adapter_config.json) shows a sample LoRA adapter config applying this method for fine-tuning.

### Fine grained control over ranks and alpha (scaling)

By default, all layers targeted with LoRA will have the same rank `r` and the same `lora_alpha` (which determines the LoRA scaling), depending on what was specified in the [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig). In some cases, however, you may want to indicate different values for different layers. This is possible by passing the `rank_pattern` and `alpha_pattern` arguments to [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig). These arguments should be dictionaries with the key being the layer name and the value being the rank/alpha value. The keys can be [regular expressions](https://docs.python.org/3/library/re.html) (regex). All LoRA layers that are not explicitly mentioned in `rank_pattern` and `alpha_pattern` will take the default `r` and `lora_alpha` values.

To give an example, let's assume that we have a model with the following structure:

```python
>>> print(model)
Outer(
  (foo): Linear(...)
  (module): Middle(
    (foo): Linear(...)
    (foobar): Linear(...)
    (module): Inner(
      (foo): Linear(...)
      (barfoo): Linear(...)
    )
  )
)
```

- `rank_pattern={"foo": 42}` will match all 3 `foo` layers. Neither `foobar` nor `barfoo` are matched.
- `rank_pattern={"^foo": 42}` will only match the `foo` layer of the model, but neither `module.foo` nor `module.module.foo`. This is because the `^` means "start of string" when using regular expressions, and only `foo` starts with `"foo"`, the other layer names have prefixes.
- `rank_pattern={"^module.foo": 42}` matches only `module.foo`, but not `module.module.foo`, for the same reason.
- `rank_pattern={"module.foo": 42}` matches both `module.foo` and `module.module.foo`, but not `foo`.
- `rank_pattern={"^foo": 42, "^module.module.foo": 55}` matches `foo` and `module.module.foo`, respectively, but not `module.foo`.
- There is no need to indicate `$` to mark the end of the match, as this is added automatically by PEFT.

The same logic applies to `alpha_pattern`. If you're in doubt, don't try to get fancy with regular expressions -- just pass the full name for each module with a different rank/alpha, preceded by the `^` prefix, and you should be good.
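Putting this together for the example model above, a config might look like the following sketch (the default `r` and `lora_alpha` values are illustrative):

```py
from peft import LoraConfig

config = LoraConfig(
    r=8,            # default rank for all non-matching LoRA layers
    lora_alpha=16,  # default alpha for all non-matching LoRA layers
    target_modules=["foo", "foobar", "barfoo"],
    rank_pattern={"^module.module.foo": 42},  # only Inner's foo gets rank 42
    alpha_pattern={"^foo": 64},               # only the outermost foo gets alpha 64
)
```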

### Targeting `nn.Parameter` directly

> [!WARNING]
> This feature is experimental and subject to change.

Generally, you should use `target_modules` to target the module (e.g. `nn.Linear`). However, in some circumstances this is not possible. E.g., in many mixture-of-experts (MoE) layers in HF Transformers, an `nn.Parameter` is used instead of an `nn.Linear`. PEFT normally overwrites the `forward` method for LoRA, but for `nn.Parameter` there is none. Therefore, to apply LoRA to such a parameter, it needs to be targeted with `target_parameters`. As an example, for [Llama4](https://huggingface.co/collections/meta-llama/llama-4-67f0c30d9fe03840bc9d0164), you can pass: `target_parameters=['feed_forward.experts.gate_up_proj', 'feed_forward.experts.down_proj']`.
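For instance, a config for the Llama4 case mentioned above could look like this sketch:

```py
from peft import LoraConfig

config = LoraConfig(
    target_parameters=["feed_forward.experts.gate_up_proj", "feed_forward.experts.down_proj"],
    ...
)
```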

#### Caveats

- At the moment, this argument allows targeting 2-dim or 3-dim `nn.Parameter`s. For a 3-dim parameter, the 0th dimension is assumed to be the expert dimension.
- It is currently not possible to add multiple LoRA adapters (via `model.add_adapter` or `model.load_adapter`) that use `target_parameters` at the same time.

## Optimizers

LoRA training can optionally include special-purpose optimizers. Currently, PEFT supports LoRA-FA and LoRA+.

### LoRA-FA Optimizer

LoRA training can be made more effective and efficient using LoRA-FA, as described in [LoRA-FA](https://huggingface.co/papers/2308.03303). LoRA-FA reduces activation memory consumption by freezing the matrix A and only tuning the matrix B. During training, the gradient of B is optimized to approximate the full parameter fine-tuning gradient. Moreover, the memory consumption of LoRA-FA is not sensitive to the rank (since it discards the activations of $A$), so performance can be improved by enlarging the LoRA rank without increasing memory consumption.

```py
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_lorafa_optimizer
from transformers import AutoModelForCausalLM, Trainer, get_cosine_schedule_with_warmup

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

config = LoraConfig(...)
model = get_peft_model(base_model, config)

optimizer = create_lorafa_optimizer(
    model=model,
    r=128,
    lora_alpha=32,
    lr=7e-5,
)

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=1000,
)

trainer = Trainer(
    ...,
    optimizers=(optimizer, scheduler),
)
```

### LoRA+ optimized LoRA

LoRA training can be optimized using [LoRA+](https://huggingface.co/papers/2402.12354), which uses different learning rates for the adapter matrices A and B, shown to increase finetuning speed by up to 2x and performance by 1-2%.

```py
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_loraplus_optimizer
from transformers import Trainer
import bitsandbytes as bnb

base_model = ...
config = LoraConfig(...)
model = get_peft_model(base_model, config)

optimizer = create_loraplus_optimizer(
    model=model,
    optimizer_cls=bnb.optim.Adam8bit,
    lr=5e-5,
    loraplus_lr_ratio=16,
)
scheduler = None

...
trainer = Trainer(
    ...,
    optimizers=(optimizer, scheduler),
)
```

## Efficiently train tokens alongside LoRA

Sometimes it is necessary not only to change some layers' weights but also to add new tokens. With larger models this can be a memory-costly endeavour. PEFT LoRA adapters support the `trainable_token_indices` parameter, which allows tuning selected tokens alongside fine-tuning specific layers with LoRA. This method only trains the tokens you specify and leaves all other tokens untouched. This saves memory and, in contrast to training the whole embedding matrix, doesn't throw away the learned context of existing token embeddings. Under the hood this method uses the [TrainableTokensModel](/docs/peft/v0.18.0.rc0/en/package_reference/trainable_tokens#peft.TrainableTokensModel) layer.

```py
# for layer 'embed_tokens'
config = LoraConfig(trainable_token_indices=[idx_1, idx_2, ...], ...)

# specific embedding layer
config = LoraConfig(trainable_token_indices={'emb_tokens': [idx_1, idx_2, ...]}, ...)
```

In the snippet below we show how to add new tokens to the model and how to train it alongside the other layers in the model.

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import get_peft_model, LoraConfig

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# we define our new tokens and add them to the tokenizer as special tokens
special_tokens = ['<|start_think|>', '<|stop_think|>']
tokenizer.add_special_tokens({'additional_special_tokens': special_tokens})

# make room for new tokens in the embedding matrix if it isn't big enough already
base_model.resize_token_embeddings(max(len(tokenizer), base_model.model.embed_tokens.num_embeddings))

# typical LoRA config with `trainable_token_indices` targeting embedding layer `embed_tokens`
# and specifically our new tokens we just added
lora_config = LoraConfig(
    target_modules='all-linear',
    trainable_token_indices={'embed_tokens': tokenizer.convert_tokens_to_ids(special_tokens)},
)
peft_model = get_peft_model(base_model, lora_config)

# proceed to train the model like normal
[...]
```

The token weights are part of your adapter state dict and saved alongside the LoRA weights.
Had we used full fine-tuning with `modules_to_save=['embed_tokens']`, we would have stored the full embedding matrix in the checkpoint, leading to a much bigger file.
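Saving works the same as for any other PEFT adapter; a minimal sketch with a placeholder path:

```py
peft_model.save_pretrained("path/to/my-adapter")
# the saved adapter checkpoint contains the LoRA weights plus only the trained token embeddings
```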

To give an indication of how much VRAM can be saved, a rudimentary comparison of the above example was made between training the embedding matrix fully (`modules_to_save=["embed_tokens"]`), using a LoRA for the embedding matrix (`target_modules=[..., "embed_tokens"]`, rank 32) and trainable tokens (`trainable_token_indices=[...]`, 6 tokens). Trainable tokens used about as much VRAM (15,562MB vs. 15,581MB) as LoRA while being specific to the tokens and saved ~1GB of VRAM over fully training the embedding matrix.


## Merge LoRA weights into the base model

While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA adapter. To eliminate latency, use the [merge_and_unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.merge_and_unload) function to merge the adapter weights with the base model. This allows you to use the newly merged model as a standalone model. The [merge_and_unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.merge_and_unload) function doesn't keep the adapter weights in memory.

Below is a diagram that explains the intuition of LoRA adapter merging:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"/>
</div>

We show in the snippets below how to run that using PEFT.

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)
model = model.merge_and_unload()
```

It is important to assign the returned model to a variable and use it, because [merge_and_unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.merge_and_unload) is not an in-place operation. If you need to keep a copy of the weights so you can unmerge the adapter later, or delete and load different ones, you should use the [merge_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.merge_adapter) function instead. You then have the option to use [unmerge_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.unmerge_adapter) to return to the base model.

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.merge_adapter()

# unmerge the LoRA layers from the base model
model.unmerge_adapter()
```

The [add_weighted_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraModel.add_weighted_adapter) function is useful for merging multiple LoRAs into a new adapter based on a user provided weighting scheme in the `weights` parameter. Below is an end-to-end example.

First load the base model:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", dtype=torch.float16, device_map="auto"
)
```

Then we load the first adapter:

```python
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id, adapter_name="sft")
```

Then load a different adapter and merge it with the first one:

```python
weighted_adapter_name = "sft-dpo"
model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
model.add_weighted_adapter(
    adapters=["sft", "dpo"],
    weights=[0.7, 0.3],
    adapter_name=weighted_adapter_name,
    combination_type="linear"
)
model.set_adapter(weighted_adapter_name)
```

> [!TIP]
> There are several supported methods for `combination_type`. Refer to the [documentation](../package_reference/lora#peft.LoraModel.add_weighted_adapter) for more details. Note that "svd" as the `combination_type` is not supported when using `torch.float16` or `torch.bfloat16` as the datatype.

Now, perform inference:

```python
from transformers import AutoTokenizer

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

with torch.no_grad():
    generate_ids = model.generate(**inputs, max_length=30)
outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(outputs)
```

## Load adapters

Adapters can be loaded onto a pretrained model with [load_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.load_adapter), which is useful for trying out different adapters whose weights aren't merged. Set the active adapter weights with the [set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.set_adapter) function.

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)

# load different adapter
model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")

# set adapter as active
model.set_adapter("dpo")
```

To return the base model, you could use [unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.unload) to unload all of the LoRA modules, or [delete_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.delete_adapter) to delete the adapter entirely. [unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.unload) is not an in-place operation; remember to assign the returned model to a variable and use it.

```py
# unload adapter
model = model.unload()

# delete adapter
model.delete_adapter("dpo")
```

## Inference with different LoRA adapters in the same batch

Normally, each inference batch has to use the same adapter(s) in PEFT. This can sometimes be annoying, because we may have batches that contain samples intended to be used with different LoRA adapters. For example, we could have a base model that works well in English and two more LoRA adapters, one for French and one for German. Usually, we would have to split our batches such that each batch only contains samples of one of the languages; we could not combine different languages in the same batch.

Thankfully, it is possible to mix different LoRA adapters in the same batch using the `adapter_name` argument. Below, we show an example of how this works in practice. First, let's load the base model, English, and the two adapters, French and German, like this:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

model_id = ...
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)
# load the LoRA adapter for French
peft_model = PeftModel.from_pretrained(model, <path>, adapter_name="adapter_fr")
# next, load the LoRA adapter for German
peft_model.load_adapter(<path>, adapter_name="adapter_de")
```

Now, we want to generate text on a batch that contains all three languages: the first three samples are in English, the next three are in French, and the last three are in German. We can use the `adapter_names` argument to specify which adapter to use for each sample. Since our base model is used for English, we use the special string `"__base__"` for these samples. For the next three samples, we indicate the adapter name of the French LoRA fine-tune, in this case `"adapter_fr"`. For the last three samples, we indicate the adapter name of the German LoRA fine-tune, in this case `"adapter_de"`. This way, we can use the base model and the two adapters in a single batch.

```python
inputs = tokenizer(
    [
        "Hello, my dog is cute",
        "Hello, my cat is awesome",
        "Hello, my fish is great",
        "Salut, mon chien est mignon",
        "Salut, mon chat est génial",
        "Salut, mon poisson est super",
        "Hallo, mein Hund ist süß",
        "Hallo, meine Katze ist toll",
        "Hallo, mein Fisch ist großartig",
    ],
    return_tensors="pt",
    padding=True,
)

adapter_names = [
    "__base__", "__base__", "__base__",
    "adapter_fr", "adapter_fr", "adapter_fr",
    "adapter_de", "adapter_de", "adapter_de",
]
output = peft_model.generate(**inputs, adapter_names=adapter_names, max_new_tokens=20)
```

Note that the order does not matter here, i.e. the samples in the batch don't need to be grouped by adapter as in the example above. We just need to ensure that the `adapter_names` argument is aligned correctly with the samples.

Additionally, the same approach also works with the `modules_to_save` feature, which allows for saving and reusing specific neural network layers, such as custom heads for classification tasks, across different LoRA adapters.

### Caveats

Using this feature has some drawbacks, namely:

- It only works for inference, not for training.
- Disabling adapters using the `with model.disable_adapter()` context takes precedence over `adapter_names`.
- You cannot pass `adapter_names` when some adapter weights have been merged with the base weights using the `merge_adapter` method. Please unmerge all adapters first by calling `model.unmerge_adapter()`.
- For obvious reasons, this cannot be used after calling `merge_and_unload()`, since all the LoRA adapters will be merged into the base weights in this case.
- This feature does not currently work with DoRA, so set `use_dora=False` in your `LoraConfig` if you want to use it.
- The `modules_to_save` feature is currently only supported for the layers of types `Linear`, `Embedding`, `Conv2d` and `Conv1d`.
- There is an expected overhead for inference with `adapter_names`, especially if the number of different adapters in the batch is high. This is because the batch size is effectively reduced to the number of samples per adapter. If runtime performance is your top priority, try the following:
  - Increase the batch size.
  - Try to avoid having a large number of different adapters in the same batch and prefer homogeneous batches. This can be achieved by buffering samples with the same adapter and only performing inference with a small handful of different adapters (see the sketch after this list).
  - Take a look at alternative implementations such as [LoRAX](https://github.com/predibase/lorax), [punica](https://github.com/punica-ai/punica), or [S-LoRA](https://github.com/S-LoRA/S-LoRA), which are specialized to work with a large number of different adapters.
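As an illustration of the buffering idea mentioned above, one could bucket samples by adapter before calling `generate`. This is a hypothetical helper, not part of PEFT; `samples`, `tokenizer`, and `peft_model` are assumed from the surrounding context:

```python
from collections import defaultdict

def bucket_by_adapter(samples, adapter_names):
    # group samples so that each generate call only uses a single adapter
    buckets = defaultdict(list)
    for sample, name in zip(samples, adapter_names):
        buckets[name].append(sample)
    return buckets

for name, bucket in bucket_by_adapter(samples, adapter_names).items():
    inputs = tokenizer(bucket, return_tensors="pt", padding=True)
    output = peft_model.generate(**inputs, adapter_names=[name] * len(bucket), max_new_tokens=20)
```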

## Composing and Reusing LoRA Adapters
### Arrow
[Arrow](https://huggingface.co/papers/2405.11157) is a modular routing algorithm designed to combine multiple pre-trained task-specific LoRA adapters to solve a given task. Rather than merging all adapters naively, Arrow introduces a **gradient-free, token-wise mixture-of-experts (MoE) routing mechanism**. At inference time, it first computes a _prototype_ for each LoRA by extracting the top right singular vector from its SVD decomposition. Each token representation is then compared to these prototypes via cosine similarity to obtain routing coefficients. Tokens are assigned to the top-k most relevant LoRA adapters, with the coefficients normalized through softmax, and their outputs linearly combined. This allows effective reuse of existing LoRA modules for new tasks and leads to stronger zero-shot generalization.

In PEFT, Arrow is enabled through `ArrowConfig` and `create_arrow_model`. You can also configure parameters such as `top_k` (the number of LoRA adapters combined per token), `router_temperature` (the softmax temperature applied to the routing coefficients), and `rng_seed` (for reproducibility).

```py
from peft import create_arrow_model, ArrowConfig
from transformers import AutoModelForCausalLM

# Loading the model
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Creating the Arrow config
arrow_config = ArrowConfig(
    top_k=3,
    router_temperature=1.0,
    rng_seed=42,
)

# The LoRA adapters below were trained on a clustered FLAN dataset.
# Task clustering was performed using the Model-Based Clustering (MBC) method,
# as described in the Arrow paper.
# While one could train a separate LoRA for each task and let Arrow route tokens among them,
# training LoRAs on clusters of tasks instead provides an indirect optimization for
# transfer across the multi-task dataset.
task_specific_adapter_paths = [
    f"TahaBa/phi3-mini-clustered-flan/ts_expert_{i}" for i in range(10)
]

# Creating the Arrow model
model = create_arrow_model(
    base_model=base_model,
    task_specific_adapter_paths=task_specific_adapter_paths,
    arrow_config=arrow_config,
)

# The forward pass can now be called on this model, like on a normal PeftModel.
```

Furthermore, you can add or remove adapters after calling `create_arrow_model`, for example to fine-tune a new adapter or discard an unnecessary one. Once the adapters are in place, you can activate the `"arrow_router"` adapter for inference to use Arrow. Note that if you add a new LoRA adapter after `create_arrow_model` and want to fine-tune it, you must explicitly set the new adapter as active, since `"arrow_router"` is activated by default in `create_arrow_model`.

```py
from trl import SFTTrainer, SFTConfig
from peft import LoraConfig

# Adding a new adapter and activating it
# (a PEFT config is required when adding a fresh, untrained adapter)
model.add_adapter(adapter_name='new_adapter', peft_config=LoraConfig(...))
model.set_adapter('new_adapter')

# Now the model can be trained on `new_adapter`.
trainer = SFTTrainer(
    model=model,
    args=SFTConfig(...),
    ...
)

# Once training is done, activate `arrow_router` to use Arrow at inference time
model.set_adapter('arrow_router')
```

### GenKnowSub
[GenKnowSub](https://aclanthology.org/2025.acl-short.54/) augments Arrow by purifying task-specific LoRA adapters before routing. The key idea is to subtract general knowledge encoded in LoRA space—based on the [forgetting-via-negation principle](https://huggingface.co/papers/2212.04089)—so that task adapters become more isolated and focused on task-relevant signals. Concretely, GenKnowSub estimates a low-dimensional “general” subspace from a set of general (non task-specific) LoRA adapters and removes this component from each task adapter’s LoRA update prior to Arrow’s token-wise routing. This typically improves compositionality and reduces interference when combining many task adapters.

In PEFT, enable GenKnowSub by setting `use_gks=True` in `ArrowConfig`, and providing `general_adapter_paths` to `create_arrow_model`:

```py
from peft import create_arrow_model, ArrowConfig
from transformers import AutoModelForCausalLM

# Loading the model
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Creating the Arrow config
arrow_config = ArrowConfig(
    top_k=3,
    router_temperature=1.0,
    use_gks=True,
    rng_seed=42,
)

# Paths to task-specific adapters, trained on the clustered FLAN dataset (as explained above)
task_specific_adapter_paths = [
    f"TahaBa/phi3-mini-clustered-flan/ts_expert_{i}" for i in range(10)
]
# These general adapters were trained on English, German, and French Wikipedia data
# with a causal language modelling objective, on pairs like (507-token sentence, 5-token completion),
# with the loss computed on the completion
general_adapter_paths = [
    "TahaBa/phi3-mini-general-adapters/cluster0_batch16_prop1.0_langen/checkpoint-17",
    "TahaBa/phi3-mini-general-adapters/cluster0_batch16_prop1.0_langfr/checkpoint-35",
    "TahaBa/phi3-mini-general-adapters/cluster0_batch16_prop1.0_langger/checkpoint-17"
]

# Creating the Arrow model
model = create_arrow_model(
    base_model=base_model,
    task_specific_adapter_paths=task_specific_adapter_paths,
    general_adapter_paths=general_adapter_paths,
    arrow_config=arrow_config,
)

# The forward pass can now be called on this model, like on a normal PeftModel.
```

To encode general knowledge, GenKnowSub subtracts the average of the provided general adapters from each task-specific adapter once, before routing begins. The ability to add or remove adapters after calling `create_arrow_model` (as described in the Arrow section) is still supported here.
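Conceptually, the purification step amounts to the following pseudocode sketch (not the actual PEFT internals; `task_delta` and `general_deltas` stand in for the low-rank weight updates of a task adapter and of the general adapters):

```py
# average the general-knowledge LoRA updates and subtract that component
# from the task adapter's update, once, before routing begins
delta_general = sum(general_deltas) / len(general_deltas)
purified_task_delta = task_delta - delta_general
```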

> [!TIP]
> **Things to keep in mind when using Arrow + GenKnowSub:**
>
> - All LoRA adapters (task-specific and general) must share the same `rank` and `target_modules`.
>
> - Any inconsistency in these settings will raise an error in `create_arrow_model`.
>
> - Having different scaling factors (`lora_alpha`) across task adapters is supported; Arrow handles them automatically.
>
> - Merging the `"arrow_router"` is not supported, due to its dynamic routing behavior.
>
> - In `create_arrow_model`, task adapters are loaded as `task_i` and general adapters as `gks_j` (where `i` and `j` are indices). The function ensures consistency of `target_modules`, `rank`, and whether adapters are applied to `Linear` or `Linear4bit` layers. It then adds the `"arrow_router"` module and activates it. Any customization of this process requires overriding `create_arrow_model`.
>
> - This implementation is compatible with 4-bit quantization (via bitsandbytes):
>
>     ```py
>     from transformers import AutoModelForCausalLM, BitsAndBytesConfig
>     import torch
>
>     # Quantization config
>     bnb_config = BitsAndBytesConfig(
>             load_in_4bit=True,
>             bnb_4bit_quant_type="nf4",
>             bnb_4bit_compute_dtype=torch.bfloat16,
>             bnb_4bit_use_double_quant=False,
>         )
>
>     # Loading the model
>     base_model = AutoModelForCausalLM.from_pretrained(
>         "microsoft/Phi-3-mini-4k-instruct",
>         dtype=torch.bfloat16,
>         device_map="auto",
>         quantization_config=bnb_config,
>     )
>
>     # Now call create_arrow_model() as we explained before.
>     ```

<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/lora.md" />

### torch.compile
https://huggingface.co/docs/peft/v0.18.0.rc0/developer_guides/torch_compile.md

# torch.compile

In PEFT, [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) works for some but not all features. It won't always work because PEFT is highly dynamic in certain places (loading and switching between multiple adapters, for instance), which can cause trouble for `torch.compile`. In other places, `torch.compile` may work but won't be as fast as expected because of graph breaks.

If you don't see an error, it doesn't necessarily mean that `torch.compile` worked correctly: it might give you an output, but the output may be incorrect. This guide describes what works with `torch.compile` and what doesn't. For your own testing, we recommend using the latest PyTorch version, as `torch.compile` is constantly being improved.

> [!TIP]
> Unless indicated otherwise, the default `torch.compile` settings were used.
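
For reference, here is a minimal sketch of how `torch.compile` can be applied to a PEFT model; the base model and config are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
peft_model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"))
compiled_model = torch.compile(peft_model)  # default settings
```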

## Training and inference with `torch.compile`

These features **work** with `torch.compile`. Everything listed below was tested with a causal LM:

- Training with `Trainer` from 🤗 transformers
- Training with a custom PyTorch loop
- Inference
- Generation

The following adapters were tested successfully:

- AdaLoRA
- BOFT
- Bone
- IA³
- Layer Norm Tuning
- LoHa
- LoKr
- LoRA
- LoRA + DoRA
- LoRA applied to embedding layers
- OFT
- VeRA
- HRA

## Advanced PEFT features with `torch.compile`

Below are some of the more advanced PEFT features that **work**. They were all tested with LoRA.

- `modules_to_save` (i.e. `config = LoraConfig(..., modules_to_save=...)`)
- Merging adapters (one or multiple)
- Merging multiple adapters into one adapter (i.e. calling `model.add_weighted_adapter(...)`)
- Using PEFT adapters with quantization (bitsandbytes)
- Disabling adapters (i.e. using `with model.disable_adapter()`)
- Unloading (i.e. calling `model.merge_and_unload()`)
- Mixed adapter batches (i.e. calling `model(batch, adapter_names=["__base__", "default", "other", ...])`)
- Inference with multiple adapters (i.e. using `model.add_adapter` or `model.load_adapter` to load more than 1 adapter); for this, only call `torch.compile` _after_ loading all adapters, as in the sketch below
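
For the multi-adapter case from the last bullet, the order matters: load every adapter first, then compile. A sketch with placeholder model and paths:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
peft_model = PeftModel.from_pretrained(base_model, "path/to/adapter_0")  # loads as "default"
peft_model.load_adapter("path/to/adapter_1", adapter_name="other")
compiled_model = torch.compile(peft_model)  # compile only after all adapters are loaded
```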

Generally, we can expect that if a feature works correctly with LoRA and is also supported by other adapter types, it should also work for that adapter type.

## Test cases

All the use cases listed above are tested inside of [`peft/tests/test_torch_compile.py`](https://github.com/huggingface/peft/blob/main/tests/test_torch_compile.py). If you want to check in more detail how we tested a certain feature, please go to that file and check the test that corresponds to your use case.

> [!TIP]
> If you have another use case where you know that `torch.compile` does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/torch_compile.md" />

### Fully Sharded Data Parallel
https://huggingface.co/docs/peft/v0.18.0.rc0/accelerate/fsdp.md

# Fully Sharded Data Parallel

[Fully sharded data parallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) was developed for distributed training of large pretrained models with up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes, and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes.

Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT. 

# Use PEFT and FSDP

This section of the guide will help you learn how to use our [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) for performing SFT with FSDP. You'll configure the script to do SFT (supervised fine-tuning) of the Llama 70B model with LoRA and FSDP on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.

## Configuration

Start by running the following command to [create a FSDP configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.

The configuration file is used to set the default options when you launch the training script.

```bash
accelerate config --config_file fsdp_config.yaml
```

You'll be asked a few questions about your setup and to configure the following arguments. In this example, answer the questionnaire as shown in the image below.
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/fsdp-peft-config.png"/>
</div>
<small>Creating Accelerate's config to use FSDP</small>

Once this is done, the corresponding config should look like below, and you can find it in the config folder at [fsdp_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config.yaml):

```yml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_forward_prefetch: false
  fsdp_offload_params: false
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

## Launch command

The launch command is available at [run_peft_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_fsdp.sh) and it is also shown below:
```bash
accelerate launch --config_file "configs/fsdp_config.yaml"  train.py \
--seed 100 \
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
--dataset_name "smangrul/ultrachat-10k-chatml" \
--chat_template_format "chatml" \
--add_special_tokens False \
--append_concat_token False \
--splits "train,test" \
--max_seq_len 2048 \
--num_train_epochs 1 \
--logging_steps 5 \
--log_level "info" \
--logging_strategy "steps" \
--eval_strategy "epoch" \
--save_strategy "epoch" \
--push_to_hub \
--hub_private_repo True \
--hub_strategy "every_save" \
--bf16 True \
--packing True \
--learning_rate 1e-4 \
--lr_scheduler_type "cosine" \
--weight_decay 1e-4 \
--warmup_ratio 0.0 \
--max_grad_norm 1.0 \
--output_dir "llama-sft-lora-fsdp" \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--gradient_accumulation_steps 4 \
--gradient_checkpointing True \
--use_reentrant False \
--dataset_text_field "content" \
--use_flash_attn True \
--use_peft_lora True \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "all-linear" \
--use_4bit_quantization False
```

Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the FSDP config file and finetuning the 70B Llama model on a subset of the [ultrachat dataset](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k).

## The important parts

Let's dive a little deeper into the script so you can see what's going on, and understand how it works.

The first thing to know is that the script uses FSDP for distributed training, as the FSDP config has been passed. The [SFTTrainer](https://huggingface.co/docs/trl/v0.24.0/en/sft_trainer#trl.SFTTrainer) class handles all the heavy lifting of creating the PEFT model using the PEFT config that is passed. After that, when you call `trainer.train()`, the Trainer internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the FSDP config to create an FSDP-wrapped model, which is then trained. The main code snippet is below:

```python
# trainer
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=peft_config,
)
trainer.accelerator.print(f"{trainer.model}")
if model_args.use_peft_lora:
    # handle PEFT+FSDP case
    trainer.model.print_trainable_parameters()
    if getattr(trainer.accelerator.state, "fsdp_plugin", None):
        from peft.utils.other import fsdp_auto_wrap_policy

        fsdp_plugin = trainer.accelerator.state.fsdp_plugin
        fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)

# train
checkpoint = None
if training_args.resume_from_checkpoint is not None:
    checkpoint = training_args.resume_from_checkpoint
trainer.train(resume_from_checkpoint=checkpoint)

# saving final model
if trainer.is_fsdp_enabled:
    trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model()
```


Here, one main thing to note currently when using FSDP with PEFT is that `use_orig_params` needs to be `False` to realize GPU memory savings. Due to `use_orig_params=False`, the auto wrap policy for FSDP needs to change so that trainable and non-trainable parameters are wrapped separately. This is done by the code snippet below, which uses the util function `fsdp_auto_wrap_policy` from PEFT:

```python
if getattr(trainer.accelerator.state, "fsdp_plugin", None):
    from peft.utils.other import fsdp_auto_wrap_policy

    fsdp_plugin = trainer.accelerator.state.fsdp_plugin
    fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
```

## Memory usage

In the above example, the memory consumed per GPU is 72-80 GB (90-98%), as seen in the screenshot below. The slight increase in GPU memory at the end is from saving the model using the `FULL_STATE_DICT` state dict type instead of `SHARDED_STATE_DICT`, so that the model has adapter weights that can be loaded normally with the `from_pretrained` method during inference:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_fsdp_mem_usage.png"/>
</div>
<small>GPU memory usage for the training run</small>

# Use PEFT QLoRA and FSDP for finetuning large models on multiple GPUs

In this section, we will look at how to use QLoRA and FSDP for finetuning a 70B Llama model on 2x 24GB GPUs. [Answer.AI](https://www.answer.ai/), in collaboration with bitsandbytes and Hugging Face 🤗, open sourced code enabling the usage of FSDP+QLoRA and explained the whole process in their insightful blogpost [You can now train a 70b language model at home](https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html). This is now integrated into the Hugging Face ecosystem.

For this, we first need `bitsandbytes>=0.43.3`, `accelerate>=1.0.1`, `transformers>4.44.2`, `trl>0.11.4` and `peft>0.13.0`. We need to set `fsdp_cpu_ram_efficient_loading=true`, `fsdp_use_orig_params=false` and `fsdp_offload_params=true` (CPU offloading) when using the Accelerate config. When not using the accelerate launcher, you can alternatively set the environment variable `export FSDP_CPU_RAM_EFFICIENT_LOADING=true`. Here, we will be using the accelerate config below, which can be found at [fsdp_config_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config_qlora.yaml):

```yml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_forward_prefetch: false
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

The launch command is given below and is also available at [run_peft_qlora_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh):
```bash
accelerate launch --config_file "configs/fsdp_config_qlora.yaml"  train.py \
--seed 100 \
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
--dataset_name "smangrul/ultrachat-10k-chatml" \
--chat_template_format "chatml" \
--add_special_tokens False \
--append_concat_token False \
--splits "train,test" \
--max_seq_len 2048 \
--num_train_epochs 1 \
--logging_steps 5 \
--log_level "info" \
--logging_strategy "steps" \
--eval_strategy "epoch" \
--save_strategy "epoch" \
--push_to_hub \
--hub_private_repo True \
--hub_strategy "every_save" \
--bf16 True \
--packing True \
--learning_rate 1e-4 \
--lr_scheduler_type "cosine" \
--weight_decay 1e-4 \
--warmup_ratio 0.0 \
--max_grad_norm 1.0 \
--output_dir "llama-sft-qlora-fsdp" \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 2 \
--gradient_checkpointing True \
--use_reentrant True \
--dataset_text_field "content" \
--use_flash_attn True \
--use_peft_lora True \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "all-linear" \
--use_4bit_quantization True \
--use_nested_quant True \
--bnb_4bit_compute_dtype "bfloat16" \
--bnb_4bit_quant_storage_dtype "bfloat16"
```

Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization. When using mixed precision training with `bfloat16`, `bnb_4bit_quant_storage_dtype` can be either `bfloat16` for pure `bfloat16` finetuning, or `float32` for automatic mixed precision (this consumes more GPU memory). When using mixed precision training with `float16`, `bnb_4bit_quant_storage_dtype` should be set to `float32` for stable automatic mixed precision training.

In terms of training code, the important code changes are: 

```diff
...

bnb_config = BitsAndBytesConfig(
    load_in_4bit=args.use_4bit_quantization,
    bnb_4bit_quant_type=args.bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=args.use_nested_quant,
+   bnb_4bit_quant_storage=quant_storage_dtype,
)

...

model = AutoModelForCausalLM.from_pretrained(
    args.model_name_or_path,
    quantization_config=bnb_config,
    trust_remote_code=True,
    attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
+   dtype=quant_storage_dtype or torch.float32,
)
```

Notice that `dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by the Trainer and TRL.

## Memory usage

In the above example, the memory consumed per GPU is **19.6 GB** while CPU RAM usage is around **107 GB**. When disabling CPU offloading, the GPU memory usage is **35.6 GB/GPU**. Therefore, what took 16x 80GB GPUs for full finetuning, 8x 80GB GPUs with FSDP+LoRA, and a couple of 80GB GPUs with DDP+QLoRA, now requires 2x 24GB GPUs. This makes finetuning of large models more accessible.

## More resources
You can also refer to the [llama-recipes](https://github.com/facebookresearch/llama-recipes/?tab=readme-ov-file#fine-tuning) repo and the [Getting started with Llama](https://llama.meta.com/get-started/#fine-tuning) guide for how to finetune using FSDP and PEFT.

## Caveats
1. Merging when using PEFT and FSDP is currently unsupported and will raise an error.
2. Passing the `modules_to_save` config parameter is untested at present.
3. GPU memory saving when using CPU offloading is untested at present.
4. When using FSDP+QLoRA, `paged_adamw_8bit` currently results in an error when saving a checkpoint.
5. DoRA training with FSDP should work (albeit at lower speed than LoRA). If combined with bitsandbytes (QDoRA), 4-bit quantization should also work, but 8-bit quantization has known issues and is not recommended.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/accelerate/fsdp.md" />

### DeepSpeed
https://huggingface.co/docs/peft/v0.18.0.rc0/accelerate/deepspeed.md

# DeepSpeed

[DeepSpeed](https://www.deepspeed.ai/) is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization.

Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT. 

## Compatibility with `bitsandbytes` quantization + LoRA

Below is a table that summarizes the compatibility between PEFT's LoRA, [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library and DeepSpeed Zero stages with respect to fine-tuning. DeepSpeed Zero-1 and 2 will have no effect at inference as stage 1 shards the optimizer states and stage 2 shards the optimizer states and gradients:

| DeepSpeed stage   | Is compatible? |
|---|---|
| Zero-1 |  🟢 |
| Zero-2   |  🟢 |
| Zero-3  |  🟢 |

For DeepSpeed Stage 3 + QLoRA, please refer to the section [Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs](#use-peft-qlora-and-deepspeed-with-zero3-for-finetuning-large-models-on-multiple-gpus) below.

For confirming these observations, we ran the SFT (supervised fine-tuning) [official example scripts](https://github.com/huggingface/trl/tree/main/examples) of the [Transformers Reinforcement Learning (TRL) library](https://github.com/huggingface/trl) using QLoRA + PEFT and the accelerate configs available [here](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs). We ran these experiments on 2x NVIDIA T4 GPUs.

# Use PEFT and DeepSpeed with ZeRO3 for finetuning large models on multiple devices and multiple nodes

This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) for performing SFT. You'll configure the script to do SFT (supervised fine-tuning) of the Llama 70B model with LoRA and ZeRO-3 on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.

## Configuration

Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.

The configuration file is used to set the default options when you launch the training script.

```bash
accelerate config --config_file deepspeed_config.yaml
```

You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3, so make sure you pick those options.

```bash
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them. Pass the same value as you would pass via the cmd argument, otherwise you will encounter a mismatch error.
`gradient_clipping`: Enable gradient clipping with value. Don't set this, as you will be passing it via cmd arguments.
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2. Set this to `none` as we don't want to enable offloading.
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3. Set this to `none` as we don't want to enable offloading.
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3. Set this to `True`.
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3. Set this to `True`.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training. Set this to `bf16`.
```

Once this is done, the corresponding config should look like the one below, and you can find it in the config folder at [deepspeed_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config.yaml):

```yml
compute_environment: LOCAL_MACHINE                                                                                                                                           
debug: false
deepspeed_config:
  deepspeed_multinode_launcher: standard
  gradient_accumulation_steps: 4
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

## Launch command

The launch command is available at [run_peft_deepspeed.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_deepspeed.sh) and it is also shown below:
```bash
accelerate launch --config_file "configs/deepspeed_config.yaml"  train.py \
--seed 100 \
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
--dataset_name "smangrul/ultrachat-10k-chatml" \
--chat_template_format "chatml" \
--add_special_tokens False \
--append_concat_token False \
--splits "train,test" \
--max_seq_len 2048 \
--num_train_epochs 1 \
--logging_steps 5 \
--log_level "info" \
--logging_strategy "steps" \
--eval_strategy "epoch" \
--save_strategy "epoch" \
--push_to_hub \
--hub_private_repo True \
--hub_strategy "every_save" \
--bf16 True \
--packing True \
--learning_rate 1e-4 \
--lr_scheduler_type "cosine" \
--weight_decay 1e-4 \
--warmup_ratio 0.0 \
--max_grad_norm 1.0 \
--output_dir "llama-sft-lora-deepspeed" \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--gradient_accumulation_steps 4 \
--gradient_checkpointing True \
--use_reentrant False \
--dataset_text_field "content" \
--use_flash_attn True \
--use_peft_lora True \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "all-linear" \
--use_4bit_quantization False
```

Notice that we are using LoRA with rank=8, alpha=16, and targeting all linear layers. We are passing the DeepSpeed config file and finetuning the 70B Llama model on a subset of the ultrachat dataset.
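
For reference, a roughly equivalent `LoraConfig` for those LoRA flags is sketched below. This is only an illustration; the training script builds its own config from the command-line arguments.

```py
from peft import LoraConfig

# approximate Python equivalent of the LoRA-related CLI flags above
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
```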

## The important parts

Let's dive a little deeper into the script so you can see what's going on, and understand how it works.

The first thing to know is that the script uses DeepSpeed for distributed training, as the DeepSpeed config has been passed. The [SFTTrainer](https://huggingface.co/docs/trl/v0.24.0/en/sft_trainer#trl.SFTTrainer) class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, [SFTTrainer](https://huggingface.co/docs/trl/v0.24.0/en/sft_trainer#trl.SFTTrainer) internally uses 🤗 Accelerate to prepare the model, optimizer, and trainer using the DeepSpeed config to create the DeepSpeed engine, which is then trained. The main code snippet is below:

```python
# trainer
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=peft_config,
)
trainer.accelerator.print(f"{trainer.model}")

# train
checkpoint = None
if training_args.resume_from_checkpoint is not None:
    checkpoint = training_args.resume_from_checkpoint
trainer.train(resume_from_checkpoint=checkpoint)

# saving final model
trainer.save_model()
```

## Memory usage

In the above example, the memory consumed per GPU is 64 GB (80%) as seen in the screenshot below:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_deepspeed_mem_usage.png"/>
</div>
<small>GPU memory usage for the training run</small>

## More resources
You can also refer to the blog post [Falcon 180B Finetuning using 🤗 PEFT and DeepSpeed](https://medium.com/@sourabmangrulkar/falcon-180b-finetuning-using-peft-and-deepspeed-b92643091d99) to learn how to finetune the 180B Falcon model on 16 A100 GPUs across 2 machines.


# Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs

In this section, we will look at how to use QLoRA and DeepSpeed Stage-3 for finetuning the 70B Llama model on 2x40GB GPUs.
For this, we first need `bitsandbytes>=0.43.3`, `accelerate>=1.0.1`, `transformers>4.44.2`, `trl>0.11.4` and `peft>0.13.0`. We need to set `zero3_init_flag` to true in the Accelerate config. Below is the config, which can be found at [deepspeed_config_z3_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config_z3_qlora.yaml):

```yml
compute_environment: LOCAL_MACHINE                                                                                                                                           
debug: false
deepspeed_config:
  deepspeed_multinode_launcher: standard
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

The launch command is given below and is also available at [run_peft_qlora_deepspeed_stage3.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_deepspeed_stage3.sh):
```bash
accelerate launch --config_file "configs/deepspeed_config_z3_qlora.yaml"  train.py \
--seed 100 \
--model_name_or_path "meta-llama/Llama-2-70b-hf" \
--dataset_name "smangrul/ultrachat-10k-chatml" \
--chat_template_format "chatml" \
--add_special_tokens False \
--append_concat_token False \
--splits "train,test" \
--max_seq_len 2048 \
--num_train_epochs 1 \
--logging_steps 5 \
--log_level "info" \
--logging_strategy "steps" \
--eval_strategy "epoch" \
--save_strategy "epoch" \
--push_to_hub \
--hub_private_repo True \
--hub_strategy "every_save" \
--bf16 True \
--packing True \
--learning_rate 1e-4 \
--lr_scheduler_type "cosine" \
--weight_decay 1e-4 \
--warmup_ratio 0.0 \
--max_grad_norm 1.0 \
--output_dir "llama-sft-qlora-dsz3" \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 2 \
--gradient_checkpointing True \
--use_reentrant True \
--dataset_text_field "content" \
--use_flash_attn True \
--use_peft_lora True \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "all-linear" \
--use_4bit_quantization True \
--use_nested_quant True \
--bnb_4bit_compute_dtype "bfloat16" \
--bnb_4bit_quant_storage_dtype "bfloat16"
```

Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type used for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization.
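
As a quick illustration of that arithmetic (this snippet is not part of the training script), the number of 4-bit parameters packed per storage element is simply the bit width of the storage dtype divided by 4:

```py
import torch

# bfloat16 is a 16-bit type, so 16 / 4 = 4 quantized 4-bit params fit in one element
storage_bits = torch.finfo(torch.bfloat16).bits
print(storage_bits // 4)  # 4
```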

In terms of training code, the important code changes are: 

```diff
...

bnb_config = BitsAndBytesConfig(
    load_in_4bit=args.use_4bit_quantization,
    bnb_4bit_quant_type=args.bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=args.use_nested_quant,
+   bnb_4bit_quant_storage=quant_storage_dtype,
)

...

model = AutoModelForCausalLM.from_pretrained(
    args.model_name_or_path,
    quantization_config=bnb_config,
    trust_remote_code=True,
    attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
+   dtype=quant_storage_dtype or torch.float32,
)
```

Notice that the `dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by Trainer and TRL.
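
If you want to verify this on your end, a quick sanity check (a sketch only, continuing from the snippet above) is to confirm that the parameters of the loaded model report a uniform storage dtype, which is what lets ZeRO-3 shard them consistently:

```py
from collections import Counter

# count parameter dtypes of the quantized model; with bnb_4bit_quant_storage=bfloat16
# and dtype=bfloat16 you would expect to see essentially a single bfloat16 entry
print(Counter(p.dtype for p in model.parameters()))
```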

## Memory usage

In the above example, the memory consumed per GPU is **36.6 GB**. Therefore, what took 8X80GB GPUs with DeepSpeed Stage 3+LoRA and a couple of 80GB GPUs with DDP+QLoRA now requires 2X40GB GPUs. This makes finetuning of large models more accessible.

# Use PEFT and DeepSpeed with ZeRO3 and CPU Offloading for finetuning large models on a single GPU
This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You'll configure the script to train a large model for conditional generation with ZeRO-3 and CPU offloading.

> [!TIP]
> 💡 To help you get started, check out our example training scripts for [causal language modeling](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_accelerate_ds_zero3_offload.py) and [conditional generation](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You can adapt these scripts for your own applications or even use them out of the box if your task is similar to the one in the scripts.

## Configuration

Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.

The configuration file is used to set the default options when you launch the training script.

```bash
accelerate config --config_file ds_zero3_cpu.yaml
```

You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3 along with CPU offloading, so make sure you pick those options.

```bash
`zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
`gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
`gradient_clipping`: Enable gradient clipping with value.
`offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
`offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
`zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
`zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
`mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training. 
```

An example [configuration file](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/accelerate_ds_zero3_cpu_offload_config.yaml) might look like the following. The most important thing to notice is that `zero_stage` is set to `3`, and `offload_optimizer_device` and `offload_param_device` are set to `cpu`.

```yml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
use_cpu: false
```

## The important parts

Let's dive a little deeper into the script so you can see what's going on, and understand how it works.

Within the [`main`](https://github.com/huggingface/peft/blob/2822398fbe896f25d4dac5e468624dc5fd65a51b/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py#L103) function, the script creates an [Accelerator](https://huggingface.co/docs/accelerate/v1.11.0/en/package_reference/accelerator#accelerate.Accelerator) class to initialize all the necessary requirements for distributed training.

> [!TIP]
> 💡 Feel free to change the model and dataset inside the `main` function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.

The script also creates a configuration for the 🤗 PEFT method you're using, which in this case, is LoRA. The [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig) specifies the task type and important parameters such as the dimension of the low-rank matrices, the matrices scaling factor, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace `LoraConfig` with the appropriate [class](../package_reference/tuners).

```diff
 def main():
+    accelerator = Accelerator()
     model_name_or_path = "facebook/bart-large"
     dataset_name = "twitter_complaints"
+    peft_config = LoraConfig(
         task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
     )
```

Throughout the script, you'll see the [main_process_first](https://huggingface.co/docs/accelerate/v1.11.0/en/package_reference/accelerator#accelerate.Accelerator.main_process_first) and [wait_for_everyone](https://huggingface.co/docs/accelerate/v1.11.0/en/package_reference/accelerator#accelerate.Accelerator.wait_for_everyone) functions which help control and synchronize when processes are executed.

The [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function takes a base model and the `peft_config` you prepared earlier to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel):

```diff
  model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+ model = get_peft_model(model, peft_config)
```

Pass all the relevant training objects to 🤗 Accelerate's [prepare](https://huggingface.co/docs/accelerate/v1.11.0/en/package_reference/accelerator#accelerate.Accelerator.prepare) which makes sure everything is ready for training:

```py
model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
    model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler
)
```

The next bit of code checks whether the DeepSpeed plugin is used in the `Accelerator` and, if the plugin exists, whether we are using ZeRO-3. This flag is later passed to the `generate` call during inference to keep the GPUs in sync while the model parameters are sharded:

```py
is_ds_zero_3 = False
if getattr(accelerator.state, "deepspeed_plugin", None):
    is_ds_zero_3 = accelerator.state.deepspeed_plugin.zero_stage == 3
```
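
During evaluation and prediction, the flag is passed to `generate` via the `synced_gpus` argument so that all ranks keep stepping through generation together while ZeRO-3 gathers the sharded parameters. A sketch of the call is shown below; the exact argument names and values used in the script may differ:

```py
# illustrative generation call; `accelerator`, `model` and `batch` come from the script
outputs = accelerator.unwrap_model(model).generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    max_new_tokens=10,          # illustrative value
    synced_gpus=is_ds_zero_3,   # keep ranks in sync when parameters are sharded
)
```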

Inside the training loop, the usual `loss.backward()` is replaced by 🤗 Accelerate's [backward](https://huggingface.co/docs/accelerate/v1.11.0/en/package_reference/accelerator#accelerate.Accelerator.backward) which uses the correct `backward()` method based on your configuration:

```diff
  for epoch in range(num_epochs):
      with TorchTracemalloc() as tracemalloc:
          model.train()
          total_loss = 0
          for step, batch in enumerate(tqdm(train_dataloader)):
              outputs = model(**batch)
              loss = outputs.loss
              total_loss += loss.detach().float()
+             accelerator.backward(loss)
              optimizer.step()
              lr_scheduler.step()
              optimizer.zero_grad()
```

That is all! The rest of the script handles the training loop, evaluation, and even pushes the model to the Hub for you.

## Train

Run the following command to launch the training script. Earlier, you saved the configuration file to `ds_zero3_cpu.yaml`, so you'll need to pass the path to the launcher with the `--config_file` argument like this:

```bash
accelerate launch --config_file ds_zero3_cpu.yaml examples/peft_lora_seq2seq_accelerate_ds_zero3_offload.py
```

You'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and compares the predictions to the labels:

```bash
GPU Memory before entering the train : 1916
GPU Memory consumed at the end of the train (end-begin): 66
GPU Peak Memory consumed during the train (max-begin): 7488
GPU Total Peak Memory consumed during the train (max): 9404
CPU Memory before entering the train : 19411
CPU Memory consumed at the end of the train (end-begin): 0
CPU Peak Memory consumed during the train (max-begin): 0
CPU Total Peak Memory consumed during the train (max): 19411
epoch=4: train_ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
100%|████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00,  3.92s/it]
GPU Memory before entering the eval : 1982
GPU Memory consumed at the end of the eval (end-begin): -66
GPU Peak Memory consumed during the eval (max-begin): 672
GPU Total Peak Memory consumed during the eval (max): 2654
CPU Memory before entering the eval : 19411
CPU Memory consumed at the end of the eval (end-begin): 0
CPU Peak Memory consumed during the eval (max-begin): 0
CPU Total Peak Memory consumed during the eval (max): 19411
accuracy=100.0
eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
dataset['train'][label_column][:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
```

# Caveats
1. Merging when using PEFT and DeepSpeed is currently unsupported and will raise an error.
2. When using CPU offloading, the major gains from using PEFT to shrink the optimizer states and gradients to the size of the adapter weights are realized in CPU RAM, and there won't be savings with respect to GPU memory.
3. DeepSpeed Stage 3 and QLoRA, when used with CPU offloading, lead to more GPU memory usage compared to disabling CPU offloading.

> [!TIP]
> 💡 When you have code that requires merging (and unmerging) of weights, try to manually collect the parameters with DeepSpeed Zero-3 beforehand:
>
> ```python
> import deepspeed
>
> is_ds_zero_3 = ... # check if Zero-3
>
> with deepspeed.zero.GatheredParameters(list(model.parameters()), enabled=is_ds_zero_3):
>     model.merge_adapter()
>     # do whatever is needed, then unmerge in the same context if unmerging is required
>     ...
>     model.unmerge_adapter()
> ```


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/accelerate/deepspeed.md" />

### PEFT configurations and models
https://huggingface.co/docs/peft/v0.18.0.rc0/tutorial/peft_model_config.md

# PEFT configurations and models

The sheer size of today's large pretrained models - which commonly have billions of parameters - presents a significant training challenge because they require more storage space and more computational power to crunch all those calculations. You'll need access to powerful GPUs or TPUs to train these large pretrained models, which is expensive, not widely accessible to everyone, not environmentally friendly, and not very practical. PEFT methods address many of these challenges. There are several types of PEFT methods (soft prompting, matrix decomposition, adapters), but they all focus on the same thing: reducing the number of trainable parameters. This makes it more accessible to train and store large models on consumer hardware.

The PEFT library is designed to help you quickly train large models on free or low-cost GPUs, and in this tutorial, you'll learn how to set up a configuration to apply a PEFT method to a pretrained base model for training. Once the PEFT configuration is set up, you can use any training framework you like (Transformers' [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer) class, [Accelerate](https://hf.co/docs/accelerate), a custom PyTorch training loop).

## PEFT configurations

> [!TIP]
> Learn more about the parameters you can configure for each PEFT method in their respective API reference page.

A configuration stores important parameters that specify how a particular PEFT method should be applied.

For example, take a look at the following [`LoraConfig`](https://huggingface.co/ybelkada/opt-350m-lora/blob/main/adapter_config.json) for applying LoRA and [`PromptEncoderConfig`](https://huggingface.co/smangrul/roberta-large-peft-p-tuning/blob/main/adapter_config.json) for applying p-tuning (these configuration files are already JSON-serialized). Whenever you load a PEFT adapter, it is a good idea to check whether it has an associated adapter_config.json file, which is required.

<hfoptions id="config">
<hfoption id="LoraConfig">

```json
{
  "base_model_name_or_path": "facebook/opt-350m", #base model to apply LoRA to
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layers_pattern": null,
  "layers_to_transform": null,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "modules_to_save": null,
  "peft_type": "LORA", #PEFT method type
  "r": 16,
  "revision": null,
  "target_modules": [
    "q_proj", #model modules to apply LoRA to (query and value projection layers)
    "v_proj"
  ],
  "task_type": "CAUSAL_LM" #type of task to train model on
}
```

You can create your own configuration for training by initializing a [LoraConfig](/docs/peft/v0.18.0.rc0/en/package_reference/lora#peft.LoraConfig).

```py
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    task_type=TaskType.CAUSAL_LM,
    lora_alpha=32,
    lora_dropout=0.05
)
```

</hfoption>
<hfoption id="PromptEncoderConfig">

```json
{
  "base_model_name_or_path": "roberta-large", #base model to apply p-tuning to
  "encoder_dropout": 0.0,
  "encoder_hidden_size": 128,
  "encoder_num_layers": 2,
  "encoder_reparameterization_type": "MLP",
  "inference_mode": true,
  "num_attention_heads": 16,
  "num_layers": 24,
  "num_transformer_submodules": 1,
  "num_virtual_tokens": 20,
  "peft_type": "P_TUNING", #PEFT method type
  "task_type": "SEQ_CLS", #type of task to train model on
  "token_dim": 1024
}
```

You can create your own configuration for training by initializing a [PromptEncoderConfig](/docs/peft/v0.18.0.rc0/en/package_reference/p_tuning#peft.PromptEncoderConfig).

```py
from peft import PromptEncoderConfig, TaskType

p_tuning_config = PromptEncoderConfig(
    encoder_reparameterization_type="MLP",
    encoder_hidden_size=128,
    num_attention_heads=16,
    num_layers=24,
    num_transformer_submodules=1,
    num_virtual_tokens=20,
    token_dim=1024,
    task_type=TaskType.SEQ_CLS
)
```

</hfoption>
</hfoptions>

## PEFT models

With a PEFT configuration in hand, you can now apply it to any pretrained model to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel). Choose from any of the state-of-the-art models from the [Transformers](https://hf.co/docs/transformers) library, a custom model, or even new and unsupported transformer architectures.

For this tutorial, load a base [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model to finetune.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
```

Use the [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) function to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) from the base facebook/opt-350m model and the `lora_config` you created earlier.

```py
from peft import get_peft_model

lora_model = get_peft_model(model, lora_config)
lora_model.print_trainable_parameters()
"trainable params: 1,572,864 || all params: 332,769,280 || trainable%: 0.472659014678278"
```

> [!WARNING]
> When calling [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model), the base model will be modified *in-place*. That means, when calling [get_peft_model()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.get_peft_model) on a model that was already modified in the same way before, this model will be further mutated. Therefore, if you would like to modify your PEFT configuration after having called `get_peft_model()` before, you would first have to unload the model with [unload()](/docs/peft/v0.18.0.rc0/en/package_reference/tuners#peft.tuners.tuners_utils.BaseTuner.unload) and then call `get_peft_model()` with your new configuration. Alternatively, you can re-initialize the model to ensure a fresh, unmodified state before applying a new PEFT configuration.

Now you can train the [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) with your preferred training framework! After training, you can save your model locally with [save_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.save_pretrained) or upload it to the Hub with the [push_to_hub](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/model#transformers.PreTrainedModel.push_to_hub) method.

```py
# save locally
lora_model.save_pretrained("your-name/opt-350m-lora")

# push to Hub
lora_model.push_to_hub("your-name/opt-350m-lora")
```

To load a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) for inference, you'll need to provide the [PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig) used to create it and the base model it was trained from.

```py
from peft import PeftModel, PeftConfig

config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora")
```

> [!TIP]
> By default, the [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) is set for inference, but if you'd like to train the adapter some more you can set `is_trainable=True`.
>
> ```py
> lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora", is_trainable=True)
> ```

The [PeftModel.from_pretrained()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.from_pretrained) method is the most flexible way to load a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) because it doesn't matter what model framework was used (Transformers, timm, a generic PyTorch model). Other classes, like [AutoPeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/auto_class#peft.AutoPeftModel), are just convenient wrappers around the base [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) and make it easier to load PEFT models directly from the Hub or from a local directory where the PEFT weights are stored.

```py
from peft import AutoPeftModelForCausalLM

lora_model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
```

Take a look at the [AutoPeftModel](package_reference/auto_class) API reference to learn more about the [AutoPeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/auto_class#peft.AutoPeftModel) classes.

## Next steps

With the appropriate [PeftConfig](/docs/peft/v0.18.0.rc0/en/package_reference/config#peft.PeftConfig), you can apply it to any pretrained model to create a [PeftModel](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel) and train large powerful models faster on freely available GPUs! To learn more about PEFT configurations and models, the following guide may be helpful:

* Learn how to configure a PEFT method for models that aren't from Transformers in the [Working with custom models](../developer_guides/custom_models) guide.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/tutorial/peft_model_config.md" />

### PEFT integrations
https://huggingface.co/docs/peft/v0.18.0.rc0/tutorial/peft_integrations.md

# PEFT integrations

PEFT's practical benefits extend to other Hugging Face libraries like [Diffusers](https://hf.co/docs/diffusers) and [Transformers](https://hf.co/docs/transformers). One of the main benefits of PEFT is that an adapter file generated by a PEFT method is a lot smaller than the original model, which makes it super easy to manage and use multiple adapters. You can use one pretrained base model for multiple tasks by simply loading a new adapter finetuned for the task you're solving. Or you can combine multiple adapters with a text-to-image diffusion model to create new effects.

This tutorial will show you how PEFT can help you manage adapters in Diffusers and Transformers.

## Diffusers

Diffusers is a generative AI library for creating images and videos from text or images with diffusion models. LoRA is an especially popular training method for diffusion models because you can very quickly train and share diffusion models to generate images in new styles. To make it easier to use and try multiple LoRA models, Diffusers uses the PEFT library to help manage different adapters for inference.

For example, load a base model and then load the [artificialguybr/3DRedmond-V1](https://huggingface.co/artificialguybr/3DRedmond-V1) adapter for inference with the [`load_lora_weights`](https://huggingface.co/docs/diffusers/v0.24.0/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights) method. The `adapter_name` argument in the loading method is enabled by PEFT and allows you to set a name for the adapter so it is easier to reference.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "peft-internal-testing/artificialguybr__3DRedmond-V1", 
    weight_name="3DRedmond-3DRenderStyle-3DRenderAF.safetensors", 
    adapter_name="3d"
)
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/test-lora-diffusers.png"/>
</div>

Now let's try another cool LoRA model, [ostris/super-cereal-sdxl-lora](https://huggingface.co/ostris/super-cereal-sdxl-lora). All you need to do is load and name this new adapter with `adapter_name`, and use the [`set_adapters`](https://huggingface.co/docs/diffusers/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters) method to set it as the currently active adapter.

```py
pipeline.load_lora_weights(
    "ostris/super-cereal-sdxl-lora", 
    weight_name="cereal_box_sdxl_v1.safetensors", 
    adapter_name="cereal"
)
pipeline.set_adapters("cereal")
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/test-lora-diffusers-2.png"/>
</div>

Finally, you can call the [`disable_lora`](https://huggingface.co/docs/diffusers/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora) method to restore the base model.

```py
pipeline.disable_lora()
```

Learn more about how PEFT supports Diffusers in the [Inference with PEFT](https://huggingface.co/docs/diffusers/tutorials/using_peft_for_inference) tutorial.

## Transformers

🤗 [Transformers](https://hf.co/docs/transformers) is a collection of pretrained models for all types of tasks in all modalities. You can load these models for training or inference. Many of the models are large language models (LLMs), so it makes sense to integrate PEFT with Transformers to manage and train adapters.

Load a base pretrained model to train.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
```

Next, add an adapter configuration to specify how to adapt the model parameters. Call the [add_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.add_adapter) method to add the configuration to the base model.

```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM"
)
model.add_adapter(peft_config)
```

Now you can train the model with Transformers' [Trainer](https://huggingface.co/docs/transformers/v4.57.1/en/main_classes/trainer#transformers.Trainer) class or whichever training framework you prefer.
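
As an example, a minimal training loop with `Trainer` could look like the following sketch. The dataset, tokenization, and hyperparameters here are illustrative assumptions for a small causal language modeling run, not part of the official recipe:

```py
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# illustrative dataset; replace with your own data
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,  # the base model with the LoRA adapter added above
    args=TrainingArguments(output_dir="opt-350m-lora", per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```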

To use the newly trained model for inference, load it with the [AutoModel](https://huggingface.co/docs/transformers/v4.57.1/en/model_doc/auto#transformers.AutoModel) class, which uses PEFT on the backend to load the adapter weights and configuration file into the base pretrained model.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("peft-internal-testing/opt-350m-lora")
```

Alternatively, you can use Transformers [Pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines) to load the model and conveniently run inference:

```py
from transformers import pipeline

model = pipeline("text-generation", "peft-internal-testing/opt-350m-lora")
print(model("Hello World"))
```

If you're interested in comparing or using more than one adapter, you can call the [add_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.add_adapter) method to add the adapter configuration to the base model. The only requirement is the adapter type must be the same (you can't mix a LoRA and LoHa adapter).

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# example LoRA config; the hyperparameter values here are illustrative
lora_config_1 = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")
model.add_adapter(lora_config_1, adapter_name="adapter_1")
```

Call [add_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.add_adapter) again to attach a new adapter to the base model.

```py
lora_config_2 = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")  # illustrative values
model.add_adapter(lora_config_2, adapter_name="adapter_2")
```

Then you can use [set_adapter()](/docs/peft/v0.18.0.rc0/en/package_reference/peft_model#peft.PeftModel.set_adapter) to set the currently active adapter.

```py
model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To disable the adapter, call the [disable_adapters](https://github.com/huggingface/transformers/blob/4e3490f79b40248c53ee54365a9662611e880892/src/transformers/integrations/peft.py#L313) method.

```py
model.disable_adapters()
```

The [enable_adapters](https://github.com/huggingface/transformers/blob/4e3490f79b40248c53ee54365a9662611e880892/src/transformers/integrations/peft.py#L336) method can be used to enable the adapters again.
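
```py
model.enable_adapters()
```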

If you're curious, check out the [Load and train adapters with PEFT](https://huggingface.co/docs/transformers/main/peft) tutorial to learn more.


<EditOnGithub source="https://github.com/huggingface/peft/blob/main/docs/source/tutorial/peft_integrations.md" />
