C3A is a parameter-efficient fine-tuning technique that leverages Circular Convolution to achieve high rank adaptation within reasonable resource limits.
Note that you should use a much larger learning rate (LR) for C3A than for other methods. For example, an LR of 1e-1 is a good starting point for C3A. In addition, a much smaller weight decay should be used. You can refer to the method_comparison folder for more details.
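As a rough sketch of what this means in practice (the variable `peft_model` and the exact weight-decay value are illustrative assumptions, not tuned defaults):

```python
import torch

# A minimal sketch, assuming `peft_model` is a model already wrapped with a
# C3A adapter (see the usage example further down).
optimizer = torch.optim.AdamW(
    peft_model.parameters(),
    lr=1e-1,            # much larger than typical LoRA LRs (~1e-4 to 1e-3)
    weight_decay=1e-5,  # illustrative: keep weight decay much smaller than usual
)
```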
The block_size affects both the number of tunable parameters and the final performance. To start with, you can choose a block_size near $\frac{\sqrt{d_1\times d_2}}{r}$, where $d_1$ and $d_2$ are the input and output sizes of the target layer and $r$ is the LoRA rank you would use for this task.
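For instance, this heuristic can be computed and snapped to a valid divisor like so (the layer sizes and rank below are placeholder values):

```python
import math

d1, d2 = 4096, 4096  # input/output sizes of a target layer (example values)
r = 16               # the LoRA rank you would have used for this task

target = math.sqrt(d1 * d2) / r  # heuristic starting point: 256 here
# block_size must divide both d1 and d2, so snap to the closest common divisor
divisors = [b for b in range(1, min(d1, d2) + 1) if d1 % b == 0 and d2 % b == 0]
block_size = min(divisors, key=lambda b: abs(b - target))
print(block_size)  # 256 for these example values
```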
C3A currently has the following constraints:
- Only `nn.Linear` layers are supported.

If these constraints don’t work for your use case, consider other methods instead.
The abstract from the paper is:
Low-Rank Adaptation (LoRA) has gained popularity for fine-tuning large foundation models, leveraging low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ to represent weight changes (i.e., $\Delta \mathbf{W} = \mathbf{B} \mathbf{A}$). This method reduces trainable parameters and mitigates heavy memory consumption associated with full delta matrices by sequentially multiplying $\mathbf{A}$ and $\mathbf{B}$ with the activation. Despite its success, the intrinsic low-rank characteristic may limit its performance. Although several variants have been proposed to address this issue, they often overlook the crucial computational and memory efficiency brought by LoRA. In this paper, we propose Circular Convolution Adaptation (C3A), which not only achieves high-rank adaptation with enhanced performance but also excels in both computational power and memory utilization. Extensive experiments demonstrate that C3A consistently outperforms LoRA and its variants across various fine-tuning tasks.
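To make the core idea concrete, the following is a small numerical sketch (not C3A itself, just the underlying identity) showing that circular convolution with a $d$-dimensional kernel equals multiplication by a $d \times d$ circulant matrix, which is generically full rank despite having only $d$ parameters:

```python
import torch

torch.manual_seed(0)
d = 4
w = torch.randn(d)  # convolution kernel (the trainable parameters)
x = torch.randn(d)  # input activation

# Circular convolution via FFT: O(d log d) compute, O(d) parameters
y_fft = torch.fft.ifft(torch.fft.fft(w) * torch.fft.fft(x)).real

# Equivalent dense form: column k of the circulant matrix is w rolled by k
C = torch.stack([torch.roll(w, shifts=k) for k in range(d)], dim=1)
y_dense = C @ x

print(torch.allclose(y_fft, y_dense, atol=1e-5))  # True
print(torch.linalg.matrix_rank(C))                # d (full rank, generically)
```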
class peft.C3AConfig

( task_type: Optional[Union[str, TaskType]] = None peft_type: Optional[Union[str, PeftType]] = None auto_mapping: Optional[dict] = None peft_version: Optional[str] = None base_model_name_or_path: Optional[str] = None revision: Optional[str] = None inference_mode: bool = False block_size: int = 256 target_modules: Optional[Union[list[str], str]] = None bias: str = 'none' modules_to_save: Optional[list[str]] = None layers_to_transform: Optional[Union[list[int], int]] = None layers_pattern: Optional[Union[list[str], str]] = None block_size_pattern: Optional[dict] = <factory> init_weights: Optional[Union[bool, Literal['gaussian', 'kaiming_uniform', 'xavier_uniform']]] = 'xavier_uniform' )
Parameters

- **block_size** (`int`) — The block size for C3A, which must be divisible by both the input size and the output size of the target layer. If you have no idea what block_size you should use, set it to the greatest common divisor of all input & output sizes of your target layers. Increasing this results in fewer parameters.
- **target_modules** (`Union[list[str], str]`) — The names of the modules to apply C3A to.
- **bias** (`str`) — Bias type for C3A. Can be 'none', 'all' or 'c3a_only'. If 'all' or 'c3a_only', the corresponding biases will be updated during training. Be aware that this means that, even when disabling the adapters, the model will not produce the same output as the base model would have without adaptation.
- **modules_to_save** (`list[str]`) — List of modules apart from C3A layers to be set as trainable and saved in the final checkpoint.
- **layers_to_transform** (`Union[list[int], int]`) — The layer indexes to transform. If this argument is specified, C3A is applied only to the layer indexes in this list. If a single integer is passed, C3A is applied to the layer at this index.
- **layers_pattern** (`str`) — The layer pattern name, used only if layers_to_transform is different from None and if the layer pattern is not in the common layers pattern.
- **block_size_pattern** (`dict`) — The mapping from layer names or regexp expressions to block_sizes that differ from the default one specified. For example, `{"model.decoder.layers.0.encoder_attn.k_proj": 1280}`.
- **init_weights** (`Union[bool, Literal["gaussian", "kaiming_uniform", "xavier_uniform"]]`) — How to initialize the C3A weights. Defaults to 'xavier_uniform'. Setting this to False also uses 'xavier_uniform'. To set the weights to zeros (thus making C3A a no-op), set the value to True.

This is the configuration class to store the configuration of a C3AModel.
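A minimal usage sketch, assuming `C3AConfig` and `get_peft_model` are importable from `peft` as with other PEFT methods (the checkpoint and target module names below are placeholders):

```python
from transformers import AutoModelForSequenceClassification
from peft import C3AConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained("roberta-base")

# block_size (64) must divide the input and output sizes of every target layer;
# roberta-base attention projections are 768x768, and 64 divides 768.
config = C3AConfig(
    block_size=64,
    target_modules=["query", "value"],
)
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()
```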
class peft.C3AModel

( model peft_config: Union[PeftConfig, dict[str, PeftConfig]] adapter_name: str low_cpu_mem_usage: bool = False state_dict: Optional[dict[str, torch.Tensor]] = None ) → torch.nn.Module
Parameters
torch.nn.Module) — The model to be adapted. str) — The name of the adapter, defaults to "default". Returns
torch.nn.Module
The C3A model.
Creates a C3A model from a pretrained transformers model.
The method is described in detail in https://huggingface.co/papers/2407.19342.
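A sketch of constructing the model directly, assuming `C3AModel` is exported from `peft` like other tuner model classes; in practice, `get_peft_model` with a `C3AConfig` (as shown above) is the usual entry point:

```python
from transformers import AutoModelForCausalLM
from peft import C3AConfig, C3AModel

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = C3AConfig(block_size=64, target_modules=["q_proj", "v_proj"])

# Wraps the base model, replacing the targeted nn.Linear layers with C3A layers
c3a_model = C3AModel(base_model, config, adapter_name="default")
```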
Attributes: