LyCORIS (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) methods are LoRA-like matrix decomposition adapters that modify the cross-attention layers of the UNet. The LoHa and LoKr methods inherit from the Lycoris classes here.
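To make the "matrix decomposition" idea concrete, here is a minimal numpy sketch of the two factorizations mentioned above: LoHa builds the weight delta as a Hadamard (elementwise) product of two low-rank matrices, while LoKr builds it as a Kronecker product of two small matrices. This is an illustration of the math only, not the PEFT implementation; all shapes and variable names are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

# LoHa: delta_W = (B1 @ A1) * (B2 @ A2), the elementwise (Hadamard)
# product of two rank-r factorizations. Since rank(M * N) <= rank(M) * rank(N),
# the delta can reach rank r**2 while storing only 2 * r * (d_out + d_in) values.
B1, A1 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
B2, A2 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
delta_loha = (B1 @ A1) * (B2 @ A2)
assert delta_loha.shape == (d_out, d_in)
assert np.linalg.matrix_rank(delta_loha) <= r ** 2

# LoKr: delta_W = kron(C, D), a Kronecker product of two small matrices
# whose dimensions multiply together to give the full weight shape.
C = rng.normal(size=(2, 2))
D = rng.normal(size=(d_out // 2, d_in // 2))
delta_lokr = np.kron(C, D)
assert delta_lokr.shape == (d_out, d_in)
```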
( task_type: Optional[Union[str, TaskType]] = None, peft_type: Optional[Union[str, PeftType]] = None, auto_mapping: Optional[dict] = None, peft_version: Optional[str] = None, base_model_name_or_path: Optional[str] = None, revision: Optional[str] = None, inference_mode: bool = False, rank_pattern: Optional[dict] = <factory>, alpha_pattern: Optional[dict] = <factory> )
A base config for LyCORIS-like adapters.
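The `rank_pattern` and `alpha_pattern` fields in the signature above are dicts that override the default rank/alpha for layers whose names match a given key. The helper below is hypothetical (not PEFT's actual matching code, which uses its own regex logic) and only sketches how such a pattern dict is typically resolved.

```python
import re

def resolve_rank(layer_name, default_rank, rank_pattern):
    """Hypothetical resolver: return the first rank whose pattern key
    matches the layer name (full regex match or substring), else the default."""
    for pattern, rank in rank_pattern.items():
        if re.fullmatch(pattern, layer_name) or pattern in layer_name:
            return rank
    return default_rank

# Attention query/key projections get rank 4, everything else the default 8.
rank_pattern = {"to_q": 4, "to_k": 4}
assert resolve_rank("down_blocks.0.attn.to_q", 8, rank_pattern) == 4
assert resolve_rank("down_blocks.0.attn.to_out", 8, rank_pattern) == 8
```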
A base layer for LyCORIS-like adapters.
( safe_merge: bool = False, adapter_names: Optional[list[str]] = None )

Parameters

- safe_merge (bool, optional) — If True, the merge operation will be performed on a copy of the original weights and checked for NaNs before the weights are merged. This is useful if you want to check whether the merge operation will produce NaNs. Defaults to False.
- adapter_names (List[str], optional) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None.

Merge the active adapter weights into the base weights.
This method unmerges all merged adapter layers from the base weights.
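A minimal sketch of the merge/unmerge semantics described above: merging adds the adapter delta to the base weights (with `safe_merge` checking a copy for NaNs first), and unmerging subtracts it back out. These helper functions are illustrative stand-ins, not the PEFT layer methods.

```python
import numpy as np

def merge(base_w, delta_w, safe_merge=False):
    """Add the adapter delta to the base weights.

    With safe_merge=True, the sum is computed on a copy and checked for
    NaNs before being accepted, mirroring the safe_merge flag above.
    """
    merged = base_w + delta_w
    if safe_merge and np.isnan(merged).any():
        raise ValueError("NaNs detected in the merged weights")
    return merged

def unmerge(merged_w, delta_w):
    # Unmerging restores the base weights by subtracting the delta.
    return merged_w - delta_w

base = np.ones((2, 2))
delta = 0.5 * np.ones((2, 2))
merged = merge(base, delta, safe_merge=True)
restored = unmerge(merged, delta)
assert np.allclose(restored, base)
```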
( model, peft_config: Union[PeftConfig, dict[str, PeftConfig]], adapter_name: str, low_cpu_mem_usage: bool = False, state_dict: Optional[dict[str, torch.Tensor]] = None )

Parameters

- model (torch.nn.Module) — The model to be adapted.
- adapter_name (str) — The name of the adapter, defaults to "default".
- low_cpu_mem_usage (bool, optional, defaults to False) — Create empty adapter weights on meta device. Useful to speed up the loading process.

A base tuner for LyCORIS-like adapters.
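The tuner's core job is to walk the model, find the target submodules, and replace each with a wrapped version that adds a trainable delta on top of the frozen base weights. The toy classes below (`Linear`, `LycorisWrapper`, `inject`) are hypothetical stand-ins that sketch this injection pattern in plain numpy, not the PEFT tuner itself.

```python
import numpy as np

class Linear:
    """Toy frozen base layer: y = x @ W.T"""
    def __init__(self, w):
        self.w = w
    def __call__(self, x):
        return x @ self.w.T

class LycorisWrapper:
    """Hypothetical adapter layer: base output plus a low-rank delta.
    Factors start at zero so the wrapped layer is initially a no-op."""
    def __init__(self, base, d_out, d_in, r=2):
        self.base = base
        self.B = np.zeros((d_out, r))
        self.A = np.zeros((r, d_in))
    def __call__(self, x):
        return self.base(x) + x @ (self.B @ self.A).T

def inject(modules, target_names):
    # Replace every module whose name is targeted with a wrapper,
    # leaving the base weights themselves untouched.
    for name in list(modules):
        if name in target_names:
            base = modules[name]
            d_out, d_in = base.w.shape
            modules[name] = LycorisWrapper(base, d_out, d_in)
    return modules

modules = {"to_q": Linear(np.eye(3)), "to_out": Linear(np.eye(3))}
inject(modules, {"to_q"})
x = np.ones((1, 3))
assert np.allclose(modules["to_q"](x), x)  # identity at init: delta is zero
```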