# Diffusers

## Docs

- [Installation](https://huggingface.co/docs/diffusers/main/installation.md)
- [Quickstart](https://huggingface.co/docs/diffusers/main/quicktour.md)
- [Community Projects](https://huggingface.co/docs/diffusers/main/community_projects.md)
- [Basic performance](https://huggingface.co/docs/diffusers/main/stable_diffusion.md)
- [Diffusers](https://huggingface.co/docs/diffusers/main/index.md)
- [Outpainting](https://huggingface.co/docs/diffusers/main/advanced_inference/outpaint.md)
- [Overview](https://huggingface.co/docs/diffusers/main/modular_diffusers/overview.md)
- [ModularPipeline](https://huggingface.co/docs/diffusers/main/modular_diffusers/modular_pipeline.md)
- [ModularPipelineBlocks](https://huggingface.co/docs/diffusers/main/modular_diffusers/pipeline_block.md)
- [AutoPipelineBlocks](https://huggingface.co/docs/diffusers/main/modular_diffusers/auto_pipeline_blocks.md)
- [SequentialPipelineBlocks](https://huggingface.co/docs/diffusers/main/modular_diffusers/sequential_pipeline_blocks.md)
- [LoopSequentialPipelineBlocks](https://huggingface.co/docs/diffusers/main/modular_diffusers/loop_sequential_pipeline_blocks.md)
- [ComponentsManager](https://huggingface.co/docs/diffusers/main/modular_diffusers/components_manager.md)
- [Quickstart](https://huggingface.co/docs/diffusers/main/modular_diffusers/quickstart.md)
- [Guiders](https://huggingface.co/docs/diffusers/main/modular_diffusers/guiders.md)
- [States](https://huggingface.co/docs/diffusers/main/modular_diffusers/modular_diffusers_states.md)
- [Normalization layers](https://huggingface.co/docs/diffusers/main/api/normalization.md)
- [Configuration](https://huggingface.co/docs/diffusers/main/api/configuration.md)
- [Overview](https://huggingface.co/docs/diffusers/main/api/internal_classes_overview.md)
- [Logging](https://huggingface.co/docs/diffusers/main/api/logging.md)
- [Quantization](https://huggingface.co/docs/diffusers/main/api/quantization.md)
- [Parallelism](https://huggingface.co/docs/diffusers/main/api/parallel.md)
- [Activation functions](https://huggingface.co/docs/diffusers/main/api/activations.md)
- [VAE Image Processor](https://huggingface.co/docs/diffusers/main/api/image_processor.md)
- [Utilities](https://huggingface.co/docs/diffusers/main/api/utilities.md)
- [Outputs](https://huggingface.co/docs/diffusers/main/api/outputs.md)
- [Attention Processor](https://huggingface.co/docs/diffusers/main/api/attnprocessor.md)
- [Video Processor](https://huggingface.co/docs/diffusers/main/api/video_processor.md)
- [Caching methods](https://huggingface.co/docs/diffusers/main/api/cache.md)
- [Attend-and-Excite](https://huggingface.co/docs/diffusers/main/api/pipelines/attend_and_excite.md)
- [Value-guided planning](https://huggingface.co/docs/diffusers/main/api/pipelines/value_guided_sampling.md)
- [DeepFloyd IF](https://huggingface.co/docs/diffusers/main/api/pipelines/deepfloyd_if.md)
- [PixArt-Σ](https://huggingface.co/docs/diffusers/main/api/pipelines/pixart_sigma.md)
- [Latent Consistency Models](https://huggingface.co/docs/diffusers/main/api/pipelines/latent_consistency_models.md)
- [Dance Diffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/dance_diffusion.md)
- [Sana Sprint](https://huggingface.co/docs/diffusers/main/api/pipelines/sana_sprint.md)
- [AuraFlow](https://huggingface.co/docs/diffusers/main/api/pipelines/aura_flow.md)
- [AutoPipeline](https://huggingface.co/docs/diffusers/main/api/pipelines/auto_pipeline.md)
- [Lumina-T2X](https://huggingface.co/docs/diffusers/main/api/pipelines/lumina.md)
- [Qwenimage](https://huggingface.co/docs/diffusers/main/api/pipelines/qwenimage.md)
- [Latte](https://huggingface.co/docs/diffusers/main/api/pipelines/latte.md)
- [Self-Attention Guidance](https://huggingface.co/docs/diffusers/main/api/pipelines/self_attention_guidance.md)
- [unCLIP](https://huggingface.co/docs/diffusers/main/api/pipelines/unclip.md)
- [DiT](https://huggingface.co/docs/diffusers/main/api/pipelines/dit.md)
- [aMUSEd](https://huggingface.co/docs/diffusers/main/api/pipelines/amused.md)
- [Stable Audio](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_audio.md)
- [Pipelines](https://huggingface.co/docs/diffusers/main/api/pipelines/overview.md)
- [Kandinsky 2.1](https://huggingface.co/docs/diffusers/main/api/pipelines/kandinsky.md)
- [MusicLDM](https://huggingface.co/docs/diffusers/main/api/pipelines/musicldm.md)
- [DDIM](https://huggingface.co/docs/diffusers/main/api/pipelines/ddim.md)
- [Text-to-video](https://huggingface.co/docs/diffusers/main/api/pipelines/text_to_video.md)
- [Latent Diffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/latent_diffusion.md)
- [MultiDiffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/panorama.md)
- [Hunyuan Video](https://huggingface.co/docs/diffusers/main/api/pipelines/hunyuan_video.md)
- [Consisid](https://huggingface.co/docs/diffusers/main/api/pipelines/consisid.md)
- [ControlNet-XS](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnetxs.md)
- [ControlNet with Stable Diffusion XL](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_sdxl.md)
- [Image-to-Video Generation with PIA (Personalized Image Animator)](https://huggingface.co/docs/diffusers/main/api/pipelines/pia.md)
- [Stable Cascade](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_cascade.md)
- [Easyanimate](https://huggingface.co/docs/diffusers/main/api/pipelines/easyanimate.md)
- [Kandinsky 2.2](https://huggingface.co/docs/diffusers/main/api/pipelines/kandinsky_v22.md)
- [Perturbed-Attention Guidance](https://huggingface.co/docs/diffusers/main/api/pipelines/pag.md)
- [Chroma](https://huggingface.co/docs/diffusers/main/api/pipelines/chroma.md)
- [AudioLDM](https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm.md)
- [DiffEdit](https://huggingface.co/docs/diffusers/main/api/pipelines/diffedit.md)
- [ControlNet with Hunyuan-DiT](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_hunyuandit.md)
- [LEDITS++](https://huggingface.co/docs/diffusers/main/api/pipelines/ledits_pp.md)
- [Cosmos](https://huggingface.co/docs/diffusers/main/api/pipelines/cosmos.md)
- [ControlNet](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_sana.md)
- [Shap-E](https://huggingface.co/docs/diffusers/main/api/pipelines/shap_e.md)
- [Cogview4](https://huggingface.co/docs/diffusers/main/api/pipelines/cogview4.md)
- [Cogvideox](https://huggingface.co/docs/diffusers/main/api/pipelines/cogvideox.md)
- [InstructPix2Pix](https://huggingface.co/docs/diffusers/main/api/pipelines/pix2pix.md)
- [FluxControlInpaint](https://huggingface.co/docs/diffusers/main/api/pipelines/control_flux_inpaint.md)
- [Marigold Computer Vision](https://huggingface.co/docs/diffusers/main/api/pipelines/marigold.md)
- [Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis](https://huggingface.co/docs/diffusers/main/api/pipelines/kolors.md)
- [Flux](https://huggingface.co/docs/diffusers/main/api/pipelines/flux.md)
- [Allegro](https://huggingface.co/docs/diffusers/main/api/pipelines/allegro.md)
- [Consistency Models](https://huggingface.co/docs/diffusers/main/api/pipelines/consistency_models.md)
- [Stable unCLIP](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_unclip.md)
- [ControlNetUnion](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_union.md)
- [UniDiffuser](https://huggingface.co/docs/diffusers/main/api/pipelines/unidiffuser.md)
- [AudioLDM 2](https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm2.md)
- [Cogview3](https://huggingface.co/docs/diffusers/main/api/pipelines/cogview3.md)
- [ControlNet-XS with Stable Diffusion XL](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnetxs_sdxl.md)
- [Framepack](https://huggingface.co/docs/diffusers/main/api/pipelines/framepack.md)
- [Text-to-Video Generation with AnimateDiff](https://huggingface.co/docs/diffusers/main/api/pipelines/animatediff.md)
- [Paint by Example](https://huggingface.co/docs/diffusers/main/api/pipelines/paint_by_example.md)
- [ControlNet with Flux.1](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_flux.md)
- [Semantic Guidance](https://huggingface.co/docs/diffusers/main/api/pipelines/semantic_stable_diffusion.md)
- [Kandinsky 3](https://huggingface.co/docs/diffusers/main/api/pipelines/kandinsky3.md)
- [Ltx Video](https://huggingface.co/docs/diffusers/main/api/pipelines/ltx_video.md)
- [Mochi](https://huggingface.co/docs/diffusers/main/api/pipelines/mochi.md)
- [Bria 3.2](https://huggingface.co/docs/diffusers/main/api/pipelines/bria_3_2.md)
- [Würstchen](https://huggingface.co/docs/diffusers/main/api/pipelines/wuerstchen.md)
- [Text2Video-Zero](https://huggingface.co/docs/diffusers/main/api/pipelines/text_to_video_zero.md)
- [Hunyuan-DiT](https://huggingface.co/docs/diffusers/main/api/pipelines/hunyuandit.md)
- [ControlNet](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet.md)
- [PixArt-α](https://huggingface.co/docs/diffusers/main/api/pipelines/pixart.md)
- [Visualcloze](https://huggingface.co/docs/diffusers/main/api/pipelines/visualcloze.md)
- [I2VGen-XL](https://huggingface.co/docs/diffusers/main/api/pipelines/i2vgenxl.md)
- [Skyreels V2](https://huggingface.co/docs/diffusers/main/api/pipelines/skyreels_v2.md)
- [Hidream](https://huggingface.co/docs/diffusers/main/api/pipelines/hidream.md)
- [Sana](https://huggingface.co/docs/diffusers/main/api/pipelines/sana.md)
- [DDPM](https://huggingface.co/docs/diffusers/main/api/pipelines/ddpm.md)
- [Lumina2](https://huggingface.co/docs/diffusers/main/api/pipelines/lumina2.md)
- [Wan](https://huggingface.co/docs/diffusers/main/api/pipelines/wan.md)
- [BLIP-Diffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/blip_diffusion.md)
- [Omnigen](https://huggingface.co/docs/diffusers/main/api/pipelines/omnigen.md)
- [ControlNet with Stable Diffusion 3](https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_sd3.md)
- [Safe Stable Diffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_safe.md)
- [Image variation](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/image_variation.md)
- [Super-resolution](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/upscale.md)
- [Inpainting](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/inpaint.md)
- [Stable Diffusion pipelines](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/overview.md)
- [Stable Diffusion 3](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_3.md)
- [Text-to-image](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/text2img.md)
- [K-Diffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/k_diffusion.md)
- [Latent upscaler](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/latent_upscale.md)
- [Image-to-image](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/img2img.md)
- [Stable Diffusion 2](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_2.md)
- [SDXL Turbo](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/sdxl_turbo.md)
- [Stable Video Diffusion](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/svd.md)
- [GLIGEN (Grounded Language-to-Image Generation)](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/gligen.md)
- [T2I-Adapter](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/adapter.md)
- [Text-to-(RGB, depth)](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/ldm3d_diffusion.md)
- [Depth-to-image](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/depth2img.md)
- [Stable Diffusion XL](https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_xl.md)
- [Pipeline](https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline.md)
- [Components and configs](https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline_components.md)
- [Pipeline states](https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline_states.md)
- [Pipeline blocks](https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline_blocks.md)
- [Guiders](https://huggingface.co/docs/diffusers/main/api/modular_diffusers/guiders.md)
- [ConsisIDTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/consisid_transformer3d.md)
- [AutoencoderOobleck](https://huggingface.co/docs/diffusers/main/api/models/autoencoder_oobleck.md)
- [EasyAnimateTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/easyanimate_transformer3d.md)
- [UNet1DModel](https://huggingface.co/docs/diffusers/main/api/models/unet.md)
- [AutoencoderKLMagvit](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_magvit.md)
- [WanTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/wan_transformer_3d.md)
- [AuraFlowTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/aura_flow_transformer2d.md)
- [LatteTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/latte_transformer3d.md)
- [QwenImageTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/qwenimage_transformer2d.md)
- [UNet2DModel](https://huggingface.co/docs/diffusers/main/api/models/unet2d.md)
- [StableCascadeUNet](https://huggingface.co/docs/diffusers/main/api/models/stable_cascade_unet.md)
- [CogVideoXTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/cogvideox_transformer3d.md)
- [Lumina2Transformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/lumina2_transformer2d.md)
- [AutoencoderKLAllegro](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_allegro.md)
- [VQModel](https://huggingface.co/docs/diffusers/main/api/models/vq.md)
- [Models](https://huggingface.co/docs/diffusers/main/api/models/overview.md)
- [HunyuanVideoTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/hunyuan_video_transformer_3d.md)
- [AutoencoderDC](https://huggingface.co/docs/diffusers/main/api/models/autoencoder_dc.md)
- [TransformerTemporalModel](https://huggingface.co/docs/diffusers/main/api/models/transformer_temporal.md)
- [SparseControlNetModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet_sparsectrl.md)
- [PriorTransformer](https://huggingface.co/docs/diffusers/main/api/models/prior_transformer.md)
- [LTXVideoTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/ltx_video_transformer3d.md)
- [CosmosTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/cosmos_transformer3d.md)
- [CogView4Transformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/cogview4_transformer2d.md)
- [HunyuanDiT2DControlNetModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet_hunyuandit.md)
- [HiDreamImageTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/hidream_image_transformer.md)
- [AutoencoderKLMochi](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_mochi.md)
- [UNet2DConditionModel](https://huggingface.co/docs/diffusers/main/api/models/unet2d-cond.md)
- [SanaControlNetModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet_sana.md)
- [UNet3DConditionModel](https://huggingface.co/docs/diffusers/main/api/models/unet3d-cond.md)
- [AutoencoderKLCogVideoX](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_cogvideox.md)
- [MochiTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/mochi_transformer3d.md)
- [HunyuanDiT2DModel](https://huggingface.co/docs/diffusers/main/api/models/hunyuan_transformer2d.md)
- [ChromaTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/chroma_transformer.md)
- [AsymmetricAutoencoderKL](https://huggingface.co/docs/diffusers/main/api/models/asymmetricautoencoderkl.md)
- [ControlNetUnionModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet_union.md)
- [AutoencoderKLCosmos](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_cosmos.md)
- [UVit2DModel](https://huggingface.co/docs/diffusers/main/api/models/uvit2d.md)
- [SanaTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/sana_transformer2d.md)
- [AutoencoderKLQwenImage](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_qwenimage.md)
- [DiTTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/dit_transformer2d.md)
- [Tiny AutoEncoder](https://huggingface.co/docs/diffusers/main/api/models/autoencoder_tiny.md)
- [AutoencoderKL](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl.md)
- [FluxControlNetModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet_flux.md)
- [AutoencoderKLHunyuanVideo](https://huggingface.co/docs/diffusers/main/api/models/autoencoder_kl_hunyuan_video.md)
- [StableAudioDiTModel](https://huggingface.co/docs/diffusers/main/api/models/stable_audio_transformer.md)
- [PixArtTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/pixart_transformer2d.md)
- [SD3 Transformer Model](https://huggingface.co/docs/diffusers/main/api/models/sd3_transformer2d.md)
- [CogView3PlusTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/cogview3plus_transformer2d.md)
- [LuminaNextDiT2DModel](https://huggingface.co/docs/diffusers/main/api/models/lumina_nextdit2d.md)
- [ControlNetModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet.md)
- [OmniGenTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/omnigen_transformer.md)
- [AutoencoderKLLTXVideo](https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_ltx_video.md)
- [SkyReelsV2Transformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/skyreels_v2_transformer_3d.md)
- [FluxTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/flux_transformer.md)
- [AutoencoderKLWan](https://huggingface.co/docs/diffusers/main/api/models/autoencoder_kl_wan.md)
- [UNetMotionModel](https://huggingface.co/docs/diffusers/main/api/models/unet-motion.md)
- [AllegroTransformer3DModel](https://huggingface.co/docs/diffusers/main/api/models/allegro_transformer3d.md)
- [BriaTransformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/bria_transformer.md)
- [Transformer2DModel](https://huggingface.co/docs/diffusers/main/api/models/transformer2d.md)
- [SD3ControlNetModel](https://huggingface.co/docs/diffusers/main/api/models/controlnet_sd3.md)
- [Consistency Decoder](https://huggingface.co/docs/diffusers/main/api/models/consistency_decoder_vae.md)
- [AutoModel](https://huggingface.co/docs/diffusers/main/api/models/auto_model.md)
- [PNDMScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/pndm.md)
- [RePaintScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/repaint.md)
- [Schedulers](https://huggingface.co/docs/diffusers/main/api/schedulers/overview.md)
- [DDIMScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/ddim.md)
- [UniPCMultistepScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/unipc.md)
- [IPNDMScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/ipndm.md)
- [Latent Consistency Model Multistep Scheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/lcm.md)
- [DPMSolverMultistepInverse](https://huggingface.co/docs/diffusers/main/api/schedulers/multistep_dpm_solver_inverse.md)
- [VQDiffusionScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/vq_diffusion.md)
- [CMStochasticIterativeScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/cm_stochastic_iterative.md)
- [ScoreSdeVpScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/score_sde_vp.md)
- [FlowMatchEulerDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/flow_match_euler_discrete.md)
- [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/euler.md)
- [DPMSolverMultistepScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/multistep_dpm_solver.md)
- [HeunDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/heun.md)
- [EulerAncestralDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/euler_ancestral.md)
- [CogVideoXDPMScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/multistep_dpm_solver_cogvideox.md)
- [EDMEulerScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/edm_euler.md)
- [EDMDPMSolverMultistepScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/edm_multistep_dpm_solver.md)
- [CosineDPMSolverMultistepScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/cosine_dpm.md)
- [KDPM2AncestralDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/dpm_discrete_ancestral.md)
- [DEISMultistepScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/deis.md)
- [KDPM2DiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/dpm_discrete.md)
- [FlowMatchHeunDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/flow_match_heun_discrete.md)
- [LMSDiscreteScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/lms_discrete.md)
- [KarrasVeScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/stochastic_karras_ve.md)
- [ScoreSdeVeScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/score_sde_ve.md)
- [CogVideoXDDIMScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/ddim_cogvideox.md)
- [DDIMInverseScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/ddim_inverse.md)
- [DDPMScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/ddpm.md)
- [TCDScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/tcd.md)
- [DPMSolverSDEScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/dpm_sde.md)
- [ConsistencyDecoderScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/consistency_decoder.md)
- [DPMSolverSinglestepScheduler](https://huggingface.co/docs/diffusers/main/api/schedulers/singlestep_dpm_solver.md)
- [IP-Adapter](https://huggingface.co/docs/diffusers/main/api/loaders/ip_adapter.md)
- [UNet](https://huggingface.co/docs/diffusers/main/api/loaders/unet.md)
- [LoRA](https://huggingface.co/docs/diffusers/main/api/loaders/lora.md)
- [Textual Inversion](https://huggingface.co/docs/diffusers/main/api/loaders/textual_inversion.md)
- [PEFT](https://huggingface.co/docs/diffusers/main/api/loaders/peft.md)
- [SD3Transformer2D](https://huggingface.co/docs/diffusers/main/api/loaders/transformer_sd3.md)
- [Single files](https://huggingface.co/docs/diffusers/main/api/loaders/single_file.md)
- [DreamBooth](https://huggingface.co/docs/diffusers/main/training/dreambooth.md)
- [LoRA](https://huggingface.co/docs/diffusers/main/training/lora.md)
- [Distributed inference](https://huggingface.co/docs/diffusers/main/training/distributed_inference.md)
- [Overview](https://huggingface.co/docs/diffusers/main/training/overview.md)
- [Kandinsky 2.2](https://huggingface.co/docs/diffusers/main/training/kandinsky.md)
- [Text-to-image](https://huggingface.co/docs/diffusers/main/training/text2image.md)
- [Unconditional image generation](https://huggingface.co/docs/diffusers/main/training/unconditional_training.md)
- [Latent Consistency Distillation](https://huggingface.co/docs/diffusers/main/training/lcm_distill.md)
- [Custom Diffusion](https://huggingface.co/docs/diffusers/main/training/custom_diffusion.md)
- [CogVideoX](https://huggingface.co/docs/diffusers/main/training/cogvideox.md)
- [Reinforcement learning training with DDPO](https://huggingface.co/docs/diffusers/main/training/ddpo.md)
- [Create a dataset for training](https://huggingface.co/docs/diffusers/main/training/create_dataset.md)
- [InstructPix2Pix](https://huggingface.co/docs/diffusers/main/training/instructpix2pix.md)
- [Wuerstchen](https://huggingface.co/docs/diffusers/main/training/wuerstchen.md)
- [Textual Inversion](https://huggingface.co/docs/diffusers/main/training/text_inversion.md)
- [T2I-Adapter](https://huggingface.co/docs/diffusers/main/training/t2i_adapters.md)
- [Stable Diffusion XL](https://huggingface.co/docs/diffusers/main/training/sdxl.md)
- [ControlNet](https://huggingface.co/docs/diffusers/main/training/controlnet.md)
- [Adapt a model to a new task](https://huggingface.co/docs/diffusers/main/training/adapt_a_model.md)
- [Hybrid Inference](https://huggingface.co/docs/diffusers/main/hybrid_inference/overview.md)
- [Getting Started: VAE Encode with Hybrid Inference](https://huggingface.co/docs/diffusers/main/hybrid_inference/vae_encode.md)
- [Getting Started: VAE Decode with Hybrid Inference](https://huggingface.co/docs/diffusers/main/hybrid_inference/vae_decode.md)
- [Hybrid Inference API Reference](https://huggingface.co/docs/diffusers/main/hybrid_inference/api_reference.md)
- [Getting started](https://huggingface.co/docs/diffusers/main/quantization/overview.md)
- [bitsandbytes](https://huggingface.co/docs/diffusers/main/quantization/bitsandbytes.md)
- [Quanto](https://huggingface.co/docs/diffusers/main/quantization/quanto.md)
- [NVIDIA ModelOpt](https://huggingface.co/docs/diffusers/main/quantization/modelopt.md)
- [GGUF](https://huggingface.co/docs/diffusers/main/quantization/gguf.md)
- [torchao](https://huggingface.co/docs/diffusers/main/quantization/torchao.md)
- [DreamBooth](https://huggingface.co/docs/diffusers/main/using-diffusers/dreambooth.md)
- [IP-Adapter](https://huggingface.co/docs/diffusers/main/using-diffusers/ip_adapter.md)
- [Pipeline callbacks](https://huggingface.co/docs/diffusers/main/using-diffusers/callback.md)
- [Schedulers](https://huggingface.co/docs/diffusers/main/using-diffusers/schedulers.md)
- [Prompting](https://huggingface.co/docs/diffusers/main/using-diffusers/weighted_prompts.md)
- [Inpainting](https://huggingface.co/docs/diffusers/main/using-diffusers/inpaint.md)
- [Kandinsky](https://huggingface.co/docs/diffusers/main/using-diffusers/kandinsky.md)
- [ConsisID](https://huggingface.co/docs/diffusers/main/using-diffusers/consisid.md)
- [Create a server](https://huggingface.co/docs/diffusers/main/using-diffusers/create_a_server.md)
- [Controlled generation](https://huggingface.co/docs/diffusers/main/using-diffusers/controlling_generation.md)
- [Understanding pipelines, models and schedulers](https://huggingface.co/docs/diffusers/main/using-diffusers/write_own_pipeline.md)
- [Perturbed-Attention Guidance](https://huggingface.co/docs/diffusers/main/using-diffusers/pag.md)
- [DiffEdit](https://huggingface.co/docs/diffusers/main/using-diffusers/diffedit.md)
- [Marigold Computer Vision](https://huggingface.co/docs/diffusers/main/using-diffusers/marigold_usage.md)
- [Model formats](https://huggingface.co/docs/diffusers/main/using-diffusers/other-formats.md)
- [Sharing pipelines and models](https://huggingface.co/docs/diffusers/main/using-diffusers/push_to_hub.md)
- [Text-to-image](https://huggingface.co/docs/diffusers/main/using-diffusers/conditional_image_generation.md)
- [Image-to-image](https://huggingface.co/docs/diffusers/main/using-diffusers/img2img.md)
- [Reproducibility](https://huggingface.co/docs/diffusers/main/using-diffusers/reusing_seeds.md)
- [Stable Diffusion XL Turbo](https://huggingface.co/docs/diffusers/main/using-diffusers/sdxl_turbo.md)
- [Stable Video Diffusion](https://huggingface.co/docs/diffusers/main/using-diffusers/svd.md)
- [T2I-Adapter](https://huggingface.co/docs/diffusers/main/using-diffusers/t2i_adapter.md)
- [Batch inference](https://huggingface.co/docs/diffusers/main/using-diffusers/batched_inference.md)
- [Community pipelines and components](https://huggingface.co/docs/diffusers/main/using-diffusers/custom_pipeline_overview.md)
- [Unconditional image generation](https://huggingface.co/docs/diffusers/main/using-diffusers/unconditional_image_generation.md)
- [Video generation](https://huggingface.co/docs/diffusers/main/using-diffusers/text-img2vid.md)
- [FreeU](https://huggingface.co/docs/diffusers/main/using-diffusers/image_quality.md)
- [Stable Diffusion XL](https://huggingface.co/docs/diffusers/main/using-diffusers/sdxl.md)
- [DiffusionPipeline](https://huggingface.co/docs/diffusers/main/using-diffusers/loading.md)
- [Shap-E](https://huggingface.co/docs/diffusers/main/using-diffusers/shap-e.md)
- [ControlNet](https://huggingface.co/docs/diffusers/main/using-diffusers/controlnet.md)
- [Latent Consistency Model](https://huggingface.co/docs/diffusers/main/using-diffusers/inference_with_lcm.md)
- [Trajectory Consistency Distillation-LoRA](https://huggingface.co/docs/diffusers/main/using-diffusers/inference_with_tcd_lora.md)
- [Text-guided depth-to-image generation](https://huggingface.co/docs/diffusers/main/using-diffusers/depth2img.md)
- [OmniGen](https://huggingface.co/docs/diffusers/main/using-diffusers/omnigen.md)
- [Textual Inversion](https://huggingface.co/docs/diffusers/main/using-diffusers/textual_inversion_inference.md)
- [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/conceptual/evaluation.md)
- [Philosophy](https://huggingface.co/docs/diffusers/main/conceptual/philosophy.md)
- [How to contribute to Diffusers 🧨](https://huggingface.co/docs/diffusers/main/conceptual/contribution.md)
- [🧨 Diffusers’ Ethical Guidelines](https://huggingface.co/docs/diffusers/main/conceptual/ethical_guidelines.md)
- [LoRA](https://huggingface.co/docs/diffusers/main/tutorials/using_peft_for_inference.md)
- [AutoPipeline](https://huggingface.co/docs/diffusers/main/tutorials/autopipeline.md)
- [Train a diffusion model](https://huggingface.co/docs/diffusers/main/tutorials/basic_training.md)
- [Intel Gaudi](https://huggingface.co/docs/diffusers/main/optimization/habana.md)
- [Token merging](https://huggingface.co/docs/diffusers/main/optimization/tome.md)
- [xDiT](https://huggingface.co/docs/diffusers/main/optimization/xdit.md)
- [DeepCache](https://huggingface.co/docs/diffusers/main/optimization/deepcache.md)
- [xFormers](https://huggingface.co/docs/diffusers/main/optimization/xformers.md)
- [Accelerate inference](https://huggingface.co/docs/diffusers/main/optimization/fp16.md)
- [Pruna](https://huggingface.co/docs/diffusers/main/optimization/pruna.md)
- [How to run Stable Diffusion with Core ML](https://huggingface.co/docs/diffusers/main/optimization/coreml.md)
- [T-GATE](https://huggingface.co/docs/diffusers/main/optimization/tgate.md)
- [CacheDiT](https://huggingface.co/docs/diffusers/main/optimization/cache_dit.md)
- [Compiling and offloading quantized models](https://huggingface.co/docs/diffusers/main/optimization/speed-memory-optims.md)
- [Attention backends](https://huggingface.co/docs/diffusers/main/optimization/attention_backends.md)
- [ParaAttention](https://huggingface.co/docs/diffusers/main/optimization/para_attn.md)
- [Metal Performance Shaders (MPS)](https://huggingface.co/docs/diffusers/main/optimization/mps.md)
- [AWS Neuron](https://huggingface.co/docs/diffusers/main/optimization/neuron.md)
- [ONNX Runtime](https://huggingface.co/docs/diffusers/main/optimization/onnx.md)
- [Caching](https://huggingface.co/docs/diffusers/main/optimization/cache.md)
- [Reduce memory usage](https://huggingface.co/docs/diffusers/main/optimization/memory.md)
- [OpenVINO](https://huggingface.co/docs/diffusers/main/optimization/open_vino.md)

### Installation
https://huggingface.co/docs/diffusers/main/installation.md

# Installation

Diffusers is tested on Python 3.8+ and PyTorch 1.4+. Install [PyTorch](https://pytorch.org/get-started/locally/) according to your system and setup.

Create a [virtual environment](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) for easier management of separate projects and to avoid compatibility issues between dependencies. Use [uv](https://docs.astral.sh/uv/), a Rust-based Python package and project manager, to create a virtual environment and install Diffusers.

```bash
uv venv my-env
source my-env/bin/activate
```

Install Diffusers with one of the following methods.

<hfoptions id="install">
<hfoption id="pip">

PyTorch only supports Python 3.8 - 3.11 on Windows.

```bash
uv pip install diffusers["torch"] transformers
```

</hfoption>
<hfoption id="conda">

```bash
conda install -c conda-forge diffusers
```

</hfoption>
<hfoption id="source">

A source install installs the `main` version instead of the latest `stable` version. The `main` version is useful for staying updated with the latest changes but it may not always be stable. If you run into a problem, open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose) and we will try to resolve it as soon as possible.

Make sure [Accelerate](https://huggingface.co/docs/accelerate/index) is installed.

```bash
uv pip install accelerate
```

Install Diffusers from source with the command below.

```bash
uv pip install git+https://github.com/huggingface/diffusers
```

</hfoption>
</hfoptions>
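
You can quickly check that the install worked by importing Diffusers and printing its version (a simple sanity check).

```bash
python -c "import diffusers; print(diffusers.__version__)"
```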

## Editable install

An editable install is recommended for development workflows or if you're using the `main` version of the source code. A special link is created between the cloned repository and the Python library paths. This avoids reinstalling a package after every change.

Clone the repository and install Diffusers with the following commands.

```bash
git clone https://github.com/huggingface/diffusers.git
cd diffusers
uv pip install -e ".[torch]"
```

> [!WARNING]
> You must keep the `diffusers` folder if you want to keep using the library with the editable install.

Update your cloned repository to the latest version of Diffusers with the command below.

```bash
cd ~/diffusers/
git pull
```

## Cache

Model weights and files are downloaded from the Hub to a cache, usually located at `~/.cache/huggingface/hub` in your home directory. Change the cache location with the [HF_HOME](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhome) or [HF_HUB_CACHE](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhubcache) environment variables, or by configuring the `cache_dir` parameter in methods like [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

<hfoptions id="cache">
<hfoption id="env variable">

```bash
export HF_HOME="/path/to/your/cache"
export HF_HUB_CACHE="/path/to/your/hub/cache"
```

</hfoption>
<hfoption id="from_pretrained">

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    cache_dir="/path/to/your/cache"
)
```

</hfoption>
</hfoptions>

Cached files allow you to use Diffusers offline. Set the [HF_HUB_OFFLINE](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhuboffline) environment variable to `1` to prevent Diffusers from connecting to the internet.

```shell
export HF_HUB_OFFLINE=1
```

For more details about managing and cleaning the cache, take a look at the [Understand caching](https://huggingface.co/docs/huggingface_hub/guides/manage-cache) guide.
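
For a quick look from the command line, the `huggingface-cli scan-cache` and `huggingface-cli delete-cache` commands (installed with the `huggingface_hub` dependency) summarize and clean the cache. A brief sketch of typical usage:

```bash
# print a per-repo summary of what is cached and how much disk space it uses
huggingface-cli scan-cache

# interactively select cached revisions to delete
huggingface-cli delete-cache
```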

## Telemetry logging

Diffusers gathers telemetry information during [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) requests.
The data gathered includes the Diffusers and PyTorch version, the requested model or pipeline class,
and the path to a pretrained checkpoint if it is hosted on the Hub.

This usage data helps us debug issues and prioritize new features.
Telemetry is only sent when loading models and pipelines from the Hub,
and it is not collected if you're loading local files.

Opt-out and disable telemetry collection with the [HF_HUB_DISABLE_TELEMETRY](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhubdisabletelemetry) environment variable.

<hfoptions id="telemetry">
<hfoption id="Linux/macOS">

```bash
export HF_HUB_DISABLE_TELEMETRY=1
```

</hfoption>
<hfoption id="Windows">

```bash
set HF_HUB_DISABLE_TELEMETRY=1
```

</hfoption>
</hfoptions>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/installation.md" />

### Quickstart
https://huggingface.co/docs/diffusers/main/quicktour.md

# Quickstart

Diffusers is a library for developers and researchers that provides an easy inference API for generating images, videos and audio, as well as the building blocks for implementing new workflows.

Diffusers provides many optimizations out-of-the-box that make it possible to load and run large models on setups with limited memory or to accelerate inference.

This Quickstart will give you an overview of Diffusers and get you up and generating quickly.

> [!TIP]
> Before you begin, make sure you have a Hugging Face [account](https://huggingface.co/join) in order to use gated models like [Flux](https://huggingface.co/black-forest-labs/FLUX.1-dev).

Follow the [Installation](./installation) guide to install Diffusers if it's not already installed.

## DiffusionPipeline

A diffusion model combines multiple components to generate outputs in any modality based on an input, such as a text description, image or both.

For a standard text-to-image model:

1. A text encoder turns a prompt into embeddings that guide the denoising process. Some models have more than one text encoder.
2. A scheduler contains the algorithmic specifics for gradually denoising initial random noise into clean outputs. Different schedulers affect generation speed and quality.
3. A UNet or diffusion transformer (DiT) is the workhorse of a diffusion model.

   At each step, it performs the denoising predictions, such as how much noise to remove or the general direction in which to steer the noise to generate better quality outputs.

   The UNet or DiT repeats this loop for a set amount of steps to generate the final output.

4. A variational autoencoder (VAE) encodes and decodes pixels to a spatially compressed latent space. *Latents* are compressed representations of an image and are more efficient to work with. The UNet or DiT operates on latents, and the clean latents at the end are decoded back into images.

The [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) packages all these components into a single class for inference. There are several arguments in `__call__()` you can change, such as `num_inference_steps`, that affect the diffusion process. Try different values and arguments to see how they change generation quality or speed.
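
As a quick illustration, you can load a pipeline and inspect its `components` property to see the modules it bundles. This is a minimal sketch; the exact component names vary by model.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
# components maps names like "transformer", "vae", and "scheduler" to the loaded modules
print(pipeline.components.keys())
```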

Load a model with [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) and describe what you'd like to generate. The example below uses the default argument values.

<hfoptions id="diffusionpipeline">
<hfoption id="text-to-image">

Use `.images[0]` to access the generated image output.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
```

</hfoption>
<hfoption id="text-to-video">

Use `.frames[0]` to access the generated video output and [export_to_video()](/docs/diffusers/main/en/api/utilities#diffusers.utils.export_to_video) to save the video.

```py
import torch
from diffusers import AutoencoderKLWan, DiffusionPipeline
from diffusers.utils import export_to_video

vae = AutoencoderKLWan.from_pretrained(
  "Wan-AI/Wan2.2-T2V-A14B-Diffusers",
  subfolder="vae",
  torch_dtype=torch.float32
)
pipeline = DiffusionPipeline.from_pretrained(
  "Wan-AI/Wan2.2-T2V-A14B-Diffusers",
  vae=vae,
  torch_dtype=torch.bfloat16,
  device_map="cuda"
)

prompt = """
Cinematic video of a sleek cat lounging on a colorful inflatable in a crystal-clear turquoise pool in Palm Springs, 
sipping a salt-rimmed margarita through a straw. Golden-hour sunlight glows over mid-century modern homes and swaying palms. 
Shot in rich Sony a7S III: with moody, glamorous color grading, subtle lens flares, and soft vintage film grain. 
Ripples shimmer as a warm desert breeze stirs the water, blending luxury and playful charm in an epic, gorgeously composed frame.
"""
video = pipeline(prompt=prompt, num_frames=81, num_inference_steps=40).frames[0]
export_to_video(video, "output.mp4", fps=16)
```

</hfoption>
</hfoptions>
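
Call arguments are a quick way to trade speed for quality, and a seeded `torch.Generator` makes a run reproducible. The values below are illustrative rather than tuned recommendations.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California"
# fewer steps run faster but may reduce quality; a fixed seed makes the result repeatable
image = pipeline(
  prompt,
  num_inference_steps=30,
  generator=torch.Generator("cuda").manual_seed(0),
).images[0]
```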

## LoRA

Adapters insert a small number of trainable parameters into the original base model. Only the inserted parameters are fine-tuned while the rest of the model weights remain frozen. This makes it fast and cheap to fine-tune a model on a new style. Among adapters, [LoRAs](./tutorials/using_peft_for_inference) are the most popular.

Add a LoRA to a pipeline with the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.QwenImageLoraLoaderMixin.load_lora_weights) method. Some LoRAs require a special trigger word to activate them, such as `Realism` in the example below. Check a LoRA's model card to see if it requires a trigger word.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
pipeline.load_lora_weights(
  "flymy-ai/qwen-image-realism-lora",
)

prompt = """
super Realism cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
```
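
After loading a LoRA, you can also name it, adjust its strength, or remove it to return to the base model. The sketch below assumes a freshly loaded pipeline, and `realism` is an arbitrary adapter name chosen for illustration.

```py
# assign an adapter name when loading so it can be referenced later
pipeline.load_lora_weights("flymy-ai/qwen-image-realism-lora", adapter_name="realism")

# scale the adapter's influence down to 80%
pipeline.set_adapters(["realism"], adapter_weights=[0.8])

# remove the LoRA entirely to restore the base model
pipeline.unload_lora_weights()
```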

Check out the [LoRA](./tutorials/using_peft_for_inference) docs or Adapters section to learn more.

## Quantization

[Quantization](./quantization/overview) stores data in fewer bits to reduce memory usage. It may also speed up inference because it takes less time to perform calculations with fewer bits.

Diffusers provides several quantization backends and picking one depends on your use case. For example, [bitsandbytes](./quantization/bitsandbytes) and [torchao](./quantization/torchao) are both simple and easy to use for inference, but torchao supports more [quantization types](./quantization/torchao#supported-quantization-types) like fp8.

Configure [PipelineQuantizationConfig](/docs/diffusers/main/en/api/quantization#diffusers.PipelineQuantizationConfig) with the backend to use, the specific arguments (refer to the [API](./api/quantization) reference for available arguments) for that backend, and which components to quantize. The example below quantizes the model to 4-bits and only uses 14.93GB of memory.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

quant_config = PipelineQuantizationConfig(
  quant_backend="bitsandbytes_4bit",
  quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
  components_to_quantize=["transformer", "text_encoder"],
)
pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image",
  torch_dtype=torch.bfloat16,
  quantization_config=quant_config,
  device_map="cuda"
)

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```

Take a look at the [Quantization](./quantization/overview) section for more details.

## Optimizations

> [!TIP]
> Optimization is dependent on hardware specs such as memory. Use this [Space](https://huggingface.co/spaces/diffusers/optimized-diffusers-code) to generate code examples that include all of Diffusers' available memory and speed optimization techniques for any model you're using.

Modern diffusion models are very large and have billions of parameters. The iterative denoising process is also computationally intensive and slow. Diffusers provides techniques for reducing memory usage and boosting inference speed. These techniques can be combined with quantization to optimize for both memory usage and inference speed.

### Memory usage

The text encoders and UNet or DiT can use up as much as ~30GB of memory, exceeding the amount available on many free-tier or consumer GPUs.

Offloading keeps weights that aren't currently in use on the CPU and only moves them to the GPU when they're needed. There are a few offloading types, and the example below uses [model offloading](./optimization/memory#model-offloading). This moves an entire model, like a text encoder or transformer, to the CPU when it isn't actively being used.

Call [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) to activate it. Offloading manages device placement itself, so don't combine it with `device_map`. By combining quantization and offloading, the following example only requires ~12.54GB of memory.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

quant_config = PipelineQuantizationConfig(
  quant_backend="bitsandbytes_4bit",
  quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
  components_to_quantize=["transformer", "text_encoder"],
)
pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image",
  torch_dtype=torch.bfloat16,
  quantization_config=quant_config,
)
pipeline.enable_model_cpu_offload()

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```
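
If memory is still tight, sequential CPU offloading moves individual submodules instead of whole models. It saves more memory than model offloading but is also much slower. A minimal sketch; like model offloading, it shouldn't be combined with `device_map`.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
# moves submodules to the GPU one at a time as they are needed, then back to the CPU
pipeline.enable_sequential_cpu_offload()
```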

Refer to the [Reduce memory usage](./optimization/memory) docs to learn more about other memory reducing techniques.

### Inference speed

The denoising loop performs a lot of computations and can be slow. Methods like [torch.compile](./optimization/fp16#torchcompile) increase inference speed by compiling the computations into an optimized kernel. Compilation is slow for the first generation, but successive generations should be much faster.

The example below uses [regional compilation](./optimization/fp16#regional-compilation) to only compile small regions of a model. It reduces cold-start latency while also providing a runtime speed up.

Call [compile_repeated_blocks()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.compile_repeated_blocks) on the model to activate it.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)

pipeline.transformer.compile_repeated_blocks(
    fullgraph=True,
)
prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
```
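
For a potentially larger speedup at the cost of a longer first-run compilation, you can also compile the entire transformer with `torch.compile`. The sketch below uses `mode="max-autotune"` as an illustrative choice.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
# compile the diffusion transformer; the first generation is slow, later ones are faster
pipeline.transformer = torch.compile(
  pipeline.transformer, mode="max-autotune", fullgraph=True
)
```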

Check out the [Accelerate inference](./optimization/fp16) or [Caching](./optimization/cache) docs for more methods that speed up inference.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quicktour.md" />

### Community Projects
https://huggingface.co/docs/diffusers/main/community_projects.md

# Community Projects

Welcome to Community Projects. This space is dedicated to showcasing the incredible work and innovative applications created by our vibrant community using the `diffusers` library.

This section aims to:

- Highlight diverse and inspiring projects built with `diffusers`
- Foster knowledge sharing within our community
- Provide real-world examples of how `diffusers` can be leveraged

Happy exploring, and thank you for being part of the Diffusers community!

<table>
    <tr>
        <th>Project Name</th>
        <th>Description</th>
    </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/carson-katri/dream-textures"> dream-textures </a></td>
    <td>Stable Diffusion built-in to Blender</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/megvii-research/HiDiffusion"> HiDiffusion </a></td>
    <td>Increases the resolution and speed of your diffusion model by only adding a single line of code</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/lllyasviel/IC-Light"> IC-Light </a></td>
    <td>IC-Light is a project to manipulate the illumination of images</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/InstantID/InstantID"> InstantID </a></td>
    <td>InstantID : Zero-shot Identity-Preserving Generation in Seconds</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/Sanster/IOPaint"> IOPaint </a></td>
    <td>Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace anything in your pictures (powered by Stable Diffusion).</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/bmaltais/kohya_ss"> Kohya </a></td>
    <td>Gradio GUI for Kohya's Stable Diffusion trainers</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/magic-research/magic-animate"> MagicAnimate </a></td>
    <td>MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/levihsu/OOTDiffusion"> OOTDiffusion </a></td>
    <td>Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/vladmandic/automatic"> SD.Next </a></td>
    <td>SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/ashawkey/stable-dreamfusion"> stable-dreamfusion </a></td>
    <td>Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/HVision-NKU/StoryDiffusion"> StoryDiffusion </a></td>
    <td>StoryDiffusion can create a magic story by generating consistent images and videos.</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/cumulo-autumn/StreamDiffusion"> StreamDiffusion </a></td>
    <td>A Pipeline-Level Solution for Real-Time Interactive Generation</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/Netwrck/stable-diffusion-server"> Stable Diffusion Server </a></td>
    <td>A server configured for Inpainting/Generation/img2img with one stable diffusion model</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/suzukimain/auto_diffusers"> Model Search </a></td>
    <td>Search models on Civitai and Hugging Face</td>
  </tr>
  <tr style="border-top: 2px solid black">
    <td><a href="https://github.com/beinsezii/skrample"> Skrample </a></td>
    <td>Fully modular scheduler functions with 1st class diffusers integration.</td>
  </tr>
</table>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/community_projects.md" />

### Basic performance
https://huggingface.co/docs/diffusers/main/stable_diffusion.md

# Basic performance

Diffusion is a random process that is computationally demanding. You may need to run the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) several times before getting a desired output. That's why it's important to carefully balance generation speed and memory usage in order to iterate faster.

This guide recommends some basic performance tips for using the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Refer to the Inference Optimization section, such as the [Accelerate inference](./optimization/fp16) or [Reduce memory usage](./optimization/memory) docs, for more detailed performance guides.

## Memory usage

Reducing the amount of memory used indirectly speeds up generation and can help a model fit on device.

The [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) method moves a model to the CPU when it is not in use to save GPU memory. Because it manages device placement itself, don't combine it with `device_map`.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.bfloat16,
)
pipeline.enable_model_cpu_offload()

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```

## Inference speed

Denoising is the most computationally demanding process during diffusion. Methods that optimize this process accelerate inference. Try the following methods for a speed up.

- Add `device_map="cuda"` to place the pipeline on a GPU. Placing a model on an accelerator, like a GPU, increases speed because it performs computations in parallel.
- Set `torch_dtype=torch.bfloat16` to execute the pipeline in half-precision. Reducing the data type precision increases speed because it takes less time to perform computations in a lower precision.

```py
import torch
import time
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.bfloat16,
  device_map="cuda
)
```

- Use a faster scheduler, such as [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler), which only requires ~20-25 steps.
- Set `num_inference_steps` to a lower value. Reducing the number of inference steps reduces the overall number of computations. However, this can result in lower generation quality.

```py
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""

start_time = time.perf_counter()
image = pipeline(prompt).images[0]
end_time = time.perf_counter()

print(f"Image generation took {end_time - start_time:.3f} seconds")
```
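
With the faster scheduler in place, you can also lower `num_inference_steps` to cut generation time further. The value below is illustrative; quality may drop if you go too low.

```py
# ~20-25 steps is usually enough for DPMSolverMultistepScheduler
image = pipeline(prompt, num_inference_steps=25).images[0]
```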

## Generation quality

Many modern diffusion models deliver high-quality images out-of-the-box. However, you can still improve generation quality by trying the following.

- Try a more detailed and descriptive prompt. Include details such as the image medium, subject, style, and aesthetic. A negative prompt may also help by guiding the model away from undesirable features described with words like low quality or blurry.

    ```py
    import torch
    from diffusers import DiffusionPipeline

    pipeline = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.bfloat16,
        device_map="cuda"
    )

    prompt = """
    cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
    highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
    """
    negative_prompt = "low quality, blurry, ugly, poor details"
    pipeline(prompt, negative_prompt=negative_prompt).images[0]
    ```

    For more details about creating better prompts, take a look at the [Prompt techniques](./using-diffusers/weighted_prompts) doc.

- Try a different scheduler, like [HeunDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/heun#diffusers.HeunDiscreteScheduler) or [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), that trades generation speed for quality.

    ```py
    import torch
    from diffusers import DiffusionPipeline, HeunDiscreteScheduler

    pipeline = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.bfloat16,
        device_map="cuda"
    )
    pipeline.scheduler = HeunDiscreteScheduler.from_config(pipeline.scheduler.config)

    prompt = """
    cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
    highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
    """
    negative_prompt = "low quality, blurry, ugly, poor details"
    pipeline(prompt, negative_prompt=negative_prompt).images[0]
    ```

## Next steps

Diffusers offers more advanced and powerful optimizations such as [group-offloading](./optimization/memory#group-offloading) and [regional compilation](./optimization/fp16#regional-compilation). To learn more about how to maximize performance, take a look at the Inference Optimization section.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/stable_diffusion.md" />

### Diffusers
https://huggingface.co/docs/diffusers/main/index.md

# Diffusers

Diffusers is a library of state-of-the-art pretrained diffusion models for generating videos, images, and audio.

The library revolves around the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline), an API designed for:

- easy inference with only a few lines of code
- flexibility to mix-and-match pipeline components (models, schedulers)
- loading and using adapters like LoRA

Diffusers also comes with optimizations - such as offloading and quantization - to ensure even the largest models are accessible on memory-constrained devices. If memory is not an issue, Diffusers supports torch.compile to boost inference speed.

Get started right away with a Diffusers model on the [Hub](https://huggingface.co/models?library=diffusers&sort=trending) today!

## Learn

If you're a beginner, we recommend starting with the [Hugging Face Diffusion Models Course](https://huggingface.co/learn/diffusion-course/unit0/1). You'll learn the theory behind diffusion models, and learn how to use the Diffusers library to generate images, fine-tune your own models, and more.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/index.md" />

### Outpainting
https://huggingface.co/docs/diffusers/main/advanced_inference/outpaint.md

# Outpainting

Outpainting extends an image beyond its original boundaries, allowing you to add, replace, or modify visual elements in an image while preserving the original image. Like [inpainting](../using-diffusers/inpaint), you want to fill the white area (in this case, the area outside of the original image) with new visual elements while keeping the original image (represented by a mask of black pixels). There are a couple of ways to outpaint, such as with a [ControlNet](https://hf.co/blog/OzzyGT/outpainting-controlnet) or with [Differential Diffusion](https://hf.co/blog/OzzyGT/outpainting-differential-diffusion).

This guide will show you how to outpaint with an inpainting model, ControlNet, and a ZoeDepth estimator.

Before you begin, make sure you have the [controlnet_aux](https://github.com/huggingface/controlnet_aux) library installed so you can use the ZoeDepth estimator.

```py
!pip install -q controlnet_aux
```

## Image preparation

Start by picking an image to outpaint with and remove the background with a Space like [BRIA-RMBG-1.4](https://hf.co/spaces/briaai/BRIA-RMBG-1.4).

<iframe
	src="https://briaai-bria-rmbg-1-4.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

For example, remove the background from this image of a pair of shoes.

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/original-jordan.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/no-background-jordan.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">background removed</figcaption>
  </div>
</div>

[Stable Diffusion XL (SDXL)](../using-diffusers/sdxl) models work best with 1024x1024 images, but you can resize the image to any size as long as your hardware has enough memory to support it. The transparent background in the image should also be replaced with a white background. Create a function (like the one below) that scales and pastes the image onto a white background.

```py
import random

import requests
import torch
from controlnet_aux import ZoeDetector
from PIL import Image, ImageOps

from diffusers import (
    AutoencoderKL,
    ControlNetModel,
    StableDiffusionXLControlNetPipeline,
    StableDiffusionXLInpaintPipeline,
)

def scale_and_paste(original_image):
    aspect_ratio = original_image.width / original_image.height

    if original_image.width > original_image.height:
        new_width = 1024
        new_height = round(new_width / aspect_ratio)
    else:
        new_height = 1024
        new_width = round(new_height * aspect_ratio)

    resized_original = original_image.resize((new_width, new_height), Image.LANCZOS)
    white_background = Image.new("RGBA", (1024, 1024), "white")
    x = (1024 - new_width) // 2
    y = (1024 - new_height) // 2
    white_background.paste(resized_original, (x, y), resized_original)

    return resized_original, white_background

original_image = Image.open(
    requests.get(
        "https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/no-background-jordan.png",
        stream=True,
    ).raw
).convert("RGBA")
resized_img, white_bg_image = scale_and_paste(original_image)
```

To avoid adding unwanted extra details, use the ZoeDepth estimator to provide additional guidance during generation and to ensure the shoes remain consistent with the original image.

```py
zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")
image_zoe = zoe(white_bg_image, detect_resolution=512, image_resolution=1024)
image_zoe
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/zoedepth-jordan.png"/>
</div>

## Outpaint

Once your image is ready, you can generate content in the white area around the shoes with [controlnet-inpaint-dreamer-sdxl](https://hf.co/destitech/controlnet-inpaint-dreamer-sdxl), a SDXL ControlNet trained for inpainting.

Load the inpainting ControlNet, ZoeDepth model, VAE and pass them to the [StableDiffusionXLControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetPipeline). Then you can create an optional `generate_image` function (for convenience) to outpaint an initial image.

```py
controlnets = [
    ControlNetModel.from_pretrained(
        "destitech/controlnet-inpaint-dreamer-sdxl", torch_dtype=torch.float16, variant="fp16"
    ),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-zoe-depth-sdxl-1.0", torch_dtype=torch.float16
    ),
]
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16", controlnet=controlnets, vae=vae
).to("cuda")

def generate_image(prompt, negative_prompt, inpaint_image, zoe_image, seed: int = None):
    if seed is None:
        seed = random.randint(0, 2**32 - 1)

    generator = torch.Generator(device="cpu").manual_seed(seed)

    image = pipeline(
        prompt,
        negative_prompt=negative_prompt,
        image=[inpaint_image, zoe_image],
        guidance_scale=6.5,
        num_inference_steps=25,
        generator=generator,
        controlnet_conditioning_scale=[0.5, 0.8],
        control_guidance_end=[0.9, 0.6],
    ).images[0]

    return image

prompt = "nike air jordans on a basketball court"
negative_prompt = ""

temp_image = generate_image(prompt, negative_prompt, white_bg_image, image_zoe, 908097)
```

Paste the original image over the initial outpainted image. You'll improve the outpainted background in a later step.

```py
x = (1024 - resized_img.width) // 2
y = (1024 - resized_img.height) // 2
temp_image.paste(resized_img, (x, y), resized_img)
temp_image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/initial-outpaint.png"/>
</div>

> [!TIP]
> Now is a good time to free up some memory if you're running low!
>
> ```py
> pipeline=None
> torch.cuda.empty_cache()
> ```

Now that you have an initial outpainted image, load the [StableDiffusionXLInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline) with the [RealVisXL](https://hf.co/SG161222/RealVisXL_V4.0) model to generate the final outpainted image with better quality.

```py
pipeline = StableDiffusionXLInpaintPipeline.from_pretrained(
    "OzzyGT/RealVisXL_V4.0_inpainting",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=vae,
).to("cuda")
```

Prepare a mask for the final outpainted image. To create a more natural transition between the original image and the outpainted background, blur the mask to help it blend better.

```py
mask = Image.new("L", temp_image.size)
mask.paste(resized_img.split()[3], (x, y))
mask = ImageOps.invert(mask)
final_mask = mask.point(lambda p: p > 128 and 255)
mask_blurred = pipeline.mask_processor.blur(final_mask, blur_factor=20)
mask_blurred
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/blurred-mask.png"/>
</div>

Create a better prompt and pass it to the `generate_outpaint` function to generate the final outpainted image. Again, paste the original image over the final outpainted background.

```py
def generate_outpaint(prompt, negative_prompt, image, mask, seed: int = None):
    if seed is None:
        seed = random.randint(0, 2**32 - 1)

    generator = torch.Generator(device="cpu").manual_seed(seed)

    image = pipeline(
        prompt,
        negative_prompt=negative_prompt,
        image=image,
        mask_image=mask,
        guidance_scale=10.0,
        strength=0.8,
        num_inference_steps=30,
        generator=generator,
    ).images[0]

    return image

prompt = "high quality photo of nike air jordans on a basketball court, highly detailed"
negative_prompt = ""

final_image = generate_outpaint(prompt, negative_prompt, temp_image, mask_blurred, 7688778)
x = (1024 - resized_img.width) // 2
y = (1024 - resized_img.height) // 2
final_image.paste(resized_img, (x, y), resized_img)
final_image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/final-outpaint.png"/>
</div>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/advanced_inference/outpaint.md" />

### Overview
https://huggingface.co/docs/diffusers/main/modular_diffusers/overview.md

# Overview

> [!WARNING]
> Modular Diffusers is under active development and its API may change.

Modular Diffusers is a unified pipeline system that simplifies your workflow with *pipeline blocks*.

- Blocks are reusable and you only need to create new blocks that are unique to your pipeline.
- Blocks can be mixed and matched to adapt to or create a pipeline for a specific workflow or multiple workflows.

The Modular Diffusers docs are organized as shown below.

## Quickstart

- A [quickstart](./quickstart) demonstrating how to implement an example workflow with Modular Diffusers.

## ModularPipelineBlocks

- [States](./modular_diffusers_states) explains how data is shared and communicated between blocks and [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline).
- [ModularPipelineBlocks](./pipeline_block) is the most basic unit of a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) and this guide shows you how to create one.
- [SequentialPipelineBlocks](./sequential_pipeline_blocks) is a type of block that chains multiple blocks so they run one after another, passing data along the chain. This guide shows you how to create [SequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks) and how they connect and work together.
- [LoopSequentialPipelineBlocks](./loop_sequential_pipeline_blocks) is a type of block that runs a series of blocks in a loop. This guide shows you how to create [LoopSequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks).
- [AutoPipelineBlocks](./auto_pipeline_blocks) is a type of block that automatically chooses which blocks to run based on the input. This guide shows you how to create [AutoPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.AutoPipelineBlocks).

## ModularPipeline

- [ModularPipeline](./modular_pipeline) shows you how to create and convert pipeline blocks into an executable [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline).
- [ComponentsManager](./components_manager) shows you how to manage and reuse components across multiple pipelines.
- [Guiders](./guiders) shows you how to use different guidance methods in the pipeline.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/overview.md" />

### ModularPipeline
https://huggingface.co/docs/diffusers/main/modular_diffusers/modular_pipeline.md

# ModularPipeline

[ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) converts [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) into an executable pipeline that loads models and performs the computation steps defined in the blocks. It is the main interface for running a pipeline and it is very similar to the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) API.

The main difference is that you pass an `output` argument to the pipeline to specify the expected output.

<hfoptions id="example">
<hfoption id="text-to-image">

```py
import torch
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.stable_diffusion_xl import TEXT2IMAGE_BLOCKS

blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2IMAGE_BLOCKS)

modular_repo_id = "YiYiXu/modular-loader-t2i-0704"
pipeline = blocks.init_pipeline(modular_repo_id)

pipeline.load_components(torch_dtype=torch.float16)
pipeline.to("cuda")

image = pipeline(prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", output="images")[0]
image.save("modular_t2i_out.png")
```

</hfoption>
<hfoption id="image-to-image">

```py
import torch
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.stable_diffusion_xl import IMAGE2IMAGE_BLOCKS
from diffusers.utils import load_image

blocks = SequentialPipelineBlocks.from_blocks_dict(IMAGE2IMAGE_BLOCKS)

modular_repo_id = "YiYiXu/modular-loader-t2i-0704"
pipeline = blocks.init_pipeline(modular_repo_id)

pipeline.load_components(torch_dtype=torch.float16)
pipeline.to("cuda")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
init_image = load_image(url)
prompt = "a dog catching a frisbee in the jungle"
image = pipeline(prompt=prompt, image=init_image, strength=0.8, output="images")[0]
image.save("modular_i2i_out.png")
```

</hfoption>
<hfoption id="inpainting">

```py
import torch
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.stable_diffusion_xl import INPAINT_BLOCKS
from diffusers.utils import load_image

blocks = SequentialPipelineBlocks.from_blocks_dict(INPAINT_BLOCKS)

modular_repo_id = "YiYiXu/modular-loader-t2i-0704"
pipeline = blocks.init_pipeline(modular_repo_id)

pipeline.load_components(torch_dtype=torch.float16)
pipeline.to("cuda")

img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png"

init_image = load_image(img_url)
mask_image = load_image(mask_url)

prompt = "A deep sea diver floating"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, output="images")[0]
image.save("moduar_inpaint_out.png")
```

</hfoption>
</hfoptions>

This guide will show you how to create a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) and manage the components in it.

## Adding blocks

Blocks are stored in `InsertableDict` objects, which let you insert blocks at specific positions and provide a flexible way to mix-and-match them.

Use `insert()` on either the block class or `sub_blocks` attribute to add a block.

```py
# BLOCKS is a dict of block classes, so insert a class into it
BLOCKS.insert("block_name", BlockClass, index)
# the sub_blocks attribute contains block instances, so insert an instance into it
t2i_blocks.sub_blocks.insert("block_name", block_instance, index)
```

Use `pop()` on either the block class or `sub_blocks` attribute to remove a block.

```py
# remove a block class from preset
BLOCKS.pop("text_encoder")
# split out a block instance on its own
text_encoder_block = t2i_blocks.sub_blocks.pop("text_encoder")
```

Swap blocks by setting the existing block to the new block.

```py
# Replace block class in preset
BLOCKS["prepare_latents"] = CustomPrepareLatents
# Replace in the sub_blocks attribute using a block instance
t2i_blocks.sub_blocks["prepare_latents"] = CustomPrepareLatents()
```
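
As a concrete sketch, the same operations can be applied to the SDXL text-to-image preset (assuming, as the snippets above suggest, that the preset contains a "text_encoder" entry):

```py
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.stable_diffusion_xl import TEXT2IMAGE_BLOCKS

# work on a copy so the original preset is left untouched
blocks_preset = TEXT2IMAGE_BLOCKS.copy()

# remove the text encoder step from the preset
blocks_preset.pop("text_encoder")

# build the remaining blocks into a sequence and inspect the result
t2i_blocks = SequentialPipelineBlocks.from_blocks_dict(blocks_preset)
print(t2i_blocks)
```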

## Creating a pipeline

There are two ways to create a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline). Assemble and create a pipeline from [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) or load an existing pipeline with [from_pretrained()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.from_pretrained).

You should also initialize a [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) to handle device placement, memory, and component management.

> [!TIP]
> Refer to the [ComponentsManager](./components_manager) doc for more details about how it can help manage components across different workflows.

<hfoptions id="create">
<hfoption id="ModularPipelineBlocks">

Use the [init_pipeline()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks.init_pipeline) method to create a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) from the component and configuration specifications. This method loads the *specifications* from a `modular_model_index.json` file, but it doesn't load the *models* yet.

```py
from diffusers import ComponentsManager
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.stable_diffusion_xl import TEXT2IMAGE_BLOCKS

t2i_blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2IMAGE_BLOCKS)

modular_repo_id = "YiYiXu/modular-loader-t2i-0704"
components = ComponentsManager()
t2i_pipeline = t2i_blocks.init_pipeline(modular_repo_id, components_manager=components)
```

</hfoption>
<hfoption id="from_pretrained">

The [from_pretrained()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.from_pretrained) method creates a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) from a modular repository on the Hub.

```py
from diffusers import ModularPipeline, ComponentsManager

components = ComponentsManager()
pipeline = ModularPipeline.from_pretrained("YiYiXu/modular-loader-t2i-0704", components_manager=components)
```

Add the `trust_remote_code` argument to load a custom [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline).

```py
from diffusers import ModularPipeline, ComponentsManager

components = ComponentsManager()
modular_repo_id = "YiYiXu/modular-diffdiff-0704"
diffdiff_pipeline = ModularPipeline.from_pretrained(modular_repo_id, trust_remote_code=True, components_manager=components)
```

</hfoption>
</hfoptions>

## Loading components

A [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) doesn't automatically instantiate with components. It only loads the configuration and component specifications. You can load all components with [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components) or load only specific components by passing their names to [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components).

<hfoptions id="load">
<hfoption id="load_components">

```py
import torch

t2i_pipeline.load_components(torch_dtype=torch.float16)
t2i_pipeline.to("cuda")
```

</hfoption>
<hfoption id="load_components">

The example below only loads the UNet and VAE.

```py
import torch

t2i_pipeline.load_components(names=["unet", "vae"], torch_dtype=torch.float16)
```

</hfoption>
</hfoptions>

Print the pipeline to inspect the loaded pretrained components.

```py
t2i_pipeline
```

This should match the `modular_model_index.json` file from the modular repository a pipeline is initialized from. If a pipeline doesn't need a component, it won't be included even if it exists in the modular repository.

To modify where components are loaded from, edit the `modular_model_index.json` file in the repository and change it to your desired loading path. The example below loads a UNet from a different repository.

```json
# original
"unet": [
  null, null,
  {
    "repo": "stabilityai/stable-diffusion-xl-base-1.0",
    "subfolder": "unet",
    "variant": "fp16"
  }
]

# modified
"unet": [
  null, null,
  {
    "repo": "RunDiffusion/Juggernaut-XL-v9",
    "subfolder": "unet",
    "variant": "fp16"
  }
]
```

### Component loading status

The pipeline properties below provide more information about which components are loaded.

Use `component_names` to return all expected components.

```py
t2i_pipeline.component_names
['text_encoder', 'text_encoder_2', 'tokenizer', 'tokenizer_2', 'guider', 'scheduler', 'unet', 'vae', 'image_processor']
```

Use `null_component_names` to return components that aren't loaded yet. Load these components with [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components).

```py
t2i_pipeline.null_component_names
['text_encoder', 'text_encoder_2', 'tokenizer', 'tokenizer_2', 'scheduler']
```

Use `pretrained_component_names` to return components that will be loaded from pretrained models.

```py
t2i_pipeline.pretrained_component_names
['text_encoder', 'text_encoder_2', 'tokenizer', 'tokenizer_2', 'scheduler', 'unet', 'vae']
```

Use `config_component_names` to return components that are created with a default config (not loaded from a modular repository). Config components are initialized during pipeline creation, which is why they aren't listed in `null_component_names`.

```py
t2i_pipeline.config_component_names
['guider', 'image_processor']
```

## Updating components

Components are updated differently depending on whether they are a *pretrained component* or a *config component*.

> [!WARNING]
> A component may change from pretrained to config when updating a component. The component type is initially defined in a block's `expected_components` field.

A pretrained component is updated with [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec) whereas a config component is updated by either passing the object directly or with [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec).

The [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec) shows `default_creation_method="from_pretrained"` for a pretrained component and `default_creation_method="from_config"` for a config component.
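
To check how a component will be created, you can inspect its specification; a quick sketch using the `t2i_pipeline` from the earlier examples:

```py
# config components report "from_config", pretrained components report "from_pretrained"
guider_spec = t2i_pipeline.get_component_spec("guider")
print(guider_spec.default_creation_method)

unet_spec = t2i_pipeline.get_component_spec("unet")
print(unet_spec.default_creation_method)
```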

To update a pretrained component, create a [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec) with the name of the component and where to load it from. Use the [load()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec.load) method to load the component.

```py
import torch
from diffusers import ComponentSpec, UNet2DConditionModel

unet_spec = ComponentSpec(name="unet", type_hint=UNet2DConditionModel, repo="stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", variant="fp16")
unet2 = unet_spec.load(torch_dtype=torch.float16)
```

The [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components) method replaces the component with a new one.

```py
t2i_pipeline.update_components(unet=unet2)
```

When a component is updated, the loading specifications are also updated in the pipeline config.

### Component extraction and modification

When you use [load()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec.load), the new component maintains its loading specifications. This makes it possible to extract the specification and recreate the component.

```py
spec = ComponentSpec.from_component("unet", unet2)
spec
ComponentSpec(name='unet', type_hint=<class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>, description=None, config=None, repo='stabilityai/stable-diffusion-xl-base-1.0', subfolder='unet', variant='fp16', revision=None, default_creation_method='from_pretrained')
unet2_recreated = spec.load(torch_dtype=torch.float16)
```

The [get_component_spec()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.get_component_spec) method gets a copy of the current component specification to modify or update.

```py
unet_spec = t2i_pipeline.get_component_spec("unet")
unet_spec
ComponentSpec(
    name='unet',
    type_hint=<class 'diffusers.models.unets.unet_2d_condition.UNet2DConditionModel'>,
    repo='RunDiffusion/Juggernaut-XL-v9',
    subfolder='unet',
    variant='fp16',
    default_creation_method='from_pretrained'
)

# modify to load from a different repository
unet_spec.repo = "stabilityai/stable-diffusion-xl-base-1.0"

# load component with modified spec
unet = unet_spec.load(torch_dtype=torch.float16)
```

## Modular repository

A repository is required if the pipeline blocks use *pretrained components*. The repository supplies loading specifications and metadata.

[ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) specifically requires *modular repositories* (see [example repository](https://huggingface.co/YiYiXu/modular-diffdiff)), which are more flexible than a typical repository. A modular repository contains a `modular_model_index.json` file with the following elements.

- `library` and `class` show which library the component was loaded from and its class. If `null`, the component hasn't been loaded yet.
- `loading_specs_dict` contains the information required to load the component such as the repository and subfolder it is loaded from.

Unlike standard repositories, a modular repository can fetch components from different repositories based on the `loading_specs_dict`. Components don't need to exist in the same repository.

A modular repository may contain custom code for loading a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline). This allows you to use specialized blocks that aren't native to Diffusers.

```
modular-diffdiff-0704/
├── block.py                    # Custom pipeline blocks implementation
├── config.json                 # Pipeline configuration and auto_map
└── modular_model_index.json    # Component loading specifications
```

The [config.json](https://huggingface.co/YiYiXu/modular-diffdiff-0704/blob/main/config.json) file contains an `auto_map` key that points to where a custom block is defined in `block.py`.

```json
{
  "_class_name": "DiffDiffBlocks",
  "auto_map": {
    "ModularPipelineBlocks": "block.DiffDiffBlocks"
  }
}
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/modular_pipeline.md" />

### ModularPipelineBlocks
https://huggingface.co/docs/diffusers/main/modular_diffusers/pipeline_block.md

# ModularPipelineBlocks

[ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) is the basic block for building a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline). It defines what components, inputs/outputs, and computation a block should perform for a specific step in a pipeline. A [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) connects with other blocks, using [state](./modular_diffusers_states), to enable the modular construction of workflows.

A [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) on its own can't be executed. It is a blueprint for what a step should do in a pipeline. To actually run and execute a pipeline, the [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) needs to be converted into a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline).

This guide will show you how to create a [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks).

## Inputs and outputs

> [!TIP]
> Refer to the [States](./modular_diffusers_states) guide if you aren't familiar with how state works in Modular Diffusers.

A [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) requires `inputs` and `intermediate_outputs`.

- `inputs` are values provided by a user and retrieved from the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState). This is useful because some workflows resize an image, but the original image is still required. The [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) maintains the original image.

    Use `InputParam` to define `inputs`.

    ```py
    from diffusers.modular_pipelines import InputParam

    user_inputs = [
        InputParam(name="image", type_hint="PIL.Image", description="raw input image to process")
    ]
    ```

- `intermediate_inputs` are values typically created by a previous block, but they can also be provided directly if no preceding block generates them. Unlike `inputs`, `intermediate_inputs` can be modified.

    Use `InputParam` to define `intermediate_inputs`.

    ```py
    user_intermediate_inputs = [
        InputParam(name="processed_image", type_hint="torch.Tensor", description="image that has been preprocessed and normalized"),
    ]
    ```

- `intermediate_outputs` are new values created by a block and added to the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState). The `intermediate_outputs` are available as `intermediate_inputs` for subsequent blocks or available as the final output from running the pipeline.

    Use `OutputParam` to define `intermediate_outputs`.

    ```py
    from diffusers.modular_pipelines import OutputParam

    user_intermediate_outputs = [
        OutputParam(name="image_latents", description="latents representing the image")
    ]
    ```

The intermediate inputs and outputs share data to connect blocks. They are accessible at any point, allowing you to track the workflow's progress.

## Computation logic

The computation a block performs is defined in the `__call__` method and it follows a specific structure.

1. Retrieve the [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) to get a local view of the `inputs` and `intermediate_inputs`.
2. Implement the computation logic on the `inputs` and `intermediate_inputs`.
3. Update [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) to push changes from the local [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) back to the global [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState).
4. Return the components and state which becomes available to the next block.

```py
def __call__(self, components, state):
    # Get a local view of the state variables this block needs
    block_state = self.get_block_state(state)

    # Your computation logic here
    # block_state contains all your inputs and intermediate_inputs
    # Access them like: block_state.image, block_state.processed_image

    # Update the pipeline state with your updated block_states
    self.set_block_state(state, block_state)
    return components, state
```

### Components and configs

The components and pipeline-level configs a block needs are specified in [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec) and [ConfigSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.modular_pipelines.ConfigSpec).

- [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec) contains the expected components used by a block. You need the `name` of the component and ideally a `type_hint` that specifies exactly what the component is.
- [ConfigSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.modular_pipelines.ConfigSpec) contains pipeline-level settings that control behavior across all blocks.

```py
from diffusers import ComponentSpec, ConfigSpec, EulerDiscreteScheduler, UNet2DConditionModel

expected_components = [
    ComponentSpec(name="unet", type_hint=UNet2DConditionModel),
    ComponentSpec(name="scheduler", type_hint=EulerDiscreteScheduler)
]

expected_config = [
    ConfigSpec("force_zeros_for_empty_prompt", True)
]
```

When the blocks are converted into a pipeline, the components become available to the block as the first argument in `__call__`.

```py
def __call__(self, components, state):
    # Access components using dot notation
    unet = components.unet
    vae = components.vae
    scheduler = components.scheduler
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/pipeline_block.md" />

### AutoPipelineBlocks
https://huggingface.co/docs/diffusers/main/modular_diffusers/auto_pipeline_blocks.md

# AutoPipelineBlocks

[AutoPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.AutoPipelineBlocks) are a multi-block type containing blocks that support different workflows. It automatically selects which sub-blocks to run based on the input provided at runtime. This is typically used to package multiple workflows - text-to-image, image-to-image, inpaint - into a single pipeline for convenience.

This guide shows how to create [AutoPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.AutoPipelineBlocks).

Create three [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) for text-to-image, image-to-image, and inpainting. These represent the different workflows available in the pipeline.

<hfoptions id="auto">
<hfoption id="text-to-image">

```py
import torch
from diffusers.modular_pipelines import ModularPipelineBlocks, InputParam, OutputParam

class TextToImageBlock(ModularPipelineBlocks):
    model_name = "text2img"

    @property
    def inputs(self):
        return [InputParam(name="prompt")]

    @property
    def intermediate_outputs(self):
        return []

    @property
    def description(self):
        return "I'm a text-to-image workflow!"

    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        print("running the text-to-image workflow")
        # Add your text-to-image logic here
        # For example: generate image from prompt
        self.set_block_state(state, block_state)
        return components, state
```


</hfoption>
<hfoption id="image-to-image">

```py
class ImageToImageBlock(ModularPipelineBlocks):
    model_name = "img2img"

    @property
    def inputs(self):
        return [InputParam(name="prompt"), InputParam(name="image")]

    @property
    def intermediate_outputs(self):
        return []

    @property
    def description(self):
        return "I'm an image-to-image workflow!"

    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        print("running the image-to-image workflow")
        # Add your image-to-image logic here
        # For example: transform input image based on prompt
        self.set_block_state(state, block_state)
        return components, state
```


</hfoption>
<hfoption id="inpaint">

```py
class InpaintBlock(ModularPipelineBlocks):
    model_name = "inpaint"

    @property
    def inputs(self):
        return [InputParam(name="prompt"), InputParam(name="image"), InputParam(name="mask")]

    @property
    def intermediate_outputs(self):
        return []

    @property
    def description(self):
        return "I'm an inpaint workflow!"

    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        print("running the inpaint workflow")
        # Add your inpainting logic here
        # For example: fill masked areas based on prompt
        self.set_block_state(state, block_state)
        return components, state
```

</hfoption>
</hfoptions>

Create an [AutoPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.AutoPipelineBlocks) class that includes a list of the sub-block classes and their corresponding block names.

You also need to include `block_trigger_inputs`, a list of input names that trigger the corresponding block. If a trigger input is provided at runtime, then that block is selected to run. Use `None` to specify the default block to run if no trigger inputs are detected.

Lastly, it is important to include a `description` that clearly explains which inputs trigger which workflow. This helps users understand how to run specific workflows.

```py
from diffusers.modular_pipelines import AutoPipelineBlocks

class AutoImageBlocks(AutoPipelineBlocks):
    # List of sub-block classes to choose from
    block_classes = [InpaintBlock, ImageToImageBlock, TextToImageBlock]
    # Names for each block in the same order
    block_names = ["inpaint", "img2img", "text2img"]
    # Trigger inputs that determine which block to run
    # - "mask" triggers inpaint workflow
    # - "image" triggers img2img workflow (but only if mask is not provided)
    # - if none of above, runs the text2img workflow (default)
    block_trigger_inputs = ["mask", "image", None]
    # Description is extremely important for AutoPipelineBlocks
    @property
    def description(self):
        return (
            "Pipeline generates images given different types of conditions!\n"
            + "This is an auto pipeline block that works for text2img, img2img and inpainting tasks.\n"
            + " - inpaint workflow is run when `mask` is provided.\n"
            + " - img2img workflow is run when `image` is provided (but only when `mask` is not provided).\n"
            + " - text2img workflow is run when neither `image` nor `mask` is provided.\n"
        )
```

It is **very** important to include a `description` to avoid any confusion over how to run a block and what inputs are required. While [AutoPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.AutoPipelineBlocks) are convenient, their conditional logic may be difficult to figure out if it isn't properly explained.

Create an instance of `AutoImageBlocks`.

```py
auto_blocks = AutoImageBlocks()
```

For more complex compositions, such as nested [AutoPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.AutoPipelineBlocks) used as sub-blocks in larger pipelines, use the `get_execution_blocks()` method to extract the blocks that are actually run based on your input.

```py
auto_blocks.get_execution_blocks("mask")
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/auto_pipeline_blocks.md" />

### SequentialPipelineBlocks
https://huggingface.co/docs/diffusers/main/modular_diffusers/sequential_pipeline_blocks.md

# SequentialPipelineBlocks

[SequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks) are a multi-block type that composes other [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) together in a sequence. Data flows linearly from one block to the next using `intermediate_inputs` and `intermediate_outputs`. Each block in [SequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks) usually represents a step in the pipeline, and by combining them, you gradually build a pipeline.

This guide shows you how to connect two blocks into a [SequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks).

Create two [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks). The first block, `InputBlock`, outputs a `batch_size` value, and the second block, `ImageEncoderBlock`, uses `batch_size` as an intermediate input.

<hfoptions id="sequential">
<hfoption id="InputBlock">

```py
from diffusers.modular_pipelines import ModularPipelineBlocks, InputParam, OutputParam

class InputBlock(ModularPipelineBlocks):

    @property
    def inputs(self):
        return [
            InputParam(name="prompt", type_hint=list, description="list of text prompts"),
            InputParam(name="num_images_per_prompt", type_hint=int, description="number of images per prompt"),
        ]

    @property
    def intermediate_outputs(self):
        return [
            OutputParam(name="batch_size", description="calculated batch size"),
        ]

    @property
    def description(self):
        return "A block that determines batch_size based on the number of prompts and num_images_per_prompt argument."

    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        batch_size = len(block_state.prompt)
        block_state.batch_size = batch_size * block_state.num_images_per_prompt
        self.set_block_state(state, block_state)
        return components, state
```

</hfoption>
<hfoption id="ImageEncoderBlock">

```py
import torch
from diffusers.modular_pipelines import ModularPipelineBlocks, InputParam, OutputParam

class ImageEncoderBlock(ModularPipelineBlocks):

    @property
    def inputs(self):
        return [
            InputParam(name="image", type_hint="PIL.Image", description="raw input image to process"),
            InputParam(name="batch_size", type_hint=int),
        ]

    @property
    def intermediate_outputs(self):
        return [
            OutputParam(name="image_latents", description="latents representing the image"),
        ]

    @property
    def description(self):
        return "Encode raw image into its latent presentation"

    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        # Simulate processing the image
        # This will change the state of the image from a PIL image to a tensor for all blocks
        block_state.image = torch.randn(1, 3, 512, 512)
        block_state.batch_size = block_state.batch_size * 2
        block_state.image_latents = torch.randn(1, 4, 64, 64)
        self.set_block_state(state, block_state)
        return components, state
```

</hfoption>
</hfoptions>

Connect the two blocks by defining an `InsertableDict` to map the block names to the block instances. Blocks are executed in the order they're registered in `blocks_dict`.

Use [from_blocks_dict()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks.from_blocks_dict) to create a [SequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks).

```py
from diffusers.modular_pipelines import SequentialPipelineBlocks, InsertableDict

blocks_dict = InsertableDict()
blocks_dict["input"] = input_block
blocks_dict["image_encoder"] = image_encoder_block

blocks = SequentialPipelineBlocks.from_blocks_dict(blocks_dict)
```

Inspect the sub-blocks in [SequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.SequentialPipelineBlocks) by calling `blocks`, and for more details about the inputs and outputs, access the `doc` attribute.

```py
print(blocks)
print(blocks.doc)
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/sequential_pipeline_blocks.md" />

### LoopSequentialPipelineBlocks
https://huggingface.co/docs/diffusers/main/modular_diffusers/loop_sequential_pipeline_blocks.md

# LoopSequentialPipelineBlocks

[LoopSequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks) are a multi-block type that composes other [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) together in a loop. Data flows circularly, using `intermediate_inputs` and `intermediate_outputs`, and each block is run iteratively. This is typically used to create a denoising loop which is iterative by default.

This guide shows you how to create [LoopSequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks).

## Loop wrapper

[LoopSequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks) is also known as the *loop wrapper* because it defines the loop structure, iteration variables, and configuration. Within the loop wrapper, you need the following variables.

- `loop_inputs` are user provided values and equivalent to `inputs`.
- `loop_intermediate_inputs` are intermediate variables from the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) and equivalent to `intermediate_inputs`.
- `loop_intermediate_outputs` are new intermediate variables created by the block and added to the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState). It is equivalent to `intermediate_outputs`.
- `__call__` method defines the loop structure and iteration logic.

```py
import torch
from diffusers.modular_pipelines import LoopSequentialPipelineBlocks, ModularPipelineBlocks, InputParam, OutputParam

class LoopWrapper(LoopSequentialPipelineBlocks):
    model_name = "test"
    @property
    def description(self):
        return "I'm a loop!!"
    @property
    def loop_inputs(self):
        return [InputParam(name="num_steps")]
    @torch.no_grad()
    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        # Loop structure - can be customized to your needs
        for i in range(block_state.num_steps):
            # loop_step executes all registered blocks in sequence
            components, block_state = self.loop_step(components, block_state, i=i)
        self.set_block_state(state, block_state)
        return components, state
```

The loop wrapper can pass additional arguments, like current iteration index, to the loop blocks.

## Loop blocks

A loop block is a [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks), but the `__call__` method behaves differently.

- It receives the iteration variable from the loop wrapper.
- It works directly with the [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) instead of the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState).
- It doesn't require retrieving or updating the [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState).

Loop blocks share the same [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) to allow values to accumulate and change for each iteration in the loop.

```py
class LoopBlock(ModularPipelineBlocks):
    model_name = "test"
    @property
    def inputs(self):
        return [InputParam(name="x")]
    @property
    def intermediate_outputs(self):
        # outputs produced by this block
        return [OutputParam(name="x")]
    @property
    def description(self):
        return "I'm a block used inside the `LoopWrapper` class"
    def __call__(self, components, block_state, i: int):
        block_state.x += 1
        return components, block_state
```

## LoopSequentialPipelineBlocks

Use the [from_blocks_dict()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks.from_blocks_dict) method to add the loop block to the loop wrapper to create [LoopSequentialPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks).

```py
loop = LoopWrapper.from_blocks_dict({"block1": LoopBlock})
```

Add more loop blocks to run within each iteration with [from_blocks_dict()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.modular_pipelines.LoopSequentialPipelineBlocks.from_blocks_dict). This allows you to modify the blocks without changing the loop logic itself.

```py
loop = LoopWrapper.from_blocks_dict({"block1": LoopBlock(), "block2": LoopBlock})
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/loop_sequential_pipeline_blocks.md" />

### ComponentsManager
https://huggingface.co/docs/diffusers/main/modular_diffusers/components_manager.md

# ComponentsManager

The [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) is a model registry and management system for Modular Diffusers. It adds and tracks models, stores useful metadata (model size, device placement, adapters), prevents duplicate model instances, and supports offloading.

This guide will show you how to use [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) to manage components and device memory.

## Add a component

The [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) should be created alongside a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) in either [from_pretrained()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.from_pretrained) or [init_pipeline()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks.init_pipeline).

> [!TIP]
> The `collection` parameter is optional but makes it easier to organize and manage components.

<hfoptions id="create">
<hfoption id="from_pretrained">

```py
from diffusers import ModularPipeline, ComponentsManager

comp = ComponentsManager()
pipe = ModularPipeline.from_pretrained("YiYiXu/modular-demo-auto", components_manager=comp, collection="test1")
```

</hfoption>
<hfoption id="init_pipeline">

```py
from diffusers import ComponentsManager
from diffusers.modular_pipelines import SequentialPipelineBlocks
from diffusers.modular_pipelines.stable_diffusion_xl import TEXT2IMAGE_BLOCKS

t2i_blocks = SequentialPipelineBlocks.from_blocks_dict(TEXT2IMAGE_BLOCKS)

modular_repo_id = "YiYiXu/modular-loader-t2i-0704"
components = ComponentsManager()
t2i_pipeline = t2i_blocks.init_pipeline(modular_repo_id, components_manager=components)
```

</hfoption>
</hfoptions>

Components are only loaded and registered when you call [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components). The example below loads the first pipeline's components with [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components) and then creates a second pipeline, assigned to a different collection, that can reuse them.

```py
pipe.load_components()
pipe2 = ModularPipeline.from_pretrained("YiYiXu/modular-demo-auto", components_manager=comp, collection="test2")
```

Use the `null_component_names` property to identify any components that need to be loaded, retrieve them with [get_components_by_names()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.get_components_by_names), and then call [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components) to add the missing components.

```py
pipe2.null_component_names 
['text_encoder', 'text_encoder_2', 'tokenizer', 'tokenizer_2', 'image_encoder', 'unet', 'vae', 'scheduler', 'controlnet']

comp_dict = comp.get_components_by_names(names=pipe2.null_component_names)
pipe2.update_components(**comp_dict)
```

To add individual components, use the [add()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.add) method. This registers a component with a unique id.

```py
from diffusers import AutoModel

text_encoder = AutoModel.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder")
component_id = comp.add("text_encoder", text_encoder)
comp
```

Use [remove()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.remove) to remove a component using its id.

```py
comp.remove("text_encoder_139917733042864")
```

## Retrieve a component

The [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) provides several methods to retrieve registered components.

### get_one

The [get_one()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.get_one) method returns a single component and supports pattern matching for the `name` parameter. If multiple components match, [get_one()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.get_one) raises an error.

| Pattern     | Example                          | Description                               |
|-------------|----------------------------------|-------------------------------------------|
| exact       | `comp.get_one(name="unet")`      | exact name match                          |
| wildcard    | `comp.get_one(name="unet*")`     | names starting with "unet"                |
| exclusion   | `comp.get_one(name="!unet")`     | exclude components named "unet"           |
| or          | `comp.get_one(name="unet&#124;vae")`  | name is "unet" or "vae"                   |
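
For example, with only a `unet` and a `vae` registered, the patterns behave as follows (a sketch; the component names are illustrative):

```py
comp.get_one(name="unet")       # exact match
comp.get_one(name="un*")        # wildcard: matches "unet"
comp.get_one(name="!unet")      # exclusion: returns the "vae" component
# comp.get_one(name="unet|vae") # would raise an error because both components match
```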

[get_one()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.get_one) also filters components by the `collection` argument or `load_id` argument.

```py
comp.get_one(name="unet", collection="sdxl")
```
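
The `load_id` combines a component's repo, subfolder, variant, and revision (with `null` for unset fields), separated by `|`, when the component was loaded through `ComponentSpec`. A hedged sketch of filtering by `load_id`, assuming an SDXL text encoder was loaded from the `stabilityai/stable-diffusion-xl-base-1.0` repo:

```py
comp.get_one(load_id="stabilityai/stable-diffusion-xl-base-1.0|text_encoder|null|null")
```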

### get_components_by_names

The [get_components_by_names()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.get_components_by_names) method accepts a list of names and returns a dictionary mapping names to components. This is especially useful with [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) since they provide lists of required component names and the returned dictionary can be passed directly to [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components).

```py
component_dict = comp.get_components_by_names(names=["text_encoder", "unet", "vae"])
{"text_encoder": component1, "unet": component2, "vae": component3}
```

## Duplicate detection

It is recommended to load model components with [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec) so that each component is assigned a unique id that encodes its loading parameters. This allows [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) to automatically detect and prevent duplicate model instances even when different objects represent the same underlying checkpoint.

```py
from diffusers import AutoModel, ComponentSpec, ComponentsManager
from transformers import CLIPTextModel

comp = ComponentsManager()

# Create a ComponentSpec for the first text encoder
spec = ComponentSpec(name="text_encoder", repo="stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder", type_hint=AutoModel)
# Create a ComponentSpec for a duplicate text encoder (same checkpoint, same repo/subfolder)
spec_duplicated = ComponentSpec(name="text_encoder_duplicated", repo="stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder", type_hint=CLIPTextModel)

# Load and add both components - the manager will detect they're the same model
comp.add("text_encoder", spec.load())
comp.add("text_encoder_duplicated", spec_duplicated.load())
```

This returns a warning with instructions for removing the duplicate.

```py
ComponentsManager: adding component 'text_encoder_duplicated_139917580682672', but it has duplicate load_id 'stabilityai/stable-diffusion-xl-base-1.0|text_encoder|null|null' with existing components: text_encoder_139918506246832. To remove a duplicate, call `components_manager.remove('<component_id>')`.
'text_encoder_duplicated_139917580682672'
```

You can also add a component without using [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec); duplicate detection still works in most cases, even if the same component is added under a different name.
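
For example, re-adding the `text_encoder` object from the earlier `add()` example under a second name should still be flagged (a sketch; the new name is illustrative):

```py
# the same model object that was added earlier as "text_encoder"
comp.add("text_encoder_backup", text_encoder)
# the manager warns that this duplicates the existing text_encoder component
```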

However, `ComponentsManager` can't detect duplicates when you load the same checkpoint into separate model objects. In this case, you should load the model with [ComponentSpec](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec).

```py
text_encoder_2 = AutoModel.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder")
comp.add("text_encoder", text_encoder_2)
'text_encoder_139917732983664'
```

## Collections

Collections are labels assigned to components for better organization and management. Add a component to a collection with the `collection` argument in [add()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.add).

Only one component per name is allowed in each collection. Adding a second component with the same name automatically removes the first component.

```py
from diffusers import AutoModel, ComponentSpec, ComponentsManager

comp = ComponentsManager()
# Create ComponentSpec for the first UNet
spec = ComponentSpec(name="unet", repo="stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", type_hint=AutoModel)
# Create ComponentSpec for a different UNet
spec2 = ComponentSpec(name="unet", repo="RunDiffusion/Juggernaut-XL-v9", subfolder="unet", type_hint=AutoModel, variant="fp16")

# Add both UNets to the same collection - the second one will replace the first
comp.add("unet", spec.load(), collection="sdxl")
comp.add("unet", spec2.load(), collection="sdxl")
```

This makes it convenient to work with node-based systems because you can:

- Mark all models as loaded from one node with the `collection` label.
- Automatically replace models when new checkpoints are loaded under the same name.
- Batch delete all models in a collection when a node is removed.

## Offloading

The [enable_auto_cpu_offload()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager.enable_auto_cpu_offload) method is a global offloading strategy that works across all models regardless of which pipeline is using them. Once enabled, you don't need to worry about device placement if you add or remove components.

```py
comp.enable_auto_cpu_offload(device="cuda")
```

All models begin on the CPU. [ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) moves them to the appropriate device right before they're needed and moves other models back to the CPU when GPU memory runs low.

You can set your own rules for which models to offload first.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/components_manager.md" />

### Quickstart
https://huggingface.co/docs/diffusers/main/modular_diffusers/quickstart.md

# Quickstart

Modular Diffusers is a framework for quickly building flexible and customizable pipelines. At the core of Modular Diffusers are [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) that can be combined with other blocks to adapt to new workflows. The blocks are converted into a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline), the user-friendly interface for running them.

This doc will show you how to implement a [Differential Diffusion](https://differential-diffusion.github.io/) pipeline with the modular framework.

## ModularPipelineBlocks

[ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) are *definitions* that specify the components, inputs, outputs, and computation logic for a single step in a pipeline. There are four types of blocks.

- [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) is the most basic block for a single step.
- `SequentialPipelineBlocks` is a multi-block that composes other blocks linearly. The outputs of one block are the inputs to the next block.
- `LoopSequentialPipelineBlocks` is a multi-block that runs iteratively and is designed for iterative workflows.
- `AutoPipelineBlocks` is a collection of blocks for different workflows and it selects which block to run based on the input. It is designed to conveniently package multiple workflows into a single pipeline.

[Differential Diffusion](https://differential-diffusion.github.io/) is an image-to-image workflow. Start with the `IMAGE2IMAGE_BLOCKS` preset, a collection of `ModularPipelineBlocks` for image-to-image generation.

```py
from diffusers.modular_pipelines.stable_diffusion_xl import IMAGE2IMAGE_BLOCKS

# IMAGE2IMAGE_BLOCKS is an InsertableDict mapping step names to block classes:
IMAGE2IMAGE_BLOCKS = InsertableDict([
    ("text_encoder", StableDiffusionXLTextEncoderStep),
    ("image_encoder", StableDiffusionXLVaeEncoderStep),
    ("input", StableDiffusionXLInputStep),
    ("set_timesteps", StableDiffusionXLImg2ImgSetTimestepsStep),
    ("prepare_latents", StableDiffusionXLImg2ImgPrepareLatentsStep),
    ("prepare_add_cond", StableDiffusionXLImg2ImgPrepareAdditionalConditioningStep),
    ("denoise", StableDiffusionXLDenoiseStep),
    ("decode", StableDiffusionXLDecodeStep)
])
```

## Pipeline and block states

Modular Diffusers uses *state* to communicate data between blocks. There are two types of states.

- `PipelineState` is a global state that can be used to track all inputs and outputs across all blocks.
- `BlockState` is a local view of relevant variables from `PipelineState` for an individual block.
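
As a rough sketch, a block's `__call__` reads its local `BlockState` from the global `PipelineState`, computes, and writes the results back (assuming `ModularPipelineBlocks` is importable from `diffusers.modular_pipelines`):

```py
from diffusers.modular_pipelines import ModularPipelineBlocks

class MyBlock(ModularPipelineBlocks):
    def __call__(self, components, state):
        block_state = self.get_block_state(state)  # local view of PipelineState
        # ... computation on block_state attributes, e.g. block_state.image ...
        self.set_block_state(state, block_state)   # write the updates back to PipelineState
        return components, state
```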

## Customizing blocks

[Differential Diffusion](https://differential-diffusion.github.io/) differs from standard image-to-image in its `prepare_latents` and `denoise` blocks. All the other blocks can be reused, but you'll need to modify these two.

Create placeholder `ModularPipelineBlocks` for `prepare_latents` and `denoise` by copying and modifying the existing ones.

Print the `denoise` block to see that it is composed of `LoopSequentialPipelineBlocks` with three sub-blocks, `before_denoiser`, `denoiser`, and `after_denoiser`. Only the `before_denoiser` sub-block needs to be modified to prepare the latent input for the denoiser based on the change map.

```py
denoise_blocks = IMAGE2IMAGE_BLOCKS["denoise"]()
print(denoise_blocks)
```

Replace the `StableDiffusionXLLoopBeforeDenoiser` sub-block with the new `SDXLDiffDiffLoopBeforeDenoiser` block.

```py
# Copy existing blocks as placeholders
class SDXLDiffDiffPrepareLatentsStep(ModularPipelineBlocks):
    """Copied from StableDiffusionXLImg2ImgPrepareLatentsStep - will modify later"""
    # ... same implementation as StableDiffusionXLImg2ImgPrepareLatentsStep

class SDXLDiffDiffDenoiseStep(StableDiffusionXLDenoiseLoopWrapper):
    block_classes = [SDXLDiffDiffLoopBeforeDenoiser, StableDiffusionXLLoopDenoiser, StableDiffusionXLLoopAfterDenoiser]
    block_names = ["before_denoiser", "denoiser", "after_denoiser"]
```

### prepare_latents

The `prepare_latents` block requires the following changes.

- a processor for preprocessing the change map
- new `inputs` to accept the user-provided change map (`diffdiff_map`), `timesteps` for precomputing all the latents, and `num_inference_steps` to create the masks that control which image regions are updated
- updated computation in the `__call__` method to process the change map, create the masks, and store them in the `BlockState`

```diff
class SDXLDiffDiffPrepareLatentsStep(ModularPipelineBlocks):
    @property
    def expected_components(self) -> List[ComponentSpec]:
        return [
            ComponentSpec("vae", AutoencoderKL),
            ComponentSpec("scheduler", EulerDiscreteScheduler),
+           ComponentSpec("mask_processor", VaeImageProcessor, config=FrozenDict({"do_normalize": False, "do_convert_grayscale": True}))
        ]
    @property
    def inputs(self) -> List[Tuple[str, Any]]:
        return [
            InputParam("generator"),
+           InputParam("diffdiff_map", required=True),
-           InputParam("latent_timestep", required=True, type_hint=torch.Tensor),
+           InputParam("timesteps", type_hint=torch.Tensor),
+           InputParam("num_inference_steps", type_hint=int),
        ]

    @property
    def intermediate_outputs(self) -> List[OutputParam]:
        return [
+           OutputParam("original_latents", type_hint=torch.Tensor),
+           OutputParam("diffdiff_masks", type_hint=torch.Tensor),
        ]
    def __call__(self, components, state: PipelineState):
        # ... existing logic ...
+       # Process change map and create masks
+       diffdiff_map = components.mask_processor.preprocess(block_state.diffdiff_map, height=latent_height, width=latent_width)
+       thresholds = torch.arange(block_state.num_inference_steps, dtype=diffdiff_map.dtype) / block_state.num_inference_steps
+       block_state.diffdiff_masks = diffdiff_map > (thresholds + (block_state.denoising_start or 0))
+       block_state.original_latents = block_state.latents
```

### denoise

The `before_denoiser` sub-block requires the following changes.

- new `inputs` to accept a `denoising_start` parameter, plus `original_latents` and `diffdiff_masks` from the `prepare_latents` block
- updated computation in the `__call__` method to apply Differential Diffusion

```diff
class SDXLDiffDiffLoopBeforeDenoiser(ModularPipelineBlocks):
    @property
    def description(self) -> str:
        return (
            "Step within the denoising loop for differential diffusion that prepare the latent input for the denoiser"
        )

    @property
    def inputs(self) -> List[str]:
        return [
            InputParam("latents", required=True, type_hint=torch.Tensor),
+           InputParam("denoising_start"),
+           InputParam("original_latents", type_hint=torch.Tensor),
+           InputParam("diffdiff_masks", type_hint=torch.Tensor),
        ]

    def __call__(self, components, block_state, i, t):
+       # Apply differential diffusion logic
+       if i == 0 and block_state.denoising_start is None:
+           block_state.latents = block_state.original_latents[:1]
+       else:
+           block_state.mask = block_state.diffdiff_masks[i].unsqueeze(0).unsqueeze(1)
+           block_state.latents = block_state.original_latents[i] * block_state.mask + block_state.latents * (1 - block_state.mask)

        # ... rest of existing logic ...
```

## Assembling the blocks

You should have all the blocks you need at this point to create a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline).

Copy the existing `IMAGE2IMAGE_BLOCKS` preset. For the `set_timesteps` block, use `set_timesteps` from `TEXT2IMAGE_BLOCKS` instead because Differential Diffusion doesn't require a `strength` parameter.

Set the `prepare_latents` and `denoise` blocks to the `SDXLDiffDiffPrepareLatentsStep` and `SDXLDiffDiffDenoiseStep` blocks you just modified.

Call `SequentialPipelineBlocks.from_blocks_dict` on the blocks to create a `SequentialPipelineBlocks`.

```py
from diffusers.modular_pipelines.stable_diffusion_xl import TEXT2IMAGE_BLOCKS

DIFFDIFF_BLOCKS = IMAGE2IMAGE_BLOCKS.copy()
DIFFDIFF_BLOCKS["set_timesteps"] = TEXT2IMAGE_BLOCKS["set_timesteps"]
DIFFDIFF_BLOCKS["prepare_latents"] = SDXLDiffDiffPrepareLatentsStep
DIFFDIFF_BLOCKS["denoise"] = SDXLDiffDiffDenoiseStep

dd_blocks = SequentialPipelineBlocks.from_blocks_dict(DIFFDIFF_BLOCKS)
print(dd_blocks)
```

## ModularPipeline

Convert the `SequentialPipelineBlocks` into a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) with the [init_pipeline()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks.init_pipeline) method. This initializes the expected components to load from a `modular_model_index.json` file. Explicitly load the components by calling [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components).

It is a good idea to initialize the pipeline with a `ComponentsManager` to help manage the different components. Once you call [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components), the components are registered to the `ComponentsManager` and can be shared between workflows. The example below uses the `collection` argument to assign the components a `"diffdiff"` label for better organization.

```py
import torch
from diffusers.modular_pipelines import ComponentsManager

components = ComponentsManager()

dd_pipeline = dd_blocks.init_pipeline("YiYiXu/modular-demo-auto", components_manager=components, collection="diffdiff")
dd_pipeline.load_components(torch_dtype=torch.float16)
dd_pipeline.to("cuda")
```

## Adding workflows

Other workflows can be added to the [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) to support additional features without rewriting the entire pipeline from scratch.

This section demonstrates how to add an IP-Adapter or ControlNet.

### IP-Adapter

Stable Diffusion XL already has a preset IP-Adapter block that you can use and doesn't require any changes to the existing Differential Diffusion pipeline.

```py
from diffusers.modular_pipelines.stable_diffusion_xl.encoders import StableDiffusionXLAutoIPAdapterStep

ip_adapter_block = StableDiffusionXLAutoIPAdapterStep()
```

Use the `sub_blocks.insert` method to insert it into the existing blocks. The example below inserts the `ip_adapter_block` at position `0`. Print the blocks to see that the `ip_adapter_block` is added and that it requires an `ip_adapter_image` input. It also adds two components to the pipeline, the `image_encoder` and `feature_extractor`.

```py
dd_blocks.sub_blocks.insert("ip_adapter", ip_adapter_block, 0)
```

Call [init_pipeline()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks.init_pipeline) to initialize a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) and use [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components) to load the model components. Load and set the IP-Adapter to run the pipeline.

```py
import torch
from diffusers.utils import load_image

device = "cuda"

dd_pipeline = dd_blocks.init_pipeline("YiYiXu/modular-demo-auto", collection="diffdiff")
dd_pipeline.load_components(torch_dtype=torch.float16)
dd_pipeline.loader.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
dd_pipeline.loader.set_ip_adapter_scale(0.6)
dd_pipeline = dd_pipeline.to(device)

ip_adapter_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/diffdiff_orange.jpeg")
image = load_image("https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/20240329211129_4024911930.png?download=true")
mask = load_image("https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/gradient_mask.png?download=true")

prompt = "a green pear"
negative_prompt = "blurry"
generator = torch.Generator(device=device).manual_seed(42)

image = dd_pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    generator=generator,
    ip_adapter_image=ip_adapter_image,
    diffdiff_map=mask,
    image=image,
    output="images"
)[0]
```

### ControlNet

Stable Diffusion XL already has a preset ControlNet block that can readily be used.

```py
from diffusers.modular_pipelines.stable_diffusion_xl.modular_blocks import StableDiffusionXLAutoControlNetInputStep

control_input_block = StableDiffusionXLAutoControlNetInputStep()
```

However, it requires modifying the `denoise` block because that's where the ControlNet injects the control information into the UNet.

Modify the `denoise` block by replacing the `StableDiffusionXLLoopDenoiser` sub-block with the `StableDiffusionXLControlNetLoopDenoiser`.

```py
class SDXLDiffDiffControlNetDenoiseStep(StableDiffusionXLDenoiseLoopWrapper):
    block_classes = [SDXLDiffDiffLoopBeforeDenoiser, StableDiffusionXLControlNetLoopDenoiser, StableDiffusionXLLoopAfterDenoiser]
    block_names = ["before_denoiser", "denoiser", "after_denoiser"]

controlnet_denoise_block = SDXLDiffDiffControlNetDenoiseStep()
```

Insert the `controlnet_input` block and replace the `denoise` block with the new `controlnet_denoise_block`. Initialize a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) and [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components) into it.

```py
dd_blocks.sub_blocks.insert("controlnet_input", control_input_block, 7)
dd_blocks.sub_blocks["denoise"] = controlnet_denoise_block

dd_pipeline = dd_blocks.init_pipeline("YiYiXu/modular-demo-auto", collection="diffdiff")
dd_pipeline.load_components(torch_dtype=torch.float16)
dd_pipeline = dd_pipeline.to(device)

control_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/diffdiff_tomato_canny.jpeg")
image = load_image("https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/20240329211129_4024911930.png?download=true")
mask = load_image("https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/gradient_mask.png?download=true")

prompt = "a green pear"
negative_prompt = "blurry"
generator = torch.Generator(device=device).manual_seed(42)

image = dd_pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    generator=generator,
    control_image=control_image,
    controlnet_conditioning_scale=0.5,
    diffdiff_map=mask,
    image=image,
    output="images"
)[0]
```

### AutoPipelineBlocks

The Differential Diffusion, IP-Adapter, and ControlNet workflows can be bundled into a single [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) with `AutoPipelineBlocks`. It automatically selects which sub-blocks to run based on the inputs, like `control_image` or `ip_adapter_image`. If none of these inputs are passed, it defaults to the Differential Diffusion workflow.

Use `block_trigger_inputs` to only run the `SDXLDiffDiffControlNetDenoiseStep` block if a `control_image` input is provided. Otherwise, the `SDXLDiffDiffDenoiseStep` is used.

```py
class SDXLDiffDiffAutoDenoiseStep(AutoPipelineBlocks):
    block_classes = [SDXLDiffDiffControlNetDenoiseStep, SDXLDiffDiffDenoiseStep]
    block_names = ["controlnet_denoise", "denoise"]
    block_trigger_inputs = ["controlnet_cond", None]
```

Add the `ip_adapter` and `controlnet_input` blocks.

```py
DIFFDIFF_AUTO_BLOCKS = IMAGE2IMAGE_BLOCKS.copy()
DIFFDIFF_AUTO_BLOCKS["prepare_latents"] = SDXLDiffDiffPrepareLatentsStep
DIFFDIFF_AUTO_BLOCKS["set_timesteps"] = TEXT2IMAGE_BLOCKS["set_timesteps"]
DIFFDIFF_AUTO_BLOCKS["denoise"] = SDXLDiffDiffAutoDenoiseStep
DIFFDIFF_AUTO_BLOCKS.insert("ip_adapter", StableDiffusionXLAutoIPAdapterStep, 0)
DIFFDIFF_AUTO_BLOCKS.insert("controlnet_input",StableDiffusionXLControlNetAutoInput, 7)
```

Call `SequentialPipelineBlocks.from_blocks_dict` to create a `SequentialPipelineBlocks`, then convert it into a [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) and load the model components to run it.

```py
dd_auto_blocks = SequentialPipelineBlocks.from_blocks_dict(DIFFDIFF_AUTO_BLOCKS)
dd_pipeline = dd_auto_blocks.init_pipeline("YiYiXu/modular-demo-auto", collection="diffdiff")
dd_pipeline.load_components(torch_dtype=torch.float16)
```

## Share

Add your [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) to the Hub with [save_pretrained()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.save_pretrained) and set the `push_to_hub` argument to `True`.

```py
dd_pipeline.save_pretrained("YiYiXu/test_modular_doc", push_to_hub=True)
```

Other users can load the [ModularPipeline](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline) with [from_pretrained()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.from_pretrained).

```py
import torch
from diffusers.modular_pipelines import ModularPipeline, ComponentsManager

components = ComponentsManager()

diffdiff_pipeline = ModularPipeline.from_pretrained("YiYiXu/modular-diffdiff-0704", trust_remote_code=True, components_manager=components, collection="diffdiff")
diffdiff_pipeline.load_components(torch_dtype=torch.float16)
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/quickstart.md" />

### Guiders
https://huggingface.co/docs/diffusers/main/modular_diffusers/guiders.md

# Guiders

[Classifier-free guidance](https://huggingface.co/papers/2207.12598) steers generation toward outputs that better match a prompt and is commonly used to improve generation quality, control, and prompt adherence. There are different types of guidance methods, and in Diffusers, they are known as *guiders*. Like blocks, it is easy to switch between guiders for different use cases without rewriting the pipeline.

This guide will show you how to switch guiders, adjust guider parameters, and load and share them on the Hub.

## Switching guiders

[ClassifierFreeGuidance](/docs/diffusers/main/en/api/modular_diffusers/guiders#diffusers.ClassifierFreeGuidance) is the default guider and is created when a pipeline is initialized with [init_pipeline()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks.init_pipeline). It is created with `from_config`, which means it doesn't require a loading specification from a modular repository, and it won't be listed in `modular_model_index.json`.

Use [get_component_spec()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.get_component_spec) to inspect a guider.

```py
t2i_pipeline.get_component_spec("guider")
ComponentSpec(name='guider', type_hint=<class 'diffusers.guiders.classifier_free_guidance.ClassifierFreeGuidance'>, description=None, config=FrozenDict([('guidance_scale', 7.5), ('guidance_rescale', 0.0), ('use_original_formulation', False), ('start', 0.0), ('stop', 1.0), ('_use_default_values', ['start', 'guidance_rescale', 'stop', 'use_original_formulation'])]), repo=None, subfolder=None, variant=None, revision=None, default_creation_method='from_config')
```

Switch to a different guider by passing the new guider to [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components).

> [!TIP]
> Changing guiders displays a message letting you know you're changing the guider type.
> ```bash
> ModularPipeline.update_components: adding guider with new type: PerturbedAttentionGuidance, previous type: ClassifierFreeGuidance
> ```

```py
from diffusers import LayerSkipConfig, PerturbedAttentionGuidance

config = LayerSkipConfig(indices=[2, 9], fqn="mid_block.attentions.0.transformer_blocks", skip_attention=False, skip_attention_scores=True, skip_ff=False)
guider = PerturbedAttentionGuidance(
    guidance_scale=5.0, perturbed_guidance_scale=2.5, perturbed_guidance_config=config
)
t2i_pipeline.update_components(guider=guider)
```

Use [get_component_spec()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.get_component_spec) again to verify the guider type is different.

```py
t2i_pipeline.get_component_spec("guider")
ComponentSpec(name='guider', type_hint=<class 'diffusers.guiders.perturbed_attention_guidance.PerturbedAttentionGuidance'>, description=None, config=FrozenDict([('guidance_scale', 5.0), ('perturbed_guidance_scale', 2.5), ('perturbed_guidance_start', 0.01), ('perturbed_guidance_stop', 0.2), ('perturbed_guidance_layers', None), ('perturbed_guidance_config', LayerSkipConfig(indices=[2, 9], fqn='mid_block.attentions.0.transformer_blocks', skip_attention=False, skip_attention_scores=True, skip_ff=False, dropout=1.0)), ('guidance_rescale', 0.0), ('use_original_formulation', False), ('start', 0.0), ('stop', 1.0), ('_use_default_values', ['perturbed_guidance_start', 'use_original_formulation', 'perturbed_guidance_layers', 'stop', 'start', 'guidance_rescale', 'perturbed_guidance_stop']), ('_class_name', 'PerturbedAttentionGuidance'), ('_diffusers_version', '0.35.0.dev0')]), repo=None, subfolder=None, variant=None, revision=None, default_creation_method='from_config')
```

## Loading custom guiders

Guiders that are saved on the Hub and referenced in a `modular_model_index.json` file are treated as `from_pretrained` components instead of `from_config` components.

```json
{
  "guider": [
    null,
    null,
    {
      "repo": "YiYiXu/modular-loader-t2i-guider",
      "revision": null,
      "subfolder": "pag_guider",
      "type_hint": [
        "diffusers",
        "PerturbedAttentionGuidance"
      ],
      "variant": null
    }
  ]
}
```

The guider is only created after calling [load_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.load_components) based on the loading specification in `modular_model_index.json`.

```py
t2i_pipeline = t2i_blocks.init_pipeline("YiYiXu/modular-doc-guider")
# not created during init
assert t2i_pipeline.guider is None
t2i_pipeline.load_components()
# loaded as PAG guider
t2i_pipeline.guider
```


## Changing guider parameters

The guider parameters can be adjusted with either the [create()](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentSpec.create) method or with [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components). The example below changes the `guidance_scale` value.

<hfoptions id="switch">
<hfoption id="create">

```py
guider_spec = t2i_pipeline.get_component_spec("guider")
guider = guider_spec.create(guidance_scale=10)
t2i_pipeline.update_components(guider=guider)
```

</hfoption>
<hfoption id="update_components">

```py
guider_spec = t2i_pipeline.get_component_spec("guider")
guider_spec.config["guidance_scale"] = 10
t2i_pipeline.update_components(guider=guider_spec)
```

</hfoption>
</hfoptions>

## Uploading custom guiders

Call the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method on a custom guider to share it to the Hub.

```py
guider.push_to_hub("YiYiXu/modular-loader-t2i-guider", subfolder="pag_guider")
```

To make this guider available to the pipeline, either modify the `modular_model_index.json` file or use the [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components) method.

<hfoptions id="upload">
<hfoption id="modular_model_index.json">

Edit the `modular_model_index.json` file and add a loading specification for the guider by pointing to a folder containing the guider config.

```json
{
  "guider": [
    "diffusers",
    "PerturbedAttentionGuidance",
    {
      "repo": "YiYiXu/modular-loader-t2i-guider",
      "revision": null,
      "subfolder": "pag_guider",
      "type_hint": [
        "diffusers",
        "PerturbedAttentionGuidance"
      ],
      "variant": null
    }
  ]
}
```

</hfoption>
<hfoption id="update_components">

Change the `default_creation_method` to `from_pretrained` and use [update_components()](/docs/diffusers/main/en/api/modular_diffusers/pipeline#diffusers.ModularPipeline.update_components) to update the guider and component specifications as well as the pipeline config.

> [!TIP]
> Changing the creation method displays a message letting you know you're changing the creation type to `from_pretrained`.
> ```bash
> ModularPipeline.update_components: changing the default_creation_method of guider from from_config to from_pretrained.
> ```

```py
guider_spec = t2i_pipeline.get_component_spec("guider")
guider_spec.default_creation_method = "from_pretrained"
guider_spec.repo = "YiYiXu/modular-loader-t2i-guider"
guider_spec.subfolder = "pag_guider"
pag_guider = guider_spec.load()
t2i_pipeline.update_components(guider=pag_guider)
```

To make it the default guider for a pipeline, call [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub). This is an optional step and not necessary if you are only experimenting locally.

```py
t2i_pipeline.push_to_hub("YiYiXu/modular-doc-guider")
```

</hfoption>
</hfoptions>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/guiders.md" />

### States
https://huggingface.co/docs/diffusers/main/modular_diffusers/modular_diffusers_states.md

# States

Blocks rely on the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) and [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) data structures for communicating and sharing data.

| State | Description |
|-------|-------------|
| [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) | Maintains the overall data required for a pipeline's execution and allows blocks to read and update its data. |
| [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) | Allows each block to perform its computation with the necessary data from `inputs`. |

This guide explains how states work and how they connect blocks.

## PipelineState

The [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) is a global state container for all blocks. It maintains the complete runtime state of the pipeline and provides a structured way for blocks to read from and write to shared data.

There are two dicts in [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) for structuring data.

- The `values` dict is a **mutable** state containing a copy of user-provided input values and intermediate output values generated by blocks. If a block modifies an `input`, it will be reflected in the `values` dict after calling `set_block_state`.

```py
PipelineState(
  values={
    'prompt': 'a cat'
    'guidance_scale': 7.0
    'num_inference_steps': 25
    'prompt_embeds': Tensor(dtype=torch.float32, shape=torch.Size([1, 1, 1, 1]))
    'negative_prompt_embeds': None
  },
)
```

## BlockState

The [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) is a local view of the relevant variables an individual block needs from [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) to perform its computations.

Access these variables directly as attributes like `block_state.image`.

```py
BlockState(
    image: <PIL.Image.Image image mode=RGB size=512x512 at 0x7F3ECC494640>
)
```

When a block's `__call__` method is executed, it retrieves the `BlockState` with `self.get_block_state(state)`, performs its operations, and updates [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) with `self.set_block_state(state, block_state)`.

```py
def __call__(self, components, state):
    # retrieve BlockState
    block_state = self.get_block_state(state)

    # computation logic on inputs

    # update PipelineState
    self.set_block_state(state, block_state)
    return components, state
```

## State interaction

[PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) and [BlockState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.BlockState) interaction is defined by a block's `inputs` and `intermediate_outputs`.

- `inputs`: a block can modify an input, like `block_state.image`, and propagate the change globally to [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState) by calling `set_block_state`.
- `intermediate_outputs`: a new variable that a block creates. It is added to the [PipelineState](/docs/diffusers/main/en/api/modular_diffusers/pipeline_states#diffusers.modular_pipelines.PipelineState)'s `values` dict and is available to subsequent blocks or can be accessed by users as a final output from the pipeline.
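
The sketch below illustrates both mechanisms with a hypothetical block; the block name, the `image_latents` output, and the `vae.encode` call are illustrative, and it assumes `ModularPipelineBlocks`, `InputParam`, and `OutputParam` are importable from `diffusers.modular_pipelines`.

```py
from diffusers.modular_pipelines import InputParam, ModularPipelineBlocks, OutputParam

class EncodeImageBlock(ModularPipelineBlocks):
    @property
    def inputs(self):
        # user-provided value the block reads (and may modify) in its BlockState
        return [InputParam("image", required=True)]

    @property
    def intermediate_outputs(self):
        # new variable written back to PipelineState for later blocks or the user
        return [OutputParam("image_latents")]

    def __call__(self, components, state):
        block_state = self.get_block_state(state)
        block_state.image_latents = components.vae.encode(block_state.image)  # illustrative
        self.set_block_state(state, block_state)
        return components, state
```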


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/modular_diffusers/modular_diffusers_states.md" />

### Normalization layers
https://huggingface.co/docs/diffusers/main/api/normalization.md

# Normalization layers

Customized normalization layers for supporting various models in 🤗 Diffusers.

## AdaLayerNorm[[diffusers.models.normalization.AdaLayerNorm]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.AdaLayerNorm</name><anchor>diffusers.models.normalization.AdaLayerNorm</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L28</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "num_embeddings", "val": ": typing.Optional[int] = None"}, {"name": "output_dim", "val": ": typing.Optional[int] = None"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "chunk_dim", "val": ": int = 0"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **num_embeddings** (`int`, *optional*) -- The size of the embeddings dictionary.
- **output_dim** (`int`, *optional*) --
- **norm_elementwise_affine** (`bool`, defaults to `False`) --
- **norm_eps** (`float`, defaults to `1e-5`) --
- **chunk_dim** (`int`, defaults to `0`) --</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer modified to incorporate timestep embeddings.




</div>

## AdaLayerNormZero[[diffusers.models.normalization.AdaLayerNormZero]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.AdaLayerNormZero</name><anchor>diffusers.models.normalization.AdaLayerNormZero</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L131</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "num_embeddings", "val": ": typing.Optional[int] = None"}, {"name": "norm_type", "val": " = 'layer_norm'"}, {"name": "bias", "val": " = True"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **num_embeddings** (`int`) -- The size of the embeddings dictionary.</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer adaptive layer norm zero (adaLN-Zero).




</div>

## AdaLayerNormSingle[[diffusers.models.normalization.AdaLayerNormSingle]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.AdaLayerNormSingle</name><anchor>diffusers.models.normalization.AdaLayerNormSingle</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L236</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "use_additional_conditions", "val": ": bool = False"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **use_additional_conditions** (`bool`) -- To use additional conditions for normalization or not.</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer adaptive layer norm single (adaLN-single).

As proposed in PixArt-Alpha (see: https://huggingface.co/papers/2310.00426; Section 2.3).




</div>

## AdaGroupNorm[[diffusers.models.normalization.AdaGroupNorm]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.AdaGroupNorm</name><anchor>diffusers.models.normalization.AdaGroupNorm</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L270</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "out_dim", "val": ": int"}, {"name": "num_groups", "val": ": int"}, {"name": "act_fn", "val": ": typing.Optional[str] = None"}, {"name": "eps", "val": ": float = 1e-05"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **num_embeddings** (`int`) -- The size of the embeddings dictionary.
- **num_groups** (`int`) -- The number of groups to separate the channels into.
- **act_fn** (`str`, *optional*, defaults to `None`) -- The activation function to use.
- **eps** (`float`, *optional*, defaults to `1e-5`) -- The epsilon value to use for numerical stability.</paramsdesc><paramgroups>0</paramgroups></docstring>

GroupNorm layer modified to incorporate timestep embeddings.




</div>

## AdaLayerNormContinuous[[diffusers.models.normalization.AdaLayerNormContinuous]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.AdaLayerNormContinuous</name><anchor>diffusers.models.normalization.AdaLayerNormContinuous</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L308</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "conditioning_embedding_dim", "val": ": int"}, {"name": "elementwise_affine", "val": " = True"}, {"name": "eps", "val": " = 1e-05"}, {"name": "bias", "val": " = True"}, {"name": "norm_type", "val": " = 'layer_norm'"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- Embedding dimension to use during projection.
- **conditioning_embedding_dim** (`int`) -- Dimension of the input condition.
- **elementwise_affine** (`bool`, defaults to `True`) --
  Boolean flag to denote if affine transformation should be applied.
- **eps** (`float`, defaults to `1e-5`) -- Epsilon factor.
- **bias** (`bool`, defaults to `True`) -- Boolean flag to denote if bias should be used.
- **norm_type** (`str`, defaults to `"layer_norm"`) --
  Normalization layer to use. Values supported: "layer_norm", "rms_norm".</paramsdesc><paramgroups>0</paramgroups></docstring>

Adaptive normalization layer with a norm layer (layer_norm or rms_norm).




</div>

## RMSNorm[[diffusers.models.normalization.RMSNorm]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.RMSNorm</name><anchor>diffusers.models.normalization.RMSNorm</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L511</source><parameters>[{"name": "dim", "val": ""}, {"name": "eps", "val": ": float"}, {"name": "elementwise_affine", "val": ": bool = True"}, {"name": "bias", "val": ": bool = False"}]</parameters><paramsdesc>- **dim** (`int`) -- Number of dimensions to use for `weights`. Only effective when `elementwise_affine` is True.
- **eps** (`float`) -- Small value to use when calculating the reciprocal of the square-root.
- **elementwise_affine** (`bool`, defaults to `True`) --
  Boolean flag to denote if affine transformation should be applied.
- **bias** (`bool`, defaults to `False`) -- If also training the `bias` param.</paramsdesc><paramgroups>0</paramgroups></docstring>

RMS Norm as introduced in https://huggingface.co/papers/1910.07467 by Zhang et al.




</div>
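
A minimal usage sketch based on the signature above; the tensor dimensions are arbitrary and the import path follows the class path shown here:

```py
import torch
from diffusers.models.normalization import RMSNorm

norm = RMSNorm(dim=64, eps=1e-6, elementwise_affine=True)
hidden_states = torch.randn(2, 16, 64)
normalized = norm(hidden_states)  # output keeps the input shape
```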

## GlobalResponseNorm[[diffusers.models.normalization.GlobalResponseNorm]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.GlobalResponseNorm</name><anchor>diffusers.models.normalization.GlobalResponseNorm</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L601</source><parameters>[{"name": "dim", "val": ""}]</parameters><paramsdesc>- **dim** (`int`) -- Number of dimensions to use for the `gamma` and `beta`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Global response normalization as introduced in ConvNeXt-v2 (https://huggingface.co/papers/2301.00808).




</div>

## LuminaLayerNormContinuous[[diffusers.models.normalization.LuminaLayerNormContinuous]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.LuminaLayerNormContinuous</name><anchor>diffusers.models.normalization.LuminaLayerNormContinuous</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L355</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "conditioning_embedding_dim", "val": ": int"}, {"name": "elementwise_affine", "val": " = True"}, {"name": "eps", "val": " = 1e-05"}, {"name": "bias", "val": " = True"}, {"name": "norm_type", "val": " = 'layer_norm'"}, {"name": "out_dim", "val": ": typing.Optional[int] = None"}]</parameters></docstring>


</div>

## SD35AdaLayerNormZeroX[[diffusers.models.normalization.SD35AdaLayerNormZeroX]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.SD35AdaLayerNormZeroX</name><anchor>diffusers.models.normalization.SD35AdaLayerNormZeroX</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L97</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "norm_type", "val": ": str = 'layer_norm'"}, {"name": "bias", "val": ": bool = True"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **num_embeddings** (`int`) -- The size of the embeddings dictionary.</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer adaptive layer norm zero (AdaLN-Zero).




</div>

## AdaLayerNormZeroSingle[[diffusers.models.normalization.AdaLayerNormZeroSingle]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.AdaLayerNormZeroSingle</name><anchor>diffusers.models.normalization.AdaLayerNormZeroSingle</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L174</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "norm_type", "val": " = 'layer_norm'"}, {"name": "bias", "val": " = True"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **num_embeddings** (`int`) -- The size of the embeddings dictionary.</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer adaptive layer norm zero (adaLN-Zero).




</div>

## LuminaRMSNormZero[[diffusers.models.normalization.LuminaRMSNormZero]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.LuminaRMSNormZero</name><anchor>diffusers.models.normalization.LuminaRMSNormZero</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L206</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "norm_eps", "val": ": float"}, {"name": "norm_elementwise_affine", "val": ": bool"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer adaptive RMS normalization zero.




</div>

## LpNorm[[diffusers.models.normalization.LpNorm]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.LpNorm</name><anchor>diffusers.models.normalization.LpNorm</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L621</source><parameters>[{"name": "p", "val": ": int = 2"}, {"name": "dim", "val": ": int = -1"}, {"name": "eps", "val": ": float = 1e-12"}]</parameters></docstring>


</div>

## CogView3PlusAdaLayerNormZeroTextImage[[diffusers.models.normalization.CogView3PlusAdaLayerNormZeroTextImage]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.CogView3PlusAdaLayerNormZeroTextImage</name><anchor>diffusers.models.normalization.CogView3PlusAdaLayerNormZeroTextImage</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L404</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "dim", "val": ": int"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.
- **num_embeddings** (`int`) -- The size of the embeddings dictionary.</paramsdesc><paramgroups>0</paramgroups></docstring>

Norm layer adaptive layer norm zero (adaLN-Zero).




</div>

## CogVideoXLayerNormZero[[diffusers.models.normalization.CogVideoXLayerNormZero]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.CogVideoXLayerNormZero</name><anchor>diffusers.models.normalization.CogVideoXLayerNormZero</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L449</source><parameters>[{"name": "conditioning_dim", "val": ": int"}, {"name": "embedding_dim", "val": ": int"}, {"name": "elementwise_affine", "val": ": bool = True"}, {"name": "eps", "val": ": float = 1e-05"}, {"name": "bias", "val": ": bool = True"}]</parameters></docstring>


</div>

## MochiRMSNormZero[[diffusers.models.transformers.transformer_mochi.MochiRMSNormZero]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.transformers.transformer_mochi.MochiRMSNormZero</name><anchor>diffusers.models.transformers.transformer_mochi.MochiRMSNormZero</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_mochi.py#L88</source><parameters>[{"name": "embedding_dim", "val": ": int"}, {"name": "hidden_dim", "val": ": int"}, {"name": "eps", "val": ": float = 1e-05"}, {"name": "elementwise_affine", "val": ": bool = False"}]</parameters><paramsdesc>- **embedding_dim** (`int`) -- The size of each embedding vector.</paramsdesc><paramgroups>0</paramgroups></docstring>

Adaptive RMS Norm used in Mochi.




</div>

## MochiRMSNorm[[diffusers.models.normalization.MochiRMSNorm]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.normalization.MochiRMSNorm</name><anchor>diffusers.models.normalization.MochiRMSNorm</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/normalization.py#L573</source><parameters>[{"name": "dim", "val": ""}, {"name": "eps", "val": ": float"}, {"name": "elementwise_affine", "val": ": bool = True"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/normalization.md" />

### Configuration
https://huggingface.co/docs/diffusers/main/api/configuration.md

# Configuration

Schedulers from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and models from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin) inherit from [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin) which stores all the parameters that are passed to their respective `__init__` methods in a JSON-configuration file.

> [!TIP]
> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log in with `hf auth login`.
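
For example, a scheduler's `__init__` arguments can be read back from its `config` attribute after loading (a short sketch reusing the checkpoint from the `from_config` example further down):

```py
from diffusers import DDPMScheduler

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")
# every __init__ parameter is recorded in the JSON config and exposed on .config
print(scheduler.config.num_train_timesteps)
```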

## ConfigMixin[[diffusers.ConfigMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ConfigMixin</name><anchor>diffusers.ConfigMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/configuration_utils.py#L88</source><parameters>[]</parameters></docstring>

Base class for all configuration classes. All configuration parameters are stored under `self.config`. Also
provides the [from_config()](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) and [save_config()](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.save_config) methods for loading, downloading, and
saving classes that inherit from [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin).

Class attributes:
- **config_name** (`str`) -- A filename under which the config should be stored when calling
  [save_config()](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.save_config) (should be overridden by parent class).
- **ignore_for_config** (`List[str]`) -- A list of attributes that should not be saved in the config (should be
  overridden by subclass).
- **has_compatibles** (`bool`) -- Whether the class has compatible classes (should be overridden by subclass).
- **_deprecated_kwargs** (`List[str]`) -- Keyword arguments that are deprecated. Note that the `init` function
  should only have a `kwargs` argument if at least one argument is deprecated (should be overridden by
  subclass).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_config</name><anchor>diffusers.ConfigMixin.load_config</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/configuration_utils.py#L291</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike]"}, {"name": "return_unused_kwargs", "val": " = False"}, {"name": "return_commit_hash", "val": " = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing model weights saved with
    [save_config()](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.save_config).

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  Whether unused keyword arguments of the config are returned.
- **return_commit_hash** (`bool`, *optional*, defaults to `False`) --
  Whether the `commit_hash` of the loaded configuration is returned.</paramsdesc><paramgroups>0</paramgroups><rettype>`dict`</rettype><retdesc>A dictionary of all the parameters stored in a JSON configuration file.</retdesc></docstring>

Load a model or scheduler configuration.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_config</name><anchor>diffusers.ConfigMixin.from_config</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/configuration_utils.py#L190</source><parameters>[{"name": "config", "val": ": typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any]] = None"}, {"name": "return_unused_kwargs", "val": " = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **config** (`Dict[str, Any]`) --
  A config dictionary from which the Python class is instantiated. Make sure to only load configuration
  files of compatible classes.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  Whether kwargs that are not consumed by the Python class should be returned or not.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to update the configuration object (after it is loaded) and initiate the Python class.
  `**kwargs` are passed directly to the underlying scheduler/model's `__init__` method and eventually
  overwrite the same named arguments in `config`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin) or [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)</rettype><retdesc>A model or scheduler object instantiated from a config dictionary.</retdesc></docstring>

Instantiate a Python class from a config dictionary.







<ExampleCodeBlock anchor="diffusers.ConfigMixin.from_config.example">

Examples:

```python
>>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler

>>> # Download scheduler from huggingface.co and cache.
>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")

>>> # Instantiate DDIM scheduler class with same config as DDPM
>>> scheduler = DDIMScheduler.from_config(scheduler.config)

>>> # Instantiate PNDM scheduler class with same config as DDPM
>>> scheduler = PNDMScheduler.from_config(scheduler.config)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_config</name><anchor>diffusers.ConfigMixin.save_config</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/configuration_utils.py#L146</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory where the configuration JSON file is saved (will be created if it does not exist).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
  repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
  namespace).
- **kwargs** (`Dict[str, Any]`, *optional*) --
  Additional keyword arguments passed along to the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save a configuration object to the directory specified in `save_directory` so that it can be reloaded using the
[from_config()](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) class method.
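A minimal round trip, assuming the default `DDPMScheduler` configuration:

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler()
scheduler.save_config("./my_scheduler")  # writes the configuration JSON (scheduler_config.json for schedulers)

# Reload the saved configuration later.
config = DDPMScheduler.load_config("./my_scheduler")
scheduler = DDPMScheduler.from_config(config)
```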




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_json_file</name><anchor>diffusers.ConfigMixin.to_json_file</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/configuration_utils.py#L628</source><parameters>[{"name": "json_file_path", "val": ": typing.Union[str, os.PathLike]"}]</parameters><paramsdesc>- **json_file_path** (`str` or `os.PathLike`) --
  Path to the JSON file to save a configuration instance's parameters.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the configuration instance's parameters to a JSON file.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_json_string</name><anchor>diffusers.ConfigMixin.to_json_string</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/configuration_utils.py#L589</source><parameters>[]</parameters><rettype>`str`</rettype><retdesc>String containing all the attributes that make up the configuration instance in JSON format.</retdesc></docstring>

Serializes the configuration instance to a JSON string.
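A quick illustration of both JSON serializers (the scheduler choice is arbitrary):

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler()
print(scheduler.to_json_string())                # JSON string of the configuration
scheduler.to_json_file("scheduler_config.json")  # same content written to disk
```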






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/configuration.md" />

### Overview
https://huggingface.co/docs/diffusers/main/api/internal_classes_overview.md

# Overview

The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you're interested in building a diffusion model with some custom parts or if you're interested in some of our helper utilities for working with 🤗 Diffusers.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/internal_classes_overview.md" />

### Logging
https://huggingface.co/docs/diffusers/main/api/logging.md

# Logging

🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to `WARNING`.

To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the `INFO` level:

```python
import diffusers

diffusers.logging.set_verbosity_info()
```

You can also use the environment variable `DIFFUSERS_VERBOSITY` to override the default verbosity. You can set it
to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:

```bash
DIFFUSERS_VERBOSITY=error ./myprogram.py
```

Additionally, some `warnings` can be disabled by setting the environment variable
`DIFFUSERS_NO_ADVISORY_WARNINGS` to a true value, like `1`. This disables any warning logged by
`logger.warning_advice`. For example:

```bash
DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```

Here is an example of how to use the same logger as the library in your own module or script:

```python
from diffusers.utils import logging

logging.set_verbosity_info()
logger = logging.get_logger("diffusers")
logger.info("INFO")
logger.warning("WARN")
```


All methods of the logging module are documented below. The main methods are
`logging.get_verbosity` to get the current level of verbosity in the logger and
`logging.set_verbosity` to set the verbosity to the level of your choice.

In order from the least verbose to the most verbose:

|                                                    Method | Integer value |                                         Description |
|----------------------------------------------------------:|--------------:|----------------------------------------------------:|
| `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` |            50 |                only report the most critical errors |
|                                 `diffusers.logging.ERROR` |            40 |                                  only report errors |
|   `diffusers.logging.WARNING` or `diffusers.logging.WARN` |            30 |           only report errors and warnings (default) |
|                                  `diffusers.logging.INFO` |            20 | only report errors, warnings, and basic information |
|                                 `diffusers.logging.DEBUG` |            10 |                              report all information |

By default, `tqdm` progress bars are displayed during model download. `logging.disable_progress_bar` and `logging.enable_progress_bar` are used to enable or disable this behavior.
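For example:

```python
from diffusers.utils import logging

logging.disable_progress_bar()  # hide tqdm bars shown during model downloads
# ... download models without progress output ...
logging.enable_progress_bar()   # restore the default behavior
```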

## Base setters[[diffusers.utils.logging.set_verbosity_error]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.set_verbosity_error</name><anchor>diffusers.utils.logging.set_verbosity_error</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L180</source><parameters>[]</parameters></docstring>
Set the verbosity to the `ERROR` level.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.set_verbosity_warning</name><anchor>diffusers.utils.logging.set_verbosity_warning</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L170</source><parameters>[]</parameters></docstring>
Set the verbosity to the `WARNING` level.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.set_verbosity_info</name><anchor>diffusers.utils.logging.set_verbosity_info</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L165</source><parameters>[]</parameters></docstring>
Set the verbosity to the `INFO` level.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.set_verbosity_debug</name><anchor>diffusers.utils.logging.set_verbosity_debug</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L175</source><parameters>[]</parameters></docstring>
Set the verbosity to the `DEBUG` level.

</div>

## Other functions[[diffusers.utils.logging.get_verbosity]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.get_verbosity</name><anchor>diffusers.utils.logging.get_verbosity</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L126</source><parameters>[]</parameters><rettype>`int`</rettype><retdesc>Logging level integers which can be one of:

- `50`: `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL`
- `40`: `diffusers.logging.ERROR`
- `30`: `diffusers.logging.WARNING` or `diffusers.logging.WARN`
- `20`: `diffusers.logging.INFO`
- `10`: `diffusers.logging.DEBUG`</retdesc></docstring>

Return the current level for the 🤗 Diffusers' root logger as an `int`.






</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.set_verbosity</name><anchor>diffusers.utils.logging.set_verbosity</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L146</source><parameters>[{"name": "verbosity", "val": ": int"}]</parameters><paramsdesc>- **verbosity** (`int`) --
  Logging level which can be one of:

  - `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL`
  - `diffusers.logging.ERROR`
  - `diffusers.logging.WARNING` or `diffusers.logging.WARN`
  - `diffusers.logging.INFO`
  - `diffusers.logging.DEBUG`</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the verbosity level for the 🤗 Diffusers' root logger.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.get_logger</name><anchor>diffusers.utils.get_logger</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L112</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Return a logger with the specified name.

This function is not supposed to be directly accessed unless you are writing a custom diffusers module.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.enable_default_handler</name><anchor>diffusers.utils.logging.enable_default_handler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L194</source><parameters>[]</parameters></docstring>
Enable the default handler of the 🤗 Diffusers' root logger.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.disable_default_handler</name><anchor>diffusers.utils.logging.disable_default_handler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L185</source><parameters>[]</parameters></docstring>
Disable the default handler of the 🤗 Diffusers' root logger.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.enable_explicit_format</name><anchor>diffusers.utils.logging.enable_explicit_format</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L240</source><parameters>[]</parameters></docstring>

<ExampleCodeBlock anchor="diffusers.utils.logging.enable_explicit_format.example">

Enable explicit formatting for every 🤗 Diffusers' logger. The explicit formatter is as follows:
```
[LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
```

</ExampleCodeBlock>
All handlers currently bound to the root logger are affected by this method.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.reset_format</name><anchor>diffusers.utils.logging.reset_format</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L255</source><parameters>[]</parameters></docstring>

Resets the formatting for 🤗 Diffusers' loggers.

All handlers currently bound to the root logger are affected by this method.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.enable_progress_bar</name><anchor>diffusers.utils.logging.enable_progress_bar</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L331</source><parameters>[]</parameters></docstring>
Enable tqdm progress bar.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.logging.disable_progress_bar</name><anchor>diffusers.utils.logging.disable_progress_bar</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/logging.py#L337</source><parameters>[]</parameters></docstring>
Disable tqdm progress bar.

</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/logging.md" />

### Quantization
https://huggingface.co/docs/diffusers/main/api/quantization.md

# Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This enables loading larger models that you normally wouldn't be able to fit into memory and can speed up inference.

> [!TIP]
> Learn how to quantize models in the [Quantization](../quantization/overview) guide.

## PipelineQuantizationConfig[[diffusers.PipelineQuantizationConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PipelineQuantizationConfig</name><anchor>diffusers.PipelineQuantizationConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/pipe_quant_config.py#L33</source><parameters>[{"name": "quant_backend", "val": ": str = None"}, {"name": "quant_kwargs", "val": ": typing.Dict[str, typing.Union[str, float, int, dict]] = None"}, {"name": "components_to_quantize", "val": ": typing.Union[typing.List[str], str, NoneType] = None"}, {"name": "quant_mapping", "val": ": typing.Dict[str, typing.Union[diffusers.quantizers.quantization_config.QuantizationConfigMixin, ForwardRef('TransformersQuantConfigMixin')]] = None"}]</parameters><paramsdesc>- **quant_backend** (`str`) -- Quantization backend to be used. When using this option, we assume that the backend
  is available to both `diffusers` and `transformers`.
- **quant_kwargs** (`dict`) -- Params to initialize the quantization backend class.
- **components_to_quantize** (`list`) -- Components of a pipeline to be quantized.
- **quant_mapping** (`dict`) -- Mapping defining the quantization specs to be used for the pipeline
  components. When using this argument, users are not expected to provide `quant_backend`, `quant_kwargs`,
  and `components_to_quantize`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration class to be used when applying quantization on-the-fly to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).
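A minimal sketch of constructing a pipeline-level quantization config and passing it to `from_pretrained`, assuming the `bitsandbytes` backend is installed; the model id and component names are illustrative:

```python
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```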




</div>

## BitsAndBytesConfig[[diffusers.BitsAndBytesConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.BitsAndBytesConfig</name><anchor>diffusers.BitsAndBytesConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L180</source><parameters>[{"name": "load_in_8bit", "val": " = False"}, {"name": "load_in_4bit", "val": " = False"}, {"name": "llm_int8_threshold", "val": " = 6.0"}, {"name": "llm_int8_skip_modules", "val": " = None"}, {"name": "llm_int8_enable_fp32_cpu_offload", "val": " = False"}, {"name": "llm_int8_has_fp16_weight", "val": " = False"}, {"name": "bnb_4bit_compute_dtype", "val": " = None"}, {"name": "bnb_4bit_quant_type", "val": " = 'fp4'"}, {"name": "bnb_4bit_use_double_quant", "val": " = False"}, {"name": "bnb_4bit_quant_storage", "val": " = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **load_in_8bit** (`bool`, *optional*, defaults to `False`) --
  This flag is used to enable 8-bit quantization with LLM.int8().
- **load_in_4bit** (`bool`, *optional*, defaults to `False`) --
  This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from
  `bitsandbytes`.
- **llm_int8_threshold** (`float`, *optional*, defaults to 6.0) --
  This corresponds to the outlier threshold for outlier detection as described in `LLM.int8() : 8-bit Matrix
  Multiplication for Transformers at Scale` paper: https://huggingface.co/papers/2208.07339 Any hidden states
  value that is above this threshold will be considered an outlier and the operation on those values will be
  done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5],
  but there are some exceptional systematic outliers that are very differently distributed for large models.
  These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of
  magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6,
  but a lower threshold might be needed for more unstable models (small models, fine-tuning).
- **llm_int8_skip_modules** (`List[str]`, *optional*) --
  An explicit list of modules that we do not want to convert to 8-bit. This is useful for models such as
  Jukebox that have several heads in different places and not necessarily at the last position. For example,
  for `CausalLM` models, the last `lm_head` is typically kept in its original `dtype`.
- **llm_int8_enable_fp32_cpu_offload** (`bool`, *optional*, defaults to `False`) --
  This flag is used for advanced use cases and users that are aware of this feature. If you want to split
  your model in different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use
  this flag. This is useful for offloading large models such as `google/flan-t5-xxl`. Note that the int8
  operations will not be run on CPU.
- **llm_int8_has_fp16_weight** (`bool`, *optional*, defaults to `False`) --
  This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not
  have to be converted back and forth for the backward pass.
- **bnb_4bit_compute_dtype** (`torch.dtype` or str, *optional*, defaults to `torch.float32`) --
  This sets the computational type which might be different than the input type. For example, inputs might be
  fp32, but computation can be set to bf16 for speedups.
- **bnb_4bit_quant_type** (`str`,  *optional*, defaults to `"fp4"`) --
  This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are FP4 and NF4 data types
  which are specified by `fp4` or `nf4`.
- **bnb_4bit_use_double_quant** (`bool`, *optional*, defaults to `False`) --
  This flag is used for nested quantization where the quantization constants from the first quantization are
  quantized again.
- **bnb_4bit_quant_storage** (`torch.dtype` or str, *optional*, defaults to `torch.uint8`) --
  This sets the storage type to pack the quantized 4-bit params.
- **kwargs** (`Dict[str, Any]`, *optional*) --
  Additional parameters from which to initialize the configuration object.</paramsdesc><paramgroups>0</paramgroups></docstring>

This is a wrapper class covering all of the attributes and features that can be configured for a model that has
been loaded using `bitsandbytes`.

This replaces `load_in_8bit` and `load_in_4bit`; the two options are mutually exclusive.

Currently only `LLM.int8()`, `FP4`, and `NF4` quantization are supported. If more methods are added to
`bitsandbytes`, then more arguments will be added to this class.
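A hedged sketch of 4-bit NF4 loading for a single model component; the model id is illustrative and `bitsandbytes` must be installed:

```python
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```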





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>is_quantizable</name><anchor>diffusers.BitsAndBytesConfig.is_quantizable</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L359</source><parameters>[]</parameters></docstring>

Returns `True` if the model is quantizable, `False` otherwise.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>post_init</name><anchor>diffusers.BitsAndBytesConfig.post_init</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L322</source><parameters>[]</parameters></docstring>

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>quantization_method</name><anchor>diffusers.BitsAndBytesConfig.quantization_method</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L365</source><parameters>[]</parameters></docstring>

This method returns the quantization method used for the model. If the model is not quantizable, it returns
`None`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_diff_dict</name><anchor>diffusers.BitsAndBytesConfig.to_diff_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L396</source><parameters>[]</parameters><rettype>`Dict[str, Any]`</rettype><retdesc>Dictionary of all the attributes that make up this configuration instance,</retdesc></docstring>

Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.






</div></div>

## GGUFQuantizationConfig[[diffusers.GGUFQuantizationConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.GGUFQuantizationConfig</name><anchor>diffusers.GGUFQuantizationConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L420</source><parameters>[{"name": "compute_dtype", "val": ": typing.Optional[ForwardRef('torch.dtype')] = None"}]</parameters><paramsdesc>- **compute_dtype** (`torch.dtype`, defaults to `torch.float32`) --
  This sets the computational type which might be different than the input type. For example, inputs might be
  fp32, but computation can be set to bf16 for speedups.</paramsdesc><paramgroups>0</paramgroups></docstring>
This is a config class for GGUF Quantization techniques.
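A minimal sketch of loading a GGUF checkpoint through `from_single_file`; the checkpoint URL is illustrative and the `gguf` package must be installed:

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_url = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```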




</div>

## QuantoConfig[[diffusers.QuantoConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QuantoConfig</name><anchor>diffusers.QuantoConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L816</source><parameters>[{"name": "weights_dtype", "val": ": str = 'int8'"}, {"name": "modules_to_not_convert", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **weights_dtype** (`str`, *optional*, defaults to `"int8"`) --
  The target dtype for the weights after quantization. Supported values are ("float8","int8","int4","int2")</paramsdesc><paramgroups>0</paramgroups></docstring>

This is a wrapper class covering all of the attributes and features that can be configured for a model that has
been loaded using `quanto`.



modules_to_not_convert (`list`, *optional*, defaults to `None`):
The list of modules to not quantize, useful for quantizing models that explicitly require having some
modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).
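A short sketch of weight-only int8 quantization with `quanto` (requires `optimum-quanto`; the model id is illustrative):

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

quant_config = QuantoConfig(weights_dtype="int8")
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```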



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>post_init</name><anchor>diffusers.QuantoConfig.post_init</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L841</source><parameters>[]</parameters></docstring>

Safety checker that the arguments are correct.


</div></div>

## TorchAoConfig[[diffusers.TorchAoConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TorchAoConfig</name><anchor>diffusers.TorchAoConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L443</source><parameters>[{"name": "quant_type", "val": ": typing.Union[str, ForwardRef('AOBaseConfig')]"}, {"name": "modules_to_not_convert", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **quant_type** (Union[`str`, AOBaseConfig]) --
  The type of quantization we want to use, currently supporting:
  - **Integer quantization:**
    - Full function names: `int4_weight_only`, `int8_dynamic_activation_int4_weight`,
      `int8_weight_only`, `int8_dynamic_activation_int8_weight`
    - Shorthands: `int4wo`, `int4dq`, `int8wo`, `int8dq`

  - **Floating point 8-bit quantization:**
    - Full function names: `float8_weight_only`, `float8_dynamic_activation_float8_weight`,
      `float8_static_activation_float8_weight`
    - Shorthands: `float8wo`, `float8wo_e5m2`, `float8wo_e4m3`, `float8dq`, `float8dq_e4m3`,
      `float8_e4m3_tensor`, `float8_e4m3_row`,

  - **Floating point X-bit quantization:**
    - Full function names: `fpx_weight_only`
    - Shorthands: `fpX_eAwB`, where `X` is the number of bits (between `1` to `7`), `A` is the number
      of exponent bits and `B` is the number of mantissa bits. The constraint of `X == A + B + 1` must
      be satisfied for a given shorthand notation.

  - **Unsigned Integer quantization:**
    - Full function names: `uintx_weight_only`
    - Shorthands: `uint1wo`, `uint2wo`, `uint3wo`, `uint4wo`, `uint5wo`, `uint6wo`, `uint7wo`
  - An AOBaseConfig instance: for more advanced configuration options.
- **modules_to_not_convert** (`List[str]`, *optional*, defaults to `None`) --
  The list of modules to not quantize, useful for quantizing models that explicitly require to have some
  modules left in their original precision.
- **kwargs** (`Dict[str, Any]`, *optional*) --
  The keyword arguments for the chosen type of quantization, for example, int4_weight_only quantization
  supports two keyword arguments `group_size` and `inner_k_tiles` currently. More API examples and
  documentation of arguments can be found in
  https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques</paramsdesc><paramgroups>0</paramgroups></docstring>
This is a config class for torchao quantization/sparsity techniques.



<ExampleCodeBlock anchor="diffusers.TorchAoConfig.example">

Example:
```python
import torch

from diffusers import FluxTransformer2DModel, TorchAoConfig

# AOBaseConfig-based configuration
from torchao.quantization import Int8WeightOnlyConfig

quantization_config = TorchAoConfig(Int8WeightOnlyConfig())

# String-based config
quantization_config = TorchAoConfig("int8wo")
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/Flux.1-Dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_dict</name><anchor>diffusers.TorchAoConfig.from_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L589</source><parameters>[{"name": "config_dict", "val": ""}, {"name": "return_unused_kwargs", "val": " = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Create configuration from a dictionary.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_apply_tensor_subclass</name><anchor>diffusers.TorchAoConfig.get_apply_tensor_subclass</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L760</source><parameters>[]</parameters></docstring>
Create the appropriate quantization method based on configuration.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>diffusers.TorchAoConfig.to_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/quantization_config.py#L561</source><parameters>[]</parameters></docstring>
Convert configuration to a dictionary.

</div></div>

## DiffusersQuantizer[[diffusers.DiffusersQuantizer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DiffusersQuantizer</name><anchor>diffusers.DiffusersQuantizer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L34</source><parameters>[{"name": "quantization_config", "val": ": QuantizationConfigMixin"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Abstract base class for Hugging Face quantizers. For now it supports quantizing HF diffusers models for inference.
This class is used only for diffusers.models.modeling_utils.ModelMixin.from_pretrained and cannot be
easily used outside the scope of that method yet.

Attributes:

- **quantization_config** (`diffusers.quantizers.quantization_config.QuantizationConfigMixin`) --
  The quantization config that defines the quantization parameters of the model you want to quantize.
- **modules_to_not_convert** (`List[str]`, *optional*) --
  The list of module names to not convert when quantizing the model.
- **required_packages** (`List[str]`, *optional*) --
  The list of required pip packages to install prior to using the quantizer.
- **requires_calibration** (`bool`) --
  Whether the quantization method requires calibrating the model before using it.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>adjust_max_memory</name><anchor>diffusers.DiffusersQuantizer.adjust_max_memory</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L133</source><parameters>[{"name": "max_memory", "val": ": typing.Dict[str, typing.Union[int, str]]"}]</parameters></docstring>
Adjusts the `max_memory` argument for `infer_auto_device_map()` if extra memory is needed for quantization.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>adjust_target_dtype</name><anchor>diffusers.DiffusersQuantizer.adjust_target_dtype</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L91</source><parameters>[{"name": "torch_dtype", "val": ": torch.dtype"}]</parameters><paramsdesc>- **torch_dtype** (`torch.dtype`, *optional*) --
  The torch_dtype that is used to compute the device_map.</paramsdesc><paramgroups>0</paramgroups></docstring>

Override this method if you want to adjust the `target_dtype` variable used in `from_pretrained` to compute the
device_map in case the device_map is a `str`. E.g. for bitsandbytes we force-set `target_dtype` to `torch.int8`
and for 4-bit we pass a custom enum `accelerate.CustomDtype.int4`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>check_if_quantized_param</name><anchor>diffusers.DiffusersQuantizer.check_if_quantized_param</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L137</source><parameters>[{"name": "model", "val": ": ModelMixin"}, {"name": "param_value", "val": ": torch.Tensor"}, {"name": "param_name", "val": ": str"}, {"name": "state_dict", "val": ": typing.Dict[str, typing.Any]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Checks if a loaded `state_dict` component is part of a quantized parameter and performs some validation; only
defined for quantization methods that require creating new parameters for quantization.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>check_quantized_param_shape</name><anchor>diffusers.DiffusersQuantizer.check_quantized_param_shape</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L157</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Checks if the quantized parameter has the expected shape.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create_quantized_param</name><anchor>diffusers.DiffusersQuantizer.create_quantized_param</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L151</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Takes the needed components from the `state_dict` and creates a quantized parameter.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dequantize</name><anchor>diffusers.DiffusersQuantizer.dequantize</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L200</source><parameters>[{"name": "model", "val": ""}]</parameters></docstring>

Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance. Note
that not all quantization schemes support this.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_cuda_warm_up_factor</name><anchor>diffusers.DiffusersQuantizer.get_cuda_warm_up_factor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L212</source><parameters>[]</parameters></docstring>

The factor used in `caching_allocator_warmup` to determine the number of bytes to pre-allocate to warm up CUDA.
A factor of 2 means we allocate all the bytes of the empty model (since we allocate in fp16), a factor of 4 means
we allocate half the memory of the weights residing in the empty model, and so on.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_special_dtypes_update</name><anchor>diffusers.DiffusersQuantizer.get_special_dtypes_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L113</source><parameters>[{"name": "model", "val": ""}, {"name": "torch_dtype", "val": ": torch.dtype"}]</parameters><paramsdesc>- **model** (`~diffusers.models.modeling_utils.ModelMixin`) --
  The model to quantize
- **torch_dtype** (`torch.dtype`) --
  The dtype passed in `from_pretrained` method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Returns the dtypes for modules that are not quantized - used for the computation of the device_map in case one
passes a str as a device_map. The method will use the `modules_to_not_convert` that is modified in
`_process_model_before_weight_loading`. `diffusers` models don't have any `modules_to_not_convert` attributes
yet, but this may change in the future.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>postprocess_model</name><anchor>diffusers.DiffusersQuantizer.postprocess_model</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L187</source><parameters>[{"name": "model", "val": ": ModelMixin"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** (`~diffusers.models.modeling_utils.ModelMixin`) --
  The model to quantize
- **kwargs** (`dict`, *optional*) --
  The keyword arguments that are passed along `_process_model_after_weight_loading`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Post-process the model post weights loading. Make sure to override the abstract method
`_process_model_after_weight_loading`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preprocess_model</name><anchor>diffusers.DiffusersQuantizer.preprocess_model</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L171</source><parameters>[{"name": "model", "val": ": ModelMixin"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model** (`~diffusers.models.modeling_utils.ModelMixin`) --
  The model to quantize
- **kwargs** (`dict`, *optional*) --
  The keyword arguments that are passed along `_process_model_before_weight_loading`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Setting model attributes and/or converting model before weights loading. At this point the model should be
initialized on the meta device so you can freely manipulate the skeleton of the model in order to replace
modules in-place. Make sure to override the abstract method `_process_model_before_weight_loading`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_device_map</name><anchor>diffusers.DiffusersQuantizer.update_device_map</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L79</source><parameters>[{"name": "device_map", "val": ": typing.Optional[typing.Dict[str, typing.Any]]"}]</parameters><paramsdesc>- **device_map** (`Union[dict, str]`, *optional*) --
  The device_map that is passed through the `from_pretrained` method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Override this method if you want to override the existing device map with a new one. E.g. for bitsandbytes, since
`accelerate` is a hard requirement, if no device_map is passed, the device_map is set to `"auto"`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_missing_keys</name><anchor>diffusers.DiffusersQuantizer.update_missing_keys</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L103</source><parameters>[{"name": "model", "val": ""}, {"name": "missing_keys", "val": ": typing.List[str]"}, {"name": "prefix", "val": ": str"}]</parameters><paramsdesc>- **missing_keys** (`List[str]`, *optional*) --
  The list of missing keys in the checkpoint compared to the state dict of the model</paramsdesc><paramgroups>0</paramgroups></docstring>

Override this method if you want to adjust the `missing_keys`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_torch_dtype</name><anchor>diffusers.DiffusersQuantizer.update_torch_dtype</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L68</source><parameters>[{"name": "torch_dtype", "val": ": torch.dtype"}]</parameters><paramsdesc>- **torch_dtype** (`torch.dtype`) --
  The input dtype that is passed in `from_pretrained`</paramsdesc><paramgroups>0</paramgroups></docstring>

Some quantization methods require explicitly setting the dtype of the model to a target dtype. You need to
override this method if you want to make sure that behavior is preserved.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>validate_environment</name><anchor>diffusers.DiffusersQuantizer.validate_environment</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/quantizers/base.py#L163</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

This method is used to check for potential conflicts with the arguments that are passed in `from_pretrained`. You
need to define it for all future quantizers that are integrated with diffusers. If no explicit checks are needed,
simply return nothing.


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/quantization.md" />

### Parallelism
https://huggingface.co/docs/diffusers/main/api/parallel.md

# Parallelism

Parallelism strategies help speed up diffusion transformers by distributing computations across multiple devices, allowing for faster inference and training. Refer to the [Distributed inference](../training/distributed_inference) guide to learn more.

## ParallelConfig[[diffusers.ParallelConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ParallelConfig</name><anchor>diffusers.ParallelConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/_modeling_parallel.py#L108</source><parameters>[{"name": "context_parallel_config", "val": ": typing.Optional[diffusers.models._modeling_parallel.ContextParallelConfig] = None"}, {"name": "_rank", "val": ": int = None"}, {"name": "_world_size", "val": ": int = None"}, {"name": "_device", "val": ": device = None"}, {"name": "_cp_mesh", "val": ": DeviceMesh = None"}]</parameters><paramsdesc>- **context_parallel_config** (`ContextParallelConfig`, *optional*) --
  Configuration for context parallelism.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for applying different parallelisms.




</div>

## ContextParallelConfig[[diffusers.ContextParallelConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ContextParallelConfig</name><anchor>diffusers.ContextParallelConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/_modeling_parallel.py#L41</source><parameters>[{"name": "ring_degree", "val": ": typing.Optional[int] = None"}, {"name": "ulysses_degree", "val": ": typing.Optional[int] = None"}, {"name": "convert_to_fp32", "val": ": bool = True"}, {"name": "rotate_method", "val": ": typing.Literal['allgather', 'alltoall'] = 'allgather'"}, {"name": "_rank", "val": ": int = None"}, {"name": "_world_size", "val": ": int = None"}, {"name": "_device", "val": ": device = None"}, {"name": "_mesh", "val": ": DeviceMesh = None"}, {"name": "_flattened_mesh", "val": ": DeviceMesh = None"}, {"name": "_ring_mesh", "val": ": DeviceMesh = None"}, {"name": "_ulysses_mesh", "val": ": DeviceMesh = None"}, {"name": "_ring_local_rank", "val": ": int = None"}, {"name": "_ulysses_local_rank", "val": ": int = None"}]</parameters><paramsdesc>- **ring_degree** (`int`, *optional*, defaults to `1`) --
  Number of devices to use for ring attention within a context parallel region. Must be a divisor of the
  total number of devices in the context parallel mesh.
- **ulysses_degree** (`int`, *optional*, defaults to `1`) --
  Number of devices to use for ulysses attention within a context parallel region. Must be a divisor of the
  total number of devices in the context parallel mesh.
- **convert_to_fp32** (`bool`, *optional*, defaults to `True`) --
  Whether to convert output and LSE to float32 for ring attention numerical stability.
- **rotate_method** (`str`, *optional*, defaults to `"allgather"`) --
  Method to use for rotating key/value states across devices in ring attention. Currently, only `"allgather"`
  is supported.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for context parallelism.
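A minimal construction sketch; actually running it requires a `torch.distributed` launch with a matching number of ranks, and how the config is attached to a model is covered in the distributed inference guide:

```python
from diffusers import ContextParallelConfig, ParallelConfig

# Split attention across 2 devices using ring attention.
cp_config = ContextParallelConfig(ring_degree=2)
parallel_config = ParallelConfig(context_parallel_config=cp_config)
```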




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.hooks.apply_context_parallel</name><anchor>diffusers.hooks.apply_context_parallel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/context_parallel.py#L78</source><parameters>[{"name": "module", "val": ": Module"}, {"name": "parallel_config", "val": ": ContextParallelConfig"}, {"name": "plan", "val": ": typing.Dict[str, typing.Dict[str, typing.Union[typing.Dict[typing.Union[str, int], typing.Union[diffusers.models._modeling_parallel.ContextParallelInput, typing.List[diffusers.models._modeling_parallel.ContextParallelInput], typing.Tuple[diffusers.models._modeling_parallel.ContextParallelInput, ...]]], diffusers.models._modeling_parallel.ContextParallelOutput, typing.List[diffusers.models._modeling_parallel.ContextParallelOutput], typing.Tuple[diffusers.models._modeling_parallel.ContextParallelOutput, ...]]]]"}]</parameters></docstring>
Apply context parallel on a model.

</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/parallel.md" />

### Activation functions
https://huggingface.co/docs/diffusers/main/api/activations.md

# Activation functions

Customized activation functions for supporting various models in 🤗 Diffusers.

## GELU[[diffusers.models.activations.GELU]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.activations.GELU</name><anchor>diffusers.models.activations.GELU</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/activations.py#L65</source><parameters>[{"name": "dim_in", "val": ": int"}, {"name": "dim_out", "val": ": int"}, {"name": "approximate", "val": ": str = 'none'"}, {"name": "bias", "val": ": bool = True"}]</parameters><paramsdesc>- **dim_in** (`int`) -- The number of channels in the input.
- **dim_out** (`int`) -- The number of channels in the output.
- **approximate** (`str`, *optional*, defaults to `"none"`) -- If `"tanh"`, use tanh approximation.
- **bias** (`bool`, defaults to True) -- Whether to use a bias in the linear layer.</paramsdesc><paramgroups>0</paramgroups></docstring>

GELU activation function, with optional tanh approximation enabled via `approximate="tanh"`.
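A quick sketch of how this module is used: it projects the input to `dim_out` with a linear layer and then applies the activation.

```python
import torch
from diffusers.models.activations import GELU

act = GELU(dim_in=64, dim_out=64, approximate="tanh")
hidden_states = torch.randn(2, 16, 64)
out = act(hidden_states)
print(out.shape)  # torch.Size([2, 16, 64])
```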




</div>

## GEGLU[[diffusers.models.activations.GEGLU]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.activations.GEGLU</name><anchor>diffusers.models.activations.GEGLU</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/activations.py#L93</source><parameters>[{"name": "dim_in", "val": ": int"}, {"name": "dim_out", "val": ": int"}, {"name": "bias", "val": ": bool = True"}]</parameters><paramsdesc>- **dim_in** (`int`) -- The number of channels in the input.
- **dim_out** (`int`) -- The number of channels in the output.
- **bias** (`bool`, defaults to True) -- Whether to use a bias in the linear layer.</paramsdesc><paramgroups>0</paramgroups></docstring>

A [variant](https://huggingface.co/papers/2002.05202) of the gated linear unit activation function.




</div>

## ApproximateGELU[[diffusers.models.activations.ApproximateGELU]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.activations.ApproximateGELU</name><anchor>diffusers.models.activations.ApproximateGELU</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/activations.py#L149</source><parameters>[{"name": "dim_in", "val": ": int"}, {"name": "dim_out", "val": ": int"}, {"name": "bias", "val": ": bool = True"}]</parameters><paramsdesc>- **dim_in** (`int`) -- The number of channels in the input.
- **dim_out** (`int`) -- The number of channels in the output.
- **bias** (`bool`, defaults to True) -- Whether to use a bias in the linear layer.</paramsdesc><paramgroups>0</paramgroups></docstring>

The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this
[paper](https://huggingface.co/papers/1606.08415).




</div>

## SwiGLU[[diffusers.models.activations.SwiGLU]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.activations.SwiGLU</name><anchor>diffusers.models.activations.SwiGLU</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/activations.py#L126</source><parameters>[{"name": "dim_in", "val": ": int"}, {"name": "dim_out", "val": ": int"}, {"name": "bias", "val": ": bool = True"}]</parameters><paramsdesc>- **dim_in** (`int`) -- The number of channels in the input.
- **dim_out** (`int`) -- The number of channels in the output.
- **bias** (`bool`, defaults to True) -- Whether to use a bias in the linear layer.</paramsdesc><paramgroups>0</paramgroups></docstring>

A [variant](https://huggingface.co/papers/2002.05202) of the gated linear unit activation function. It's similar to
`GEGLU` but uses SiLU / Swish instead of GeLU.




</div>

## FP32SiLU[[diffusers.models.activations.FP32SiLU]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.activations.FP32SiLU</name><anchor>diffusers.models.activations.FP32SiLU</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/activations.py#L53</source><parameters>[]</parameters></docstring>

SiLU activation function with input upcasted to torch.float32.


</div>

## LinearActivation[[diffusers.models.activations.LinearActivation]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.activations.LinearActivation</name><anchor>diffusers.models.activations.LinearActivation</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/activations.py#L169</source><parameters>[{"name": "dim_in", "val": ": int"}, {"name": "dim_out", "val": ": int"}, {"name": "bias", "val": ": bool = True"}, {"name": "activation", "val": ": str = 'silu'"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/activations.md" />

### VAE Image Processor
https://huggingface.co/docs/diffusers/main/api/image_processor.md

# VAE Image Processor

The `VaeImageProcessor` provides a unified API for [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline)s to prepare image inputs for VAE encoding and to post-process outputs once they're decoded. This includes transformations such as resizing, normalization, and conversion between PIL Images, PyTorch tensors, and NumPy arrays.

All pipelines with `VaeImageProcessor` accept PIL Images, PyTorch tensors, or NumPy arrays as image inputs and return outputs based on the `output_type` argument specified by the user. You can pass encoded image latents directly to a pipeline and return latents from a pipeline as a specific output with the `output_type` argument (for example `output_type="latent"`). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between them.

## VaeImageProcessor[[diffusers.image_processor.VaeImageProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.image_processor.VaeImageProcessor</name><anchor>diffusers.image_processor.VaeImageProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L88</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "vae_scale_factor", "val": ": int = 8"}, {"name": "vae_latent_channels", "val": ": int = 4"}, {"name": "resample", "val": ": str = 'lanczos'"}, {"name": "reducing_gap", "val": ": int = None"}, {"name": "do_normalize", "val": ": bool = True"}, {"name": "do_binarize", "val": ": bool = False"}, {"name": "do_convert_rgb", "val": ": bool = False"}, {"name": "do_convert_grayscale", "val": ": bool = False"}]</parameters><paramsdesc>- **do_resize** (`bool`, *optional*, defaults to `True`) --
  Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept
  `height` and `width` arguments from [image_processor.VaeImageProcessor.preprocess()](/docs/diffusers/main/en/api/image_processor#diffusers.image_processor.VaeImageProcessor.preprocess) method.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`) --
  VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`) --
  Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `True`) --
  Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `False`) --
  Whether to binarize the image to 0/1.
- **do_convert_rgb** (`bool`, *optional*, defaults to `False`) --
  Whether to convert the images to RGB format.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `False`) --
  Whether to convert the images to grayscale format.</paramsdesc><paramgroups>0</paramgroups></docstring>

Image processor for VAE.
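A minimal sketch of the preprocess/postprocess round trip; the image size and resulting shape are illustrative:

```python
import torch
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor(vae_scale_factor=8)

# PIL image -> normalized tensor in [-1, 1], resized to multiples of vae_scale_factor.
image = Image.new("RGB", (515, 389))  # deliberately not a multiple of 8
tensor = processor.preprocess(image)
print(tensor.shape)  # e.g. torch.Size([1, 3, 384, 512])

# Model-space tensor -> PIL image(s).
pil_images = processor.postprocess(tensor, output_type="pil")
```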





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>apply_overlay</name><anchor>diffusers.image_processor.VaeImageProcessor.apply_overlay</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L794</source><parameters>[{"name": "mask", "val": ": Image"}, {"name": "init_image", "val": ": Image"}, {"name": "image", "val": ": Image"}, {"name": "crop_coords", "val": ": typing.Optional[typing.Tuple[int, int, int, int]] = None"}]</parameters><paramsdesc>- **mask** (`PIL.Image.Image`) --
  The mask image that highlights regions to overlay.
- **init_image** (`PIL.Image.Image`) --
  The original image to which the overlay is applied.
- **image** (`PIL.Image.Image`) --
  The image to overlay onto the original.
- **crop_coords** (`Tuple[int, int, int, int]`, *optional*) --
  Coordinates to crop the image. If provided, the image will be cropped accordingly.</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`</rettype><retdesc>The final image with the overlay applied.</retdesc></docstring>

Applies an overlay of the mask and the inpainted image on the original image.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>binarize</name><anchor>diffusers.image_processor.VaeImageProcessor.binarize</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L529</source><parameters>[{"name": "image", "val": ": Image"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`) --
  The image input, should be a PIL image.</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`</rettype><retdesc>The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1.</retdesc></docstring>

Create a mask.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>blur</name><anchor>diffusers.image_processor.VaeImageProcessor.blur</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L276</source><parameters>[{"name": "image", "val": ": Image"}, {"name": "blur_factor", "val": ": int = 4"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`) --
  The PIL image to blur.
- **blur_factor** (`int`, *optional*, defaults to `4`) --
  The radius of the Gaussian blur to apply.</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`</rettype><retdesc>The blurred PIL image.</retdesc></docstring>

Applies Gaussian blur to an image.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_to_grayscale</name><anchor>diffusers.image_processor.VaeImageProcessor.convert_to_grayscale</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L259</source><parameters>[{"name": "image", "val": ": Image"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`) --
  The input image to convert.</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`</rettype><retdesc>The image converted to grayscale.</retdesc></docstring>

Converts a given PIL image to grayscale.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_to_rgb</name><anchor>diffusers.image_processor.VaeImageProcessor.convert_to_rgb</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L242</source><parameters>[{"name": "image", "val": ": Image"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`) --
  The PIL image to convert to RGB.</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`</rettype><retdesc>The RGB-converted PIL image.</retdesc></docstring>

Converts a PIL image to RGB format.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>denormalize</name><anchor>diffusers.image_processor.VaeImageProcessor.denormalize</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L227</source><parameters>[{"name": "images", "val": ": typing.Union[numpy.ndarray, torch.Tensor]"}]</parameters><paramsdesc>- **images** (`np.ndarray` or `torch.Tensor`) --
  The image array to denormalize.</paramsdesc><paramgroups>0</paramgroups><rettype>`np.ndarray` or `torch.Tensor`</rettype><retdesc>The denormalized image array.</retdesc></docstring>

Denormalize an image array to [0,1].








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_crop_region</name><anchor>diffusers.image_processor.VaeImageProcessor.get_crop_region</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L293</source><parameters>[{"name": "mask_image", "val": ": Image"}, {"name": "width", "val": ": int"}, {"name": "height", "val": ": int"}, {"name": "pad", "val": " = 0"}]</parameters><paramsdesc>- **mask_image** (PIL.Image.Image) -- Mask image.
- **width** (int) -- Width of the image to be processed.
- **height** (int) -- Height of the image to be processed.
- **pad** (int, optional) -- Padding to be added to the crop region. Defaults to 0.</paramsdesc><paramgroups>0</paramgroups><rettype>tuple</rettype><retdesc>(x1, y1, x2, y2) representing a rectangular region that contains all masked areas in an image and
matches the original aspect ratio.</retdesc></docstring>

Finds a rectangular region that contains all masked areas in an image and expands the region to match the aspect
ratio of the original image; for example, if the user drew a mask in a 128x32 region and the dimensions for
processing are 512x512, the region is expanded to 128x128.
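A small sketch of the 128x32 example above; the mask and coordinates are made up for illustration:

```python
from PIL import Image, ImageDraw

from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()

# A hypothetical 512x512 mask with a 128x32 masked rectangle drawn on it.
mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(mask).rectangle([100, 200, 227, 231], fill=255)

# Returns (x1, y1, x2, y2) expanded to match the 512x512 aspect ratio, i.e. a square region.
x1, y1, x2, y2 = processor.get_crop_region(mask, width=512, height=512, pad=0)
```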








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_default_height_width</name><anchor>diffusers.image_processor.VaeImageProcessor.get_default_height_width</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L566</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **image** (`Union[PIL.Image.Image, np.ndarray, torch.Tensor]`) --
  The image input, which can be a PIL image, NumPy array, or PyTorch tensor. If it is a NumPy array, it
  should have shape `[batch, height, width]` or `[batch, height, width, channels]`. If it is a PyTorch
  tensor, it should have shape `[batch, channels, height, width]`.
- **height** (`Optional[int]`, *optional*, defaults to `None`) --
  The height of the preprocessed image. If `None`, the height of the `image` input will be used.
- **width** (`Optional[int]`, *optional*, defaults to `None`) --
  The width of the preprocessed image. If `None`, the width of the `image` input will be used.</paramsdesc><paramgroups>0</paramgroups><rettype>`Tuple[int, int]`</rettype><retdesc>A tuple containing the height and width, both resized to the nearest integer multiple of
`vae_scale_factor`.</retdesc></docstring>

Returns the height and width of the image, downscaled to the next integer multiple of `vae_scale_factor`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>normalize</name><anchor>diffusers.image_processor.VaeImageProcessor.normalize</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L212</source><parameters>[{"name": "images", "val": ": typing.Union[numpy.ndarray, torch.Tensor]"}]</parameters><paramsdesc>- **images** (`np.ndarray` or `torch.Tensor`) --
  The image array to normalize.</paramsdesc><paramgroups>0</paramgroups><rettype>`np.ndarray` or `torch.Tensor`</rettype><retdesc>The normalized image array.</retdesc></docstring>

Normalize an image array to [-1,1].
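A tiny sketch of the `normalize`/`denormalize` pair, both simple elementwise maps:

```python
import torch

from diffusers.image_processor import VaeImageProcessor

images = torch.rand(1, 3, 64, 64)                     # values in [0, 1]
normalized = VaeImageProcessor.normalize(images)      # mapped to [-1, 1]
restored = VaeImageProcessor.denormalize(normalized)  # mapped back to [0, 1]
```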








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>numpy_to_pil</name><anchor>diffusers.image_processor.VaeImageProcessor.numpy_to_pil</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L133</source><parameters>[{"name": "images", "val": ": ndarray"}]</parameters><paramsdesc>- **images** (`np.ndarray`) --
  The image array to convert to PIL format.</paramsdesc><paramgroups>0</paramgroups><rettype>`List[PIL.Image.Image]`</rettype><retdesc>A list of PIL images.</retdesc></docstring>

Convert a numpy image or a batch of images to a PIL image.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>numpy_to_pt</name><anchor>diffusers.image_processor.VaeImageProcessor.numpy_to_pt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L177</source><parameters>[{"name": "images", "val": ": ndarray"}]</parameters><paramsdesc>- **images** (`np.ndarray`) --
  The NumPy image array to convert to PyTorch format.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A PyTorch tensor representation of the images.</retdesc></docstring>

Convert a NumPy image to a PyTorch tensor.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pil_to_numpy</name><anchor>diffusers.image_processor.VaeImageProcessor.pil_to_numpy</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L157</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image]"}]</parameters><paramsdesc>- **images** (`PIL.Image.Image` or `List[PIL.Image.Image]`) --
  The PIL image or list of images to convert to NumPy format.</paramsdesc><paramgroups>0</paramgroups><rettype>`np.ndarray`</rettype><retdesc>A NumPy array representation of the images.</retdesc></docstring>

Convert a PIL image or a list of PIL images to NumPy arrays.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>postprocess</name><anchor>diffusers.image_processor.VaeImageProcessor.postprocess</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L744</source><parameters>[{"name": "image", "val": ": Tensor"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "do_denormalize", "val": ": typing.Optional[typing.List[bool]] = None"}]</parameters><paramsdesc>- **image** (`torch.Tensor`) --
  The image input, should be a pytorch tensor with shape `B x C x H x W`.
- **output_type** (`str`, *optional*, defaults to `pil`) --
  The output type of the image, can be one of `pil`, `np`, `pt`, `latent`.
- **do_denormalize** (`List[bool]`, *optional*, defaults to `None`) --
  Whether to denormalize the image to [0,1]. If `None`, will use the value of `do_normalize` in the
  `VaeImageProcessor` config.</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`</rettype><retdesc>The postprocessed image.</retdesc></docstring>

Postprocess the image output from tensor to `output_type`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preprocess</name><anchor>diffusers.image_processor.VaeImageProcessor.preprocess</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L613</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "resize_mode", "val": ": str = 'default'"}, {"name": "crops_coords", "val": ": typing.Optional[typing.Tuple[int, int, int, int]] = None"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The image input; accepted formats are PIL images, NumPy arrays, and PyTorch tensors. Lists of the
  supported formats are also accepted.
- **height** (`int`, *optional*) --
  The height of the preprocessed image. If `None`, will use `get_default_height_width()` to get the default
  height.
- **width** (`int`, *optional*) --
  The width of the preprocessed image. If `None`, will use `get_default_height_width()` to get the default width.
- **resize_mode** (`str`, *optional*, defaults to `default`) --
  The resize mode, can be one of `default`, `fill`, or `crop`. If `default`, will resize the image to fit within
  the specified width and height, and it may not maintain the original aspect ratio. If `fill`, will
  resize the image to fit within the specified width and height, maintain the aspect ratio, and then
  center the image within the dimensions, filling the empty area with data from the image. If `crop`, will
  resize the image to fit within the specified width and height, maintain the aspect ratio, and then center
  the image within the dimensions, cropping the excess. Note that the `fill` and `crop` modes are only
  supported for PIL image input.
- **crops_coords** (`List[Tuple[int, int, int, int]]`, *optional*, defaults to `None`) --
  The crop coordinates for each image in the batch. If `None`, will not crop the image.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The preprocessed image.</retdesc></docstring>

Preprocess the image input.
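As an example of the resize modes, the following sketch pads a non-square placeholder image to 512x512 with `fill` (PIL input only, as noted above):

```python
from PIL import Image

from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()

image = Image.new("RGB", (640, 480))  # placeholder, non-square input

# Keeps the aspect ratio, centers the image, and fills the borders from the image data.
tensor = processor.preprocess(image, height=512, width=512, resize_mode="fill")
```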








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>pt_to_numpy</name><anchor>diffusers.image_processor.VaeImageProcessor.pt_to_numpy</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L196</source><parameters>[{"name": "images", "val": ": Tensor"}]</parameters><paramsdesc>- **images** (`torch.Tensor`) --
  The PyTorch tensor to convert to NumPy format.</paramsdesc><paramgroups>0</paramgroups><rettype>`np.ndarray`</rettype><retdesc>A NumPy array representation of the images.</retdesc></docstring>

Convert a PyTorch tensor to a NumPy image.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resize</name><anchor>diffusers.image_processor.VaeImageProcessor.resize</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L468</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor]"}, {"name": "height", "val": ": int"}, {"name": "width", "val": ": int"}, {"name": "resize_mode", "val": ": str = 'default'"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`) --
  The image input, can be a PIL image, numpy array or pytorch tensor.
- **height** (`int`) --
  The height to resize to.
- **width** (`int`) --
  The width to resize to.
- **resize_mode** (`str`, *optional*, defaults to `default`) --
  The resize mode to use, can be one of `default`, `fill`, or `crop`. If `default`, will resize the image to fit
  within the specified width and height, and it may not maintain the original aspect ratio. If `fill`,
  will resize the image to fit within the specified width and height, maintain the aspect ratio, and
  then center the image within the dimensions, filling the empty area with data from the image. If `crop`, will
  resize the image to fit within the specified width and height, maintain the aspect ratio, and then center
  the image within the dimensions, cropping the excess. Note that the `fill` and `crop` modes are only
  supported for PIL image input.

Resize image.








</div></div>

## InpaintProcessor[[diffusers.image_processor.InpaintProcessor]]

The `InpaintProcessor` accepts `mask` and `image` inputs and processes them together. Optionally, it can accept `padding_mask_crop` and apply a mask overlay, as sketched below.
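A hedged sketch of the intended flow; the placeholder images are made up, and because the exact return structure of `preprocess()` may differ between versions, the result is kept as a single object here:

```python
from PIL import Image

from diffusers.image_processor import InpaintProcessor

processor = InpaintProcessor()

init_image = Image.new("RGB", (1024, 1024))  # placeholder image
mask = Image.new("L", (1024, 1024), 0)       # placeholder mask

# Processes the image and mask together; pass `padding_mask_crop` to crop around the masked region.
processed = processor.preprocess(init_image, mask=mask, height=1024, width=1024)
```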

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.image_processor.InpaintProcessor</name><anchor>diffusers.image_processor.InpaintProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L842</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "vae_scale_factor", "val": ": int = 8"}, {"name": "vae_latent_channels", "val": ": int = 4"}, {"name": "resample", "val": ": str = 'lanczos'"}, {"name": "reducing_gap", "val": ": int = None"}, {"name": "do_normalize", "val": ": bool = True"}, {"name": "do_binarize", "val": ": bool = False"}, {"name": "do_convert_grayscale", "val": ": bool = False"}, {"name": "mask_do_normalize", "val": ": bool = False"}, {"name": "mask_do_binarize", "val": ": bool = True"}, {"name": "mask_do_convert_grayscale", "val": ": bool = True"}]</parameters></docstring>

Image processor for inpainting image and mask.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>postprocess</name><anchor>diffusers.image_processor.InpaintProcessor.postprocess</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L943</source><parameters>[{"name": "image", "val": ": Tensor"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "original_image", "val": ": typing.Optional[PIL.Image.Image] = None"}, {"name": "original_mask", "val": ": typing.Optional[PIL.Image.Image] = None"}, {"name": "crops_coords", "val": ": typing.Optional[typing.Tuple[int, int, int, int]] = None"}]</parameters></docstring>

Postprocess the image and optionally apply the mask overlay.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preprocess</name><anchor>diffusers.image_processor.InpaintProcessor.preprocess</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L887</source><parameters>[{"name": "image", "val": ": Image"}, {"name": "mask", "val": ": Image = None"}, {"name": "height", "val": ": int = None"}, {"name": "width", "val": ": int = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}]</parameters></docstring>

Preprocess the image and mask.


</div></div>

## VaeImageProcessorLDM3D[[diffusers.image_processor.VaeImageProcessorLDM3D]]

The `VaeImageProcessorLDM3D` accepts RGB and depth inputs and returns RGB and depth outputs.

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.image_processor.VaeImageProcessorLDM3D</name><anchor>diffusers.image_processor.VaeImageProcessorLDM3D</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L973</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "vae_scale_factor", "val": ": int = 8"}, {"name": "resample", "val": ": str = 'lanczos'"}, {"name": "do_normalize", "val": ": bool = True"}]</parameters><paramsdesc>- **do_resize** (`bool`, *optional*, defaults to `True`) --
  Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`) --
  VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`) --
  Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `True`) --
  Whether to normalize the image to [-1,1].</paramsdesc><paramgroups>0</paramgroups></docstring>

Image processor for VAE LDM3D.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>depth_pil_to_numpy</name><anchor>diffusers.image_processor.VaeImageProcessorLDM3D.depth_pil_to_numpy</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1024</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image]"}]</parameters><paramsdesc>- **images** (`Union[List[PIL.Image.Image], PIL.Image.Image]`) --
  The input image or list of images to be converted.</paramsdesc><paramgroups>0</paramgroups><rettype>`np.ndarray`</rettype><retdesc>A NumPy array of the converted images.</retdesc></docstring>

Convert a PIL image or a list of PIL images to NumPy arrays.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>numpy_to_depth</name><anchor>diffusers.image_processor.VaeImageProcessorLDM3D.numpy_to_depth</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1059</source><parameters>[{"name": "images", "val": ": ndarray"}]</parameters><paramsdesc>- **images** (`np.ndarray`) --
  The input NumPy array of depth images, which can be a single image or a batch.</paramsdesc><paramgroups>0</paramgroups><rettype>`List[PIL.Image.Image]`</rettype><retdesc>A list of PIL images converted from the input NumPy depth images.</retdesc></docstring>

Convert a NumPy depth image or a batch of images to a list of PIL images.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>numpy_to_pil</name><anchor>diffusers.image_processor.VaeImageProcessorLDM3D.numpy_to_pil</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1000</source><parameters>[{"name": "images", "val": ": ndarray"}]</parameters><paramsdesc>- **images** (`np.ndarray`) --
  The input NumPy array of images, which can be a single image or a batch.</paramsdesc><paramgroups>0</paramgroups><rettype>`List[PIL.Image.Image]`</rettype><retdesc>A list of PIL images converted from the input NumPy array.</retdesc></docstring>

Convert a NumPy image or a batch of images to a list of PIL images.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>preprocess</name><anchor>diffusers.image_processor.VaeImageProcessorLDM3D.preprocess</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1137</source><parameters>[{"name": "rgb", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, numpy.ndarray]"}, {"name": "depth", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, numpy.ndarray]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "target_res", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **rgb** (`Union[torch.Tensor, PIL.Image.Image, np.ndarray]`) --
  The RGB input image, which can be a single image or a batch.
- **depth** (`Union[torch.Tensor, PIL.Image.Image, np.ndarray]`) --
  The depth input image, which can be a single image or a batch.
- **height** (`Optional[int]`, *optional*, defaults to `None`) --
  The desired height of the processed image. If `None`, defaults to the height of the input image.
- **width** (`Optional[int]`, *optional*, defaults to `None`) --
  The desired width of the processed image. If `None`, defaults to the width of the input image.
- **target_res** (`Optional[int]`, *optional*, defaults to `None`) --
  Target resolution for resizing the images. If specified, overrides height and width.</paramsdesc><paramgroups>0</paramgroups><rettype>`Tuple[torch.Tensor, torch.Tensor]`</rettype><retdesc>A tuple containing the processed RGB and depth images as PyTorch tensors.</retdesc></docstring>

Preprocess the image input. Accepted formats are PIL images, NumPy arrays, or PyTorch tensors.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>rgblike_to_depthmap</name><anchor>diffusers.image_processor.VaeImageProcessorLDM3D.rgblike_to_depthmap</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1044</source><parameters>[{"name": "image", "val": ": typing.Union[numpy.ndarray, torch.Tensor]"}]</parameters><paramsdesc>- **image** (`Union[np.ndarray, torch.Tensor]`) --
  The RGB-like depth image to convert.</paramsdesc><paramgroups>0</paramgroups><rettype>`Union[np.ndarray, torch.Tensor]`</rettype><retdesc>The corresponding depth map.</retdesc></docstring>

Convert an RGB-like depth image to a depth map.








</div></div>

## PixArtImageProcessor[[diffusers.image_processor.PixArtImageProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.image_processor.PixArtImageProcessor</name><anchor>diffusers.image_processor.PixArtImageProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1357</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "vae_scale_factor", "val": ": int = 8"}, {"name": "resample", "val": ": str = 'lanczos'"}, {"name": "do_normalize", "val": ": bool = True"}, {"name": "do_binarize", "val": ": bool = False"}, {"name": "do_convert_grayscale", "val": ": bool = False"}]</parameters><paramsdesc>- **do_resize** (`bool`, *optional*, defaults to `True`) --
  Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept
  `height` and `width` arguments from [image_processor.VaeImageProcessor.preprocess()](/docs/diffusers/main/en/api/image_processor#diffusers.image_processor.VaeImageProcessor.preprocess) method.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`) --
  VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`) --
  Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `True`) --
  Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `False`) --
  Whether to binarize the image to 0/1.
- **do_convert_rgb** (`bool`, *optional*, defaults to `False`) --
  Whether to convert the images to RGB format.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `False`) --
  Whether to convert the images to grayscale format.</paramsdesc><paramgroups>0</paramgroups></docstring>

Image processor for PixArt image resize and crop.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>classify_height_width_bin</name><anchor>diffusers.image_processor.PixArtImageProcessor.classify_height_width_bin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1398</source><parameters>[{"name": "height", "val": ": int"}, {"name": "width", "val": ": int"}, {"name": "ratios", "val": ": dict"}]</parameters><paramsdesc>- **height** (`int`) -- The height of the image.
- **width** (`int`) -- The width of the image.
- **ratios** (`dict`) -- A dictionary where keys are aspect ratios and values are tuples of (height, width).</paramsdesc><paramgroups>0</paramgroups><rettype>`Tuple[int, int]`</rettype><retdesc>The closest binned height and width.</retdesc></docstring>

Returns the binned height and width based on the aspect ratio.
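A small sketch with a hand-built `ratios` dictionary (the PixArt pipelines ship much larger predefined bins); keys are aspect-ratio strings and values are `(height, width)` tuples:

```python
from diffusers.image_processor import PixArtImageProcessor

processor = PixArtImageProcessor()

ratios = {"0.5": (512, 1024), "1.0": (1024, 1024), "2.0": (1024, 512)}

# 768 / 832 ≈ 0.92, so the closest bin is "1.0" -> (1024, 1024).
height, width = processor.classify_height_width_bin(768, 832, ratios=ratios)
```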








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>resize_and_crop_tensor</name><anchor>diffusers.image_processor.PixArtImageProcessor.resize_and_crop_tensor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1416</source><parameters>[{"name": "samples", "val": ": Tensor"}, {"name": "new_width", "val": ": int"}, {"name": "new_height", "val": ": int"}]</parameters><paramsdesc>- **samples** (`torch.Tensor`) --
  A tensor of shape (N, C, H, W) where N is the batch size, C is the number of channels, H is the height,
  and W is the width.
- **new_width** (`int`) -- The desired width of the output images.
- **new_height** (`int`) -- The desired height of the output images.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A tensor containing the resized and cropped images.</retdesc></docstring>

Resizes and crops a tensor of images to the specified dimensions.
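A small sketch with a placeholder batch tensor:

```python
import torch

from diffusers.image_processor import PixArtImageProcessor

processor = PixArtImageProcessor()

samples = torch.rand(2, 3, 1024, 1024)  # placeholder batch of images in [0, 1]

# Resize so the target fits, then center-crop the excess to exactly 768x832.
resized = processor.resize_and_crop_tensor(samples, new_width=832, new_height=768)
```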








</div></div>

## IPAdapterMaskProcessor[[diffusers.image_processor.IPAdapterMaskProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.image_processor.IPAdapterMaskProcessor</name><anchor>diffusers.image_processor.IPAdapterMaskProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1253</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "vae_scale_factor", "val": ": int = 8"}, {"name": "resample", "val": ": str = 'lanczos'"}, {"name": "do_normalize", "val": ": bool = False"}, {"name": "do_binarize", "val": ": bool = True"}, {"name": "do_convert_grayscale", "val": ": bool = True"}]</parameters><paramsdesc>- **do_resize** (`bool`, *optional*, defaults to `True`) --
  Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`) --
  VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`) --
  Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `False`) --
  Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `True`) --
  Whether to binarize the image to 0/1.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `True`) --
  Whether to convert the images to grayscale format.</paramsdesc><paramgroups>0</paramgroups></docstring>

Image processor for IP Adapter image masks.
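A minimal sketch of preparing masks; the placeholder masks stand in for real ones, and the resulting tensor is what an IP-Adapter pipeline expects, typically via `cross_attention_kwargs={"ip_adapter_masks": masks}`:

```python
from PIL import Image

from diffusers.image_processor import IPAdapterMaskProcessor

processor = IPAdapterMaskProcessor()

# Placeholder masks; in practice these mark the regions each IP-Adapter image controls.
mask1 = Image.new("L", (1024, 1024), 255)
mask2 = Image.new("L", (1024, 1024), 0)

# Binarized, grayscale mask tensors resized to the output resolution.
masks = processor.preprocess([mask1, mask2], height=1024, width=1024)
```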





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>downsample</name><anchor>diffusers.image_processor.IPAdapterMaskProcessor.downsample</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1294</source><parameters>[{"name": "mask", "val": ": Tensor"}, {"name": "batch_size", "val": ": int"}, {"name": "num_queries", "val": ": int"}, {"name": "value_embed_dim", "val": ": int"}]</parameters><paramsdesc>- **mask** (`torch.Tensor`) --
  The input mask tensor generated with `IPAdapterMaskProcessor.preprocess()`.
- **batch_size** (`int`) --
  The batch size.
- **num_queries** (`int`) --
  The number of queries.
- **value_embed_dim** (`int`) --
  The dimensionality of the value embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The downsampled mask tensor.</retdesc></docstring>

Downsamples the provided mask tensor to match the expected dimensions for scaled dot-product attention. If the
aspect ratio of the mask does not match the aspect ratio of the output image, a warning is issued.








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/image_processor.md" />

### Utilities
https://huggingface.co/docs/diffusers/main/api/utilities.md

# Utilities

Utility and helper functions for working with 🤗 Diffusers.

## numpy_to_pil[[diffusers.utils.numpy_to_pil]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.numpy_to_pil</name><anchor>diffusers.utils.numpy_to_pil</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/pil_utils.py#L37</source><parameters>[{"name": "images", "val": ""}]</parameters></docstring>

Convert a numpy image or a batch of images to a PIL image.


</div>

## pt_to_pil[[diffusers.utils.pt_to_pil]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.pt_to_pil</name><anchor>diffusers.utils.pt_to_pil</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/pil_utils.py#L27</source><parameters>[{"name": "images", "val": ""}]</parameters></docstring>

Convert a torch image to a PIL image.


</div>

## load_image[[diffusers.utils.load_image]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.load_image</name><anchor>diffusers.utils.load_image</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/loading_utils.py#L14</source><parameters>[{"name": "image", "val": ": typing.Union[str, PIL.Image.Image]"}, {"name": "convert_method", "val": ": typing.Optional[typing.Callable[[PIL.Image.Image], PIL.Image.Image]] = None"}]</parameters><paramsdesc>- **image** (`str` or `PIL.Image.Image`) --
  The image to convert to the PIL Image format.
- **convert_method** (Callable[[PIL.Image.Image], PIL.Image.Image], *optional*) --
  A conversion method to apply to the image after loading it. When set to `None`, the image is converted to
  "RGB".</paramsdesc><paramgroups>0</paramgroups><rettype>`PIL.Image.Image`</rettype><retdesc>A PIL Image.</retdesc></docstring>

Loads `image` to a PIL Image.
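A short sketch; the paths are placeholders for a local file or URL:

```python
from diffusers.utils import load_image

image = load_image("path/to/image.png")

# Optionally transform the image while loading, e.g. force grayscale.
gray = load_image("path/to/image.png", convert_method=lambda img: img.convert("L"))
```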








</div>

## load_video[[diffusers.utils.load_video]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.load_video</name><anchor>diffusers.utils.load_video</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/loading_utils.py#L57</source><parameters>[{"name": "video", "val": ": str"}, {"name": "convert_method", "val": ": typing.Optional[typing.Callable[[typing.List[PIL.Image.Image]], typing.List[PIL.Image.Image]]] = None"}]</parameters><paramsdesc>- **video** (`str`) --
  A URL or Path to a video to convert to a list of PIL Image format.
- **convert_method** (Callable[[List[PIL.Image.Image]], List[PIL.Image.Image]], *optional*) --
  A conversion method to apply to the video after loading it. When set to `None`, the images are converted
  to "RGB".</paramsdesc><paramgroups>0</paramgroups><rettype>`List[PIL.Image.Image]`</rettype><retdesc>The video as a list of PIL images.</retdesc></docstring>

Loads `video` to a list of PIL Image.
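Similarly for videos; the path is a placeholder, and one PIL image is returned per frame:

```python
from diffusers.utils import load_video

frames = load_video("path/to/video.mp4")
```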








</div>

## export_to_gif[[diffusers.utils.export_to_gif]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.export_to_gif</name><anchor>diffusers.utils.export_to_gif</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/export_utils.py#L28</source><parameters>[{"name": "image", "val": ": typing.List[PIL.Image.Image]"}, {"name": "output_gif_path", "val": ": str = None"}, {"name": "fps", "val": ": int = 10"}]</parameters></docstring>


</div>

## export_to_video[[diffusers.utils.export_to_video]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.export_to_video</name><anchor>diffusers.utils.export_to_video</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/export_utils.py#L141</source><parameters>[{"name": "video_frames", "val": ": typing.Union[typing.List[numpy.ndarray], typing.List[PIL.Image.Image]]"}, {"name": "output_video_path", "val": ": str = None"}, {"name": "fps", "val": ": int = 10"}, {"name": "quality", "val": ": float = 5.0"}, {"name": "bitrate", "val": ": typing.Optional[int] = None"}, {"name": "macro_block_size", "val": ": typing.Optional[int] = 16"}]</parameters></docstring>

quality:
Video output quality. Default is 5. Uses variable bit rate. Highest quality is 10, lowest is 0. Set to `None` to
prevent variable bitrate flags being passed to FFMPEG so you can manually specify them using `output_params` instead.
Specifying a fixed bitrate using `bitrate` disables this parameter.

bitrate:
Set a constant bitrate for the video encoding. Default is `None`, which causes the `quality` parameter to be used
instead. Using the `quality` variable bitrate parameter usually yields better quality videos with smaller file sizes
than specifying a fixed bitrate with this parameter.

macro_block_size:
Size constraint for the video. Width and height must be divisible by this number. If they are not, imageio tells
ffmpeg to scale the image up to the next closest size divisible by this number. Most codecs are compatible with a
macroblock size of 16 (the default); some can go smaller (4, 8). To disable this automatic feature, set it to `None`
or 1, but be warned that many players can't decode videos with odd dimensions and some codecs will produce poor
results or fail. See https://en.wikipedia.org/wiki/Macroblock.
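A minimal sketch, assuming `imageio` with ffmpeg support is installed; the dummy float frames in `[0, 1]` stand in for the output of a video pipeline:

```python
import numpy as np

from diffusers.utils import export_to_video

frames = [np.zeros((256, 256, 3), dtype=np.float32) for _ in range(16)]

# Variable bitrate controlled by `quality`; pass `bitrate` instead for a fixed bitrate.
video_path = export_to_video(frames, "generated.mp4", fps=10, quality=7.0)
```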


</div>

## make_image_grid[[diffusers.utils.make_image_grid]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.make_image_grid</name><anchor>diffusers.utils.make_image_grid</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/pil_utils.py#L53</source><parameters>[{"name": "images", "val": ": typing.List[PIL.Image.Image]"}, {"name": "rows", "val": ": int"}, {"name": "cols", "val": ": int"}, {"name": "resize", "val": ": int = None"}]</parameters></docstring>

Prepares a single grid of images. Useful for visualization purposes.
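For example, with placeholder images:

```python
from PIL import Image

from diffusers.utils import make_image_grid

images = [Image.new("RGB", (256, 256)) for _ in range(4)]
grid = make_image_grid(images, rows=2, cols=2)
```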


</div>

## randn_tensor[[diffusers.utils.torch_utils.randn_tensor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.torch_utils.randn_tensor</name><anchor>diffusers.utils.torch_utils.randn_tensor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/torch_utils.py#L146</source><parameters>[{"name": "shape", "val": ": typing.Union[typing.Tuple, typing.List]"}, {"name": "generator", "val": ": typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None"}, {"name": "device", "val": ": typing.Union[str, ForwardRef('torch.device'), NoneType] = None"}, {"name": "dtype", "val": ": typing.Optional[ForwardRef('torch.dtype')] = None"}, {"name": "layout", "val": ": typing.Optional[ForwardRef('torch.layout')] = None"}]</parameters></docstring>
A helper function to create random tensors on the desired `device` with the desired `dtype`. When
passing a list of generators, you can seed each batch element individually. If CPU generators are passed, the tensor
is always created on the CPU and then moved to `device`.
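A small sketch of per-sample seeding; the CUDA device is an assumption, swap in any available device:

```python
import torch

from diffusers.utils.torch_utils import randn_tensor

# One CPU generator per batch element keeps each sample reproducible on its own.
generators = [torch.Generator("cpu").manual_seed(i) for i in range(2)]

latents = randn_tensor(
    (2, 4, 64, 64),
    generator=generators,
    device=torch.device("cuda"),  # assumes a CUDA device is available
    dtype=torch.float16,
)
```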


</div>

## apply_layerwise_casting[[diffusers.hooks.apply_layerwise_casting]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.hooks.apply_layerwise_casting</name><anchor>diffusers.hooks.apply_layerwise_casting</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/layerwise_casting.py#L101</source><parameters>[{"name": "module", "val": ": Module"}, {"name": "storage_dtype", "val": ": dtype"}, {"name": "compute_dtype", "val": ": dtype"}, {"name": "skip_modules_pattern", "val": ": typing.Union[str, typing.Tuple[str, ...]] = 'auto'"}, {"name": "skip_modules_classes", "val": ": typing.Optional[typing.Tuple[typing.Type[torch.nn.modules.module.Module], ...]] = None"}, {"name": "non_blocking", "val": ": bool = False"}]</parameters><paramsdesc>- **module** (`torch.nn.Module`) --
  The module whose leaf modules will be cast to a high precision dtype for computation, and to a low
  precision dtype for storage.
- **storage_dtype** (`torch.dtype`) --
  The dtype to cast the module to before/after the forward pass for storage.
- **compute_dtype** (`torch.dtype`) --
  The dtype to cast the module to during the forward pass for computation.
- **skip_modules_pattern** (`Tuple[str, ...]`, defaults to `"auto"`) --
  A list of patterns to match the names of the modules to skip during the layerwise casting process. If set
  to `"auto"`, the default patterns are used. If set to `None`, no modules are skipped. If set to `None`
  alongside `skip_modules_classes` being `None`, the layerwise casting is applied directly to the module
  instead of its internal submodules.
- **skip_modules_classes** (`Tuple[Type[torch.nn.Module], ...]`, defaults to `None`) --
  A list of module classes to skip during the layerwise casting process.
- **non_blocking** (`bool`, defaults to `False`) --
  If `True`, the weight casting operations are non-blocking.</paramsdesc><paramgroups>0</paramgroups></docstring>

Applies layerwise casting to a given module. The module expected here is a Diffusers ModelMixin but it can be any
nn.Module using diffusers layers or pytorch primitives.

<ExampleCodeBlock anchor="diffusers.hooks.apply_layerwise_casting.example">

Example:

```python
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel
>>> from diffusers.hooks import apply_layerwise_casting

>>> model_id = "THUDM/CogVideoX-5b"  # example checkpoint, mirroring the group offloading example below

>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
...     model_id, subfolder="transformer", torch_dtype=torch.bfloat16
... )

>>> apply_layerwise_casting(
...     transformer,
...     storage_dtype=torch.float8_e4m3fn,
...     compute_dtype=torch.bfloat16,
...     skip_modules_pattern=["patch_embed", "norm", "proj_out"],
...     non_blocking=True,
... )
```

</ExampleCodeBlock>




</div>

## apply_group_offloading[[diffusers.hooks.apply_group_offloading]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.hooks.apply_group_offloading</name><anchor>diffusers.hooks.apply_group_offloading</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/group_offloading.py#L445</source><parameters>[{"name": "module", "val": ": Module"}, {"name": "onload_device", "val": ": typing.Union[str, torch.device]"}, {"name": "offload_device", "val": ": typing.Union[str, torch.device] = device(type='cpu')"}, {"name": "offload_type", "val": ": typing.Union[str, diffusers.hooks.group_offloading.GroupOffloadingType] = 'block_level'"}, {"name": "num_blocks_per_group", "val": ": typing.Optional[int] = None"}, {"name": "non_blocking", "val": ": bool = False"}, {"name": "use_stream", "val": ": bool = False"}, {"name": "record_stream", "val": ": bool = False"}, {"name": "low_cpu_mem_usage", "val": ": bool = False"}, {"name": "offload_to_disk_path", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **module** (`torch.nn.Module`) --
  The module to which group offloading is applied.
- **onload_device** (`torch.device`) --
  The device to which the group of modules are onloaded.
- **offload_device** (`torch.device`, defaults to `torch.device("cpu")`) --
  The device to which the group of modules are offloaded. This should typically be the CPU. Default is CPU.
- **offload_type** (`str` or `GroupOffloadingType`, defaults to "block_level") --
  The type of offloading to be applied. Can be one of "block_level" or "leaf_level". Default is
  "block_level".
- **offload_to_disk_path** (`str`, *optional*, defaults to `None`) --
  The path to the directory where parameters will be offloaded. Setting this option can be useful in
  limited-RAM environments where a reasonable speed-memory trade-off is desired.
- **num_blocks_per_group** (`int`, *optional*) --
  The number of blocks per group when using offload_type="block_level". This is required when using
  offload_type="block_level".
- **non_blocking** (`bool`, defaults to `False`) --
  If True, offloading and onloading is done with non-blocking data transfer.
- **use_stream** (`bool`, defaults to `False`) --
  If True, offloading and onloading is done asynchronously using a CUDA stream. This can be useful for
  overlapping computation and data transfer.
- **record_stream** (`bool`, defaults to `False`) -- When enabled with `use_stream`, it marks the current tensor
  as having been used by this stream. It is faster at the expense of slightly more memory usage. Refer to the
  [PyTorch official docs](https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html) for more
  details.
- **low_cpu_mem_usage** (`bool`, defaults to `False`) --
  If True, the CPU memory usage is minimized by pinning tensors on-the-fly instead of pre-pinning them. This
  option only matters when using streamed CPU offloading (i.e. `use_stream=True`). This can be useful when
  the CPU memory is a bottleneck but may counteract the benefits of using streams.</paramsdesc><paramgroups>0</paramgroups></docstring>

Applies group offloading to the internal layers of a torch.nn.Module. To understand what group offloading is, and
where it is beneficial, we need to first provide some context on how other supported offloading methods work.

Typically, offloading is done at two levels:
- Module-level: In Diffusers, this can be enabled using the `ModelMixin::enable_model_cpu_offload()` method. It
  works by offloading each component of a pipeline to the CPU for storage, and onloading to the accelerator device
  when needed for computation. This method is more memory-efficient than keeping all components on the accelerator,
  but the memory requirements are still quite high. For this method to work, one needs memory equivalent to size of
  the model in runtime dtype + size of largest intermediate activation tensors to be able to complete the forward
  pass.
- Leaf-level: In Diffusers, this can be enabled using the `ModelMixin::enable_sequential_cpu_offload()` method. It
  works by offloading the lowest leaf-level parameters of the computation graph to the CPU for storage, and
  onloading only the leafs to the accelerator device for computation. This uses the lowest amount of accelerator
  memory, but can be slower due to the excessive number of device synchronizations.

Group offloading is a middle ground between the two methods. It works by offloading groups of internal layers
(either `torch.nn.ModuleList` or `torch.nn.Sequential`). This method uses lower memory than module-level
offloading. It is also faster than leaf-level/sequential offloading, as the number of device synchronizations is
reduced.

Another supported feature (for CUDA devices with support for asynchronous data transfer streams) is the ability to
overlap data transfer and computation to reduce the overall execution time compared to sequential offloading. This
is enabled using layer prefetching with streams, i.e., the layer that is to be executed next starts onloading to
the accelerator device while the current layer is being executed - this increases the memory requirements slightly.
Note that this implementation also supports leaf-level offloading but can be made much faster when using streams.



<ExampleCodeBlock anchor="diffusers.hooks.apply_group_offloading.example">

Example:
```python
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel
>>> from diffusers.hooks import apply_group_offloading

>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
...     "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
... )

>>> apply_group_offloading(
...     transformer,
...     onload_device=torch.device("cuda"),
...     offload_device=torch.device("cpu"),
...     offload_type="block_level",
...     num_blocks_per_group=2,
...     use_stream=True,
... )
```

</ExampleCodeBlock>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/utilities.md" />

### Outputs
https://huggingface.co/docs/diffusers/main/api/outputs.md

# Outputs

All model outputs are subclasses of [BaseOutput](/docs/diffusers/main/en/api/outputs#diffusers.utils.BaseOutput), data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries.

For example:

```python
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()
```

The `outputs` object is an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput), which means it has an `images` attribute.

You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get `None`:

```python
outputs.images
outputs["images"]
```

When considering the `outputs` object as a tuple, it only considers the attributes that don't have `None` values.
For instance, retrieving an image by indexing into it returns the tuple `(outputs.images,)`:

```python
outputs[:1]
```

> [!TIP]
> To check a specific pipeline or model output, refer to its corresponding API documentation.

## BaseOutput[[diffusers.utils.BaseOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.utils.BaseOutput</name><anchor>diffusers.utils.BaseOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/outputs.py#L40</source><parameters>""</parameters></docstring>

Base class for all model outputs as a dataclass. Has a `__getitem__` that allows indexing by integer or slice (like a
tuple) or strings (like a dictionary) that will ignore the `None` attributes. Otherwise behaves like a regular
Python dictionary.

> [!WARNING]
> You can't unpack a `BaseOutput` directly. Use the [to_tuple()](/docs/diffusers/main/en/api/outputs#diffusers.utils.BaseOutput.to_tuple) method to convert it to a tuple first.
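For example, building on the pipeline output from above:

```python
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()

# `ImagePipelineOutput` only has `images`, so to_tuple() returns a one-element tuple.
(images,) = outputs.to_tuple()
```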



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_tuple</name><anchor>diffusers.utils.BaseOutput.to_tuple</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/outputs.py#L130</source><parameters>[]</parameters></docstring>

Convert self to a tuple containing all the attributes/keys that are not `None`.


</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

## AudioPipelineOutput[[diffusers.AudioPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioPipelineOutput</name><anchor>diffusers.AudioPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L132</source><parameters>[{"name": "audios", "val": ": ndarray"}]</parameters><paramsdesc>- **audios** (`np.ndarray`) --
  List of denoised audio samples of a NumPy array of shape `(batch_size, num_channels, sample_rate)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for audio pipelines.




</div>

## ImageTextPipelineOutput[[diffusers.ImageTextPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImageTextPipelineOutput</name><anchor>diffusers.ImageTextPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L48</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray, NoneType]"}, {"name": "text", "val": ": typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **text** (`List[str]` or `List[List[str]]`) --
  List of generated text strings of length `batch_size` or a list of list of strings whose outer list has
  length `batch_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for joint image-text pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/outputs.md" />

### Attention Processor
https://huggingface.co/docs/diffusers/main/api/attnprocessor.md

# Attention Processor

An attention processor is a class for applying different types of attention mechanisms.

## AttnProcessor[[diffusers.models.attention_processor.AttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AttnProcessor</name><anchor>diffusers.models.attention_processor.AttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1101</source><parameters>[]</parameters></docstring>

Default processor for performing attention-related computations.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AttnProcessor2_0</name><anchor>diffusers.models.attention_processor.AttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2694</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0).


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AttnAddedKVProcessor</name><anchor>diffusers.models.attention_processor.AttnAddedKVProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1277</source><parameters>[]</parameters></docstring>

Processor for performing attention-related computations with extra learnable key and value matrices for the text
encoder.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AttnAddedKVProcessor2_0</name><anchor>diffusers.models.attention_processor.AttnAddedKVProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1344</source><parameters>[]</parameters></docstring>

Processor for performing scaled dot-product attention (enabled by default if you're using PyTorch 2.0), with extra
learnable key and value matrices for the text encoder.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AttnProcessorNPU</name><anchor>diffusers.models.attention_processor.AttnProcessorNPU</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2580</source><parameters>[]</parameters></docstring>

Processor for implementing flash attention using torch_npu. Torch_npu supports only fp16 and bf16 data types. If
fp32 is used, F.scaled_dot_product_attention will be used for computation, but the acceleration effect on NPU is
not significant.



</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FusedAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FusedAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3666</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). It uses
fused projection layers. For self-attention modules, all projection matrices (i.e., query, key, value) are fused.
For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is currently 🧪 experimental in nature and can change in the future.


</div>

## Allegro[[diffusers.models.attention_processor.AllegroAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AllegroAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.AllegroAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1991</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
used in the Allegro model. It applies a normalization layer and rotary embedding on the query and key vector.


</div>

## AuraFlow[[diffusers.models.attention_processor.AuraFlowAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.AuraFlowAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.AuraFlowAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2085</source><parameters>[]</parameters></docstring>
Attention processor typically used in processing Aura Flow.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2178</source><parameters>[]</parameters></docstring>
Attention processor typically used in processing Aura Flow with fused projections.

</div>

## CogVideoX[[diffusers.models.attention_processor.CogVideoXAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.CogVideoXAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.CogVideoXAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2275</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention for the CogVideoX model. It applies a rotary embedding on
query and key vectors, but does not include spatial normalization.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2344</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention for the CogVideoX model. It applies a rotary embedding on
query and key vectors, but does not include spatial normalization.


</div>

## CrossFrameAttnProcessor[[diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor</name><anchor>diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L62</source><parameters>[{"name": "batch_size", "val": " = 2"}]</parameters><paramsdesc>- **batch_size** -- The actual batch size, not counting the frames.
  For example, when calling the UNet with a single prompt and num_images_per_prompt=1, batch_size should be
  equal to 2 because of classifier-free guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Cross-frame attention processor. Each frame attends to the first frame.




</div>
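The text-to-video zero pipelines usually set this processor for you, but it can also be installed manually on a UNet. A minimal sketch, assuming a Stable Diffusion checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# batch_size=2 accounts for classifier-free guidance with a single prompt.
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
```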

## Custom Diffusion[[diffusers.models.attention_processor.CustomDiffusionAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.CustomDiffusionAttnProcessor</name><anchor>diffusers.models.attention_processor.CustomDiffusionAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1173</source><parameters>[{"name": "train_kv", "val": ": bool = True"}, {"name": "train_q_out", "val": ": bool = True"}, {"name": "hidden_size", "val": ": typing.Optional[int] = None"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = None"}, {"name": "out_bias", "val": ": bool = True"}, {"name": "dropout", "val": ": float = 0.0"}]</parameters><paramsdesc>- **train_kv** (`bool`, defaults to `True`) --
  Whether to newly train the key and value matrices corresponding to the text features.
- **train_q_out** (`bool`, defaults to `True`) --
  Whether to newly train query matrices corresponding to the latent image features.
- **hidden_size** (`int`, *optional*, defaults to `None`) --
  The hidden size of the attention layer.
- **cross_attention_dim** (`int`, *optional*, defaults to `None`) --
  The number of channels in the `encoder_hidden_states`.
- **out_bias** (`bool`, defaults to `True`) --
  Whether to include the bias parameter in `train_q_out`.
- **dropout** (`float`, *optional*, defaults to 0.0) --
  The dropout probability to use.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing attention for the Custom Diffusion method.




</div>
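During Custom Diffusion training a processor like this is typically built per attention layer. The sketch below uses hypothetical dimensions; in practice `hidden_size` and `cross_attention_dim` are read from the UNet config and from the block each layer belongs to:

```python
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor

# Hypothetical dimensions for a single cross-attention layer.
processor = CustomDiffusionAttnProcessor(
    train_kv=True,      # learn new key/value projections for the text features
    train_q_out=False,  # keep the original query/output projections frozen
    hidden_size=320,
    cross_attention_dim=768,
)
```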

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3884</source><parameters>[{"name": "train_kv", "val": ": bool = True"}, {"name": "train_q_out", "val": ": bool = True"}, {"name": "hidden_size", "val": ": typing.Optional[int] = None"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = None"}, {"name": "out_bias", "val": ": bool = True"}, {"name": "dropout", "val": ": float = 0.0"}]</parameters><paramsdesc>- **train_kv** (`bool`, defaults to `True`) --
  Whether to newly train the key and value matrices corresponding to the text features.
- **train_q_out** (`bool`, defaults to `True`) --
  Whether to newly train query matrices corresponding to the latent image features.
- **hidden_size** (`int`, *optional*, defaults to `None`) --
  The hidden size of the attention layer.
- **cross_attention_dim** (`int`, *optional*, defaults to `None`) --
  The number of channels in the `encoder_hidden_states`.
- **out_bias** (`bool`, defaults to `True`) --
  Whether to include the bias parameter in `train_q_out`.
- **dropout** (`float`, *optional*, defaults to 0.0) --
  The dropout probability to use.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled
dot-product attention.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor</name><anchor>diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3768</source><parameters>[{"name": "train_kv", "val": ": bool = True"}, {"name": "train_q_out", "val": ": bool = False"}, {"name": "hidden_size", "val": ": typing.Optional[int] = None"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = None"}, {"name": "out_bias", "val": ": bool = True"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **train_kv** (`bool`, defaults to `True`) --
  Whether to newly train the key and value matrices corresponding to the text features.
- **train_q_out** (`bool`, defaults to `True`) --
  Whether to newly train query matrices corresponding to the latent image features.
- **hidden_size** (`int`, *optional*, defaults to `None`) --
  The hidden size of the attention layer.
- **cross_attention_dim** (`int`, *optional*, defaults to `None`) --
  The number of channels in the `encoder_hidden_states`.
- **out_bias** (`bool`, defaults to `True`) --
  Whether to include the bias parameter in `train_q_out`.
- **dropout** (`float`, *optional*, defaults to 0.0) --
  The dropout probability to use.
- **attention_op** (`Callable`, *optional*, defaults to `None`) --
  The base
  [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to use
  as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best operator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.




</div>

## Flux[[diffusers.models.attention_processor.FluxAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FluxAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FluxAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5503</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FusedFluxAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FusedFluxAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5527</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FluxSingleAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FluxSingleAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5513</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0).


</div>

## Hunyuan[[diffusers.models.attention_processor.HunyuanAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.HunyuanAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.HunyuanAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3122</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3220</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0) with fused
projection layers. This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on
the query and key vectors.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3323</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
This variant of the processor employs [Perturbed Attention Guidance](https://huggingface.co/papers/2403.17377).


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3446</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
This variant of the processor employs [Perturbed Attention Guidance](https://huggingface.co/papers/2403.17377).


</div>

## IdentitySelfAttnProcessor2_0[[diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5041</source><parameters>[]</parameters></docstring>

Processor for implementing PAG using scaled dot-product attention (enabled by default if you're using PyTorch 2.0).
PAG reference: https://huggingface.co/papers/2403.17377


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5140</source><parameters>[]</parameters></docstring>

Processor for implementing PAG using scaled dot-product attention (enabled by default if you're using PyTorch 2.0).
PAG reference: https://huggingface.co/papers/2403.17377


</div>
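These processors are installed automatically when perturbed-attention guidance is enabled on a supported pipeline. A minimal sketch, assuming an SDXL checkpoint:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    torch_dtype=torch.float16,
).to("cuda")

# pag_scale controls the strength of perturbed-attention guidance.
image = pipe("an insect robot preparing a delicious meal", pag_scale=3.0).images[0]
```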

## IP-Adapter[[diffusers.models.attention_processor.IPAdapterAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.IPAdapterAttnProcessor</name><anchor>diffusers.models.attention_processor.IPAdapterAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L4206</source><parameters>[{"name": "hidden_size", "val": ""}, {"name": "cross_attention_dim", "val": " = None"}, {"name": "num_tokens", "val": " = (4,)"}, {"name": "scale", "val": " = 1.0"}]</parameters><paramsdesc>- **hidden_size** (`int`) --
  The hidden size of the attention layer.
- **cross_attention_dim** (`int`) --
  The number of channels in the `encoder_hidden_states`.
- **num_tokens** (`int`, `Tuple[int]` or `List[int]`, defaults to `(4,)`) --
  The context length of the image features.
- **scale** (`float` or `List[float]`, defaults to 1.0) --
  The weight scale of the image prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Attention processor for Multiple IP-Adapters.




</div>
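You rarely construct these processors directly; they are installed when IP-Adapter weights are loaded into a pipeline. A minimal sketch, assuming a Stable Diffusion checkpoint, the `h94/IP-Adapter` weights, and a placeholder reference image:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Loading the adapter swaps the UNet's attention processors for IP-Adapter variants.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

ip_image = load_image("path/to/reference_image.png")  # placeholder path
image = pipe("a polar bear drinking a milkshake", ip_adapter_image=ip_image).images[0]
```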

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.IPAdapterAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.IPAdapterAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L4406</source><parameters>[{"name": "hidden_size", "val": ""}, {"name": "cross_attention_dim", "val": " = None"}, {"name": "num_tokens", "val": " = (4,)"}, {"name": "scale", "val": " = 1.0"}]</parameters><paramsdesc>- **hidden_size** (`int`) --
  The hidden size of the attention layer.
- **cross_attention_dim** (`int`) --
  The number of channels in the `encoder_hidden_states`.
- **num_tokens** (`int`, `Tuple[int]` or `List[int]`, defaults to `(4,)`) --
  The context length of the image features.
- **scale** (`float` or `List[float]`, defaults to 1.0) --
The weight scale of the image prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Attention processor for IP-Adapter for PyTorch 2.0.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L4870</source><parameters>[{"name": "hidden_size", "val": ": int"}, {"name": "ip_hidden_states_dim", "val": ": int"}, {"name": "head_dim", "val": ": int"}, {"name": "timesteps_emb_dim", "val": ": int = 1280"}, {"name": "scale", "val": ": float = 0.5"}]</parameters><paramsdesc>- **hidden_size** (`int`) --
  The number of hidden channels.
- **ip_hidden_states_dim** (`int`) --
  The image feature dimension.
- **head_dim** (`int`) --
  The number of head channels.
- **timesteps_emb_dim** (`int`, defaults to 1280) --
  The number of input channels for timestep embedding.
- **scale** (`float`, defaults to 0.5) --
  IP-Adapter scale.</paramsdesc><paramgroups>0</paramgroups></docstring>

Attention processor for IP-Adapter, typically used for SD3-like self-attention projections with additional
image-based information and timestep embeddings.




</div>

## JointAttnProcessor2_0[[diffusers.models.attention_processor.JointAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.JointAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.JointAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1420</source><parameters>[]</parameters></docstring>
Attention processor typically used for SD3-like self-attention projections.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGJointAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGJointAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1506</source><parameters>[]</parameters></docstring>
Attention processor typically used for SD3-like self-attention projections.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1662</source><parameters>[]</parameters></docstring>
Attention processor typically used for SD3-like self-attention projections.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FusedJointAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FusedJointAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1827</source><parameters>[]</parameters></docstring>
Attention processor typically used for SD3-like self-attention projections.

</div>

## LoRA[[diffusers.models.attention_processor.LoRAAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.LoRAAttnProcessor</name><anchor>diffusers.models.attention_processor.LoRAAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5303</source><parameters>[]</parameters></docstring>

Processor for implementing attention with LoRA.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.LoRAAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.LoRAAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5312</source><parameters>[]</parameters></docstring>

Processor for implementing attention with LoRA (enabled by default if you're using PyTorch 2.0).


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor</name><anchor>diffusers.models.attention_processor.LoRAAttnAddedKVProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5330</source><parameters>[]</parameters></docstring>

Processor for implementing attention with LoRA with extra learnable key and value matrices for the text encoder.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.LoRAXFormersAttnProcessor</name><anchor>diffusers.models.attention_processor.LoRAXFormersAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5321</source><parameters>[]</parameters></docstring>

Processor for implementing attention with LoRA using xFormers.


</div>

## Lumina-T2X[[diffusers.models.attention_processor.LuminaAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.LuminaAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.LuminaAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3570</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
used in the LuminaNextDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.


</div>

## Mochi[[diffusers.models.attention_processor.MochiAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.MochiAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.MochiAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L996</source><parameters>[]</parameters></docstring>
Attention processor used in Mochi.

</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.MochiVaeAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.MochiVaeAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2904</source><parameters>[]</parameters></docstring>

Attention processor used in Mochi VAE.


</div>

## Sana[[diffusers.models.attention_processor.SanaLinearAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.SanaLinearAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.SanaLinearAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5339</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product linear attention.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5243</source><parameters>[]</parameters></docstring>

Processor for implementing multiscale quadratic attention.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5391</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product linear attention.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5446</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product linear attention.


</div>

## Stable Audio[[diffusers.models.attention_processor.StableAudioAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.StableAudioAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.StableAudioAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2989</source><parameters>[]</parameters></docstring>

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
used in the Stable Audio model. It applies rotary embedding on the query and key vectors, and supports MHA, GQA, or MQA.


</div>

## SlicedAttnProcessor[[diffusers.models.attention_processor.SlicedAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.SlicedAttnProcessor</name><anchor>diffusers.models.attention_processor.SlicedAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L3998</source><parameters>[{"name": "slice_size", "val": ": int"}]</parameters><paramsdesc>- **slice_size** (`int`, *optional*) --
  The number of steps to compute attention. Uses as many slices as `attention_head_dim // slice_size`, and
  `attention_head_dim` must be a multiple of the `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing sliced attention.




</div>
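Sliced attention is usually enabled through the pipeline helper rather than by constructing the processor directly. A minimal sketch, assuming a Stable Diffusion checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Computes attention in slices to trade a bit of speed for lower peak memory.
pipe.enable_attention_slicing("auto")  # or an integer slice size, e.g. 4

image = pipe("a watercolor painting of a fox").images[0]
```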

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor</name><anchor>diffusers.models.attention_processor.SlicedAttnAddedKVProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L4085</source><parameters>[{"name": "slice_size", "val": ""}]</parameters><paramsdesc>- **slice_size** (`int`, *optional*) --
  The number of steps to compute attention. Uses as many slices as `attention_head_dim // slice_size`, and
  `attention_head_dim` must be a multiple of the `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.




</div>

## XFormersAttnProcessor[[diffusers.models.attention_processor.XFormersAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.XFormersAttnProcessor</name><anchor>diffusers.models.attention_processor.XFormersAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2486</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*, defaults to `None`) --
  The base
  [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to
  use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best
  operator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing memory efficient attention using xFormers.




</div>
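The xFormers processors are normally installed through the memory-efficient attention helper. A minimal sketch, assuming xFormers is installed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swaps the attention processors for their xFormers memory-efficient variants.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a cozy cabin in a snowy forest").images[0]
```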

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.XFormersAttnAddedKVProcessor</name><anchor>diffusers.models.attention_processor.XFormersAttnAddedKVProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2415</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*, defaults to `None`) --
  The base
  [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to
  use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best
  operator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing memory efficient attention using xFormers.




</div>

## XLAFlashAttnProcessor2_0[[diffusers.models.attention_processor.XLAFlashAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.XLAFlashAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.XLAFlashAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L2788</source><parameters>[{"name": "partition_spec", "val": ": typing.Optional[typing.Tuple[typing.Optional[str], ...]] = None"}]</parameters></docstring>

Processor for implementing scaled dot-product attention with the Pallas flash attention kernel when using `torch_xla`.


</div>

## XFormersJointAttnProcessor[[diffusers.models.attention_processor.XFormersJointAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.XFormersJointAttnProcessor</name><anchor>diffusers.models.attention_processor.XFormersJointAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1906</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*, defaults to `None`) --
  The base
  [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to
  use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best
  operator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Processor for implementing memory efficient attention using xFormers.




</div>

## IPAdapterXFormersAttnProcessor[[diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor</name><anchor>diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L4638</source><parameters>[{"name": "hidden_size", "val": ""}, {"name": "cross_attention_dim", "val": " = None"}, {"name": "num_tokens", "val": " = (4,)"}, {"name": "scale", "val": " = 1.0"}, {"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **hidden_size** (`int`) --
  The hidden size of the attention layer.
- **cross_attention_dim** (`int`) --
  The number of channels in the `encoder_hidden_states`.
- **num_tokens** (`int`, `Tuple[int]` or `List[int]`, defaults to `(4,)`) --
  The context length of the image features.
- **scale** (`float` or `List[float]`, defaults to 1.0) --
The weight scale of the image prompt.
- **attention_op** (`Callable`, *optional*, defaults to `None`) --
  The base
  [operator](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.AttentionOpBase) to
  use as the attention operator. It is recommended to set to `None`, and allow xFormers to choose the best
  operator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Attention processor for IP-Adapter using xFormers.




</div>

## FluxIPAdapterJointAttnProcessor2_0[[diffusers.models.attention_processor.FluxIPAdapterJointAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.FluxIPAdapterJointAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.FluxIPAdapterJointAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5537</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

## XLAFluxFlashAttnProcessor2_0[[diffusers.models.attention_processor.XLAFluxFlashAttnProcessor2_0]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.attention_processor.XLAFluxFlashAttnProcessor2_0</name><anchor>diffusers.models.attention_processor.XLAFluxFlashAttnProcessor2_0</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L5577</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Processor for implementing scaled dot-product attention with the Pallas flash attention kernel when using `torch_xla`.


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/attnprocessor.md" />

### Video Processor
https://huggingface.co/docs/diffusers/main/api/video_processor.md

# Video Processor

The `VideoProcessor` provides a unified API for video pipelines to prepare inputs for VAE encoding and to post-process outputs once they're decoded. The class inherits from `VaeImageProcessor`, so it includes transformations such as resizing, normalization, and conversion between PIL images, PyTorch tensors, and NumPy arrays.

## VideoProcessor[[diffusers.video_processor.VideoProcessor.preprocess_video]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.video_processor.VideoProcessor.preprocess_video</name><anchor>diffusers.video_processor.VideoProcessor.preprocess_video</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/video_processor.py#L28</source><parameters>[{"name": "video", "val": ""}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **video** (`List[PIL.Image]`, `List[List[PIL.Image]]`, `torch.Tensor`, `np.array`, `List[torch.Tensor]`, `List[np.array]`) --
  The input video. It can be one of the following:
  * List of the PIL images.
  * List of list of PIL images.
  * 4D Torch tensors (expected shape for each tensor `(num_frames, num_channels, height, width)`).
  * 4D NumPy arrays (expected shape for each array `(num_frames, height, width, num_channels)`).
  * List of 4D Torch tensors (expected shape for each tensor `(num_frames, num_channels, height,
    width)`).
  * List of 4D NumPy arrays (expected shape for each array `(num_frames, height, width, num_channels)`).
  * 5D NumPy arrays: expected shape for each array `(batch_size, num_frames, height, width,
    num_channels)`.
  * 5D Torch tensors: expected shape for each array `(batch_size, num_frames, num_channels, height,
    width)`.
- **height** (`int`, *optional*, defaults to `None`) --
  The height of the preprocessed video frames. If `None`, `get_default_height_width()` is used to get the
  default height.
- **width** (`int`, *optional*, defaults to `None`) --
  The width of the preprocessed video frames. If `None`, `get_default_height_width()` is used to get the
  default width.</paramsdesc><paramgroups>0</paramgroups></docstring>

Preprocesses input video(s).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.video_processor.VideoProcessor.postprocess_video</name><anchor>diffusers.video_processor.VideoProcessor.postprocess_video</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/video_processor.py#L89</source><parameters>[{"name": "video", "val": ": Tensor"}, {"name": "output_type", "val": ": str = 'np'"}]</parameters><paramsdesc>- **video** (`torch.Tensor`) -- The video as a tensor.
- **output_type** (`str`, defaults to `"np"`) -- Output type of the postprocessed `video` tensor.</paramsdesc><paramgroups>0</paramgroups></docstring>

Converts a video tensor to a list of frames for export.




</div>
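A minimal round-trip sketch: preprocess a list of PIL frames into a 5D tensor suitable for VAE encoding, then convert a decoded tensor back into exportable frames. The random frames and the `vae_scale_factor` value are placeholders:

```python
import numpy as np
from PIL import Image
from diffusers.video_processor import VideoProcessor

video_processor = VideoProcessor(vae_scale_factor=8)

# Placeholder input: 16 random RGB frames.
frames = [
    Image.fromarray(np.random.randint(0, 255, (480, 720, 3), dtype=np.uint8)) for _ in range(16)
]

# Returns a tensor of shape (batch_size, num_channels, num_frames, height, width), scaled to [-1, 1].
video = video_processor.preprocess_video(frames, height=480, width=720)

# After the VAE decodes the latents, convert the video tensor back to frames for export.
frames_out = video_processor.postprocess_video(video, output_type="pil")
```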

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/video_processor.md" />

### Caching methods
https://huggingface.co/docs/diffusers/main/api/cache.md

# Caching methods

Caching methods speed up diffusion transformers by storing and reusing intermediate outputs of specific layers, such as attention and feedforward layers, instead of recomputing them at each inference step.

## CacheMixin[[diffusers.CacheMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CacheMixin</name><anchor>diffusers.CacheMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cache_utils.py#L23</source><parameters>[]</parameters></docstring>

A class for enabling and disabling caching techniques on diffusion models.

Supported caching techniques:
- [Pyramid Attention Broadcast](https://huggingface.co/papers/2408.12588)
- [FasterCache](https://huggingface.co/papers/2410.19355)
- [FirstBlockCache](https://github.com/chengzeyi/ParaAttention/blob/7a266123671b55e7e5a2fe9af3121f07a36afc78/README.md#first-block-cache-our-dynamic-caching)



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cache_context</name><anchor>diffusers.CacheMixin.cache_context</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cache_utils.py#L120</source><parameters>[{"name": "name", "val": ": str"}]</parameters></docstring>
Context manager that provides additional methods for cache management.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_cache</name><anchor>diffusers.CacheMixin.enable_cache</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cache_utils.py#L39</source><parameters>[{"name": "config", "val": ""}]</parameters><paramsdesc>- **config** (`Union[PyramidAttentionBroadcastConfig]`) --
  The configuration for applying the caching technique. Currently supported caching techniques are:
  - [PyramidAttentionBroadcastConfig](/docs/diffusers/main/en/api/cache#diffusers.PyramidAttentionBroadcastConfig)</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable caching techniques on the model.



<ExampleCodeBlock anchor="diffusers.CacheMixin.enable_cache.example">

Example:

```python
>>> import torch
>>> from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

>>> pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> config = PyramidAttentionBroadcastConfig(
...     spatial_attention_block_skip_range=2,
...     spatial_attention_timestep_skip_range=(100, 800),
...     current_timestep_callback=lambda: pipe.current_timestep,
... )
>>> pipe.transformer.enable_cache(config)
```

</ExampleCodeBlock>


</div></div>

## PyramidAttentionBroadcastConfig[[diffusers.PyramidAttentionBroadcastConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PyramidAttentionBroadcastConfig</name><anchor>diffusers.PyramidAttentionBroadcastConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/pyramid_attention_broadcast.py#L40</source><parameters>[{"name": "spatial_attention_block_skip_range", "val": ": typing.Optional[int] = None"}, {"name": "temporal_attention_block_skip_range", "val": ": typing.Optional[int] = None"}, {"name": "cross_attention_block_skip_range", "val": ": typing.Optional[int] = None"}, {"name": "spatial_attention_timestep_skip_range", "val": ": typing.Tuple[int, int] = (100, 800)"}, {"name": "temporal_attention_timestep_skip_range", "val": ": typing.Tuple[int, int] = (100, 800)"}, {"name": "cross_attention_timestep_skip_range", "val": ": typing.Tuple[int, int] = (100, 800)"}, {"name": "spatial_attention_block_identifiers", "val": ": typing.Tuple[str, ...] = ('blocks', 'transformer_blocks', 'single_transformer_blocks', 'layers')"}, {"name": "temporal_attention_block_identifiers", "val": ": typing.Tuple[str, ...] = ('temporal_transformer_blocks',)"}, {"name": "cross_attention_block_identifiers", "val": ": typing.Tuple[str, ...] = ('blocks', 'transformer_blocks', 'layers')"}, {"name": "current_timestep_callback", "val": ": typing.Callable[[], int] = None"}]</parameters><paramsdesc>- **spatial_attention_block_skip_range** (`int`, *optional*, defaults to `None`) --
  The number of times a specific spatial attention broadcast is skipped before computing the attention states
  to re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times (i.e.,
  old attention states will be reused) before computing the new attention states again.
- **temporal_attention_block_skip_range** (`int`, *optional*, defaults to `None`) --
  The number of times a specific temporal attention broadcast is skipped before computing the attention
  states to re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times
  (i.e., old attention states will be reused) before computing the new attention states again.
- **cross_attention_block_skip_range** (`int`, *optional*, defaults to `None`) --
  The number of times a specific cross-attention broadcast is skipped before computing the attention states
  to re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times (i.e.,
  old attention states will be reused) before computing the new attention states again.
- **spatial_attention_timestep_skip_range** (`Tuple[int, int]`, defaults to `(100, 800)`) --
  The range of timesteps to skip in the spatial attention layer. The attention computations will be
  conditionally skipped if the current timestep is within the specified range.
- **temporal_attention_timestep_skip_range** (`Tuple[int, int]`, defaults to `(100, 800)`) --
  The range of timesteps to skip in the temporal attention layer. The attention computations will be
  conditionally skipped if the current timestep is within the specified range.
- **cross_attention_timestep_skip_range** (`Tuple[int, int]`, defaults to `(100, 800)`) --
  The range of timesteps to skip in the cross-attention layer. The attention computations will be
  conditionally skipped if the current timestep is within the specified range.
- **spatial_attention_block_identifiers** (`Tuple[str, ...]`) --
  The identifiers to match against the layer names to determine if the layer is a spatial attention layer.
- **temporal_attention_block_identifiers** (`Tuple[str, ...]`) --
  The identifiers to match against the layer names to determine if the layer is a temporal attention layer.
- **cross_attention_block_identifiers** (`Tuple[str, ...]`) --
  The identifiers to match against the layer names to determine if the layer is a cross-attention layer.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for Pyramid Attention Broadcast.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.apply_pyramid_attention_broadcast</name><anchor>diffusers.apply_pyramid_attention_broadcast</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/pyramid_attention_broadcast.py#L181</source><parameters>[{"name": "module", "val": ": Module"}, {"name": "config", "val": ": PyramidAttentionBroadcastConfig"}]</parameters><paramsdesc>- **module** (`torch.nn.Module`) --
  The module to apply Pyramid Attention Broadcast to.
- **config** (`Optional[PyramidAttentionBroadcastConfig]`, `optional`, defaults to `None`) --
  The configuration to use for Pyramid Attention Broadcast.</paramsdesc><paramgroups>0</paramgroups></docstring>

Apply [Pyramid Attention Broadcast](https://huggingface.co/papers/2408.12588) to a given pipeline.

PAB is an attention approximation method that leverages the similarity of attention states between timesteps to
reduce the computational cost of attention. The key takeaway from the paper is that attention similarity between
timesteps is highest in the cross-attention layers, followed by the temporal and then the spatial layers. Attention
computation can therefore be skipped more frequently in the cross-attention layers than in the temporal and spatial
layers, which speeds up inference when PAB is applied.



<ExampleCodeBlock anchor="diffusers.apply_pyramid_attention_broadcast.example">

Example:

```python
>>> import torch
>>> from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig, apply_pyramid_attention_broadcast
>>> from diffusers.utils import export_to_video

>>> pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> config = PyramidAttentionBroadcastConfig(
...     spatial_attention_block_skip_range=2,
...     spatial_attention_timestep_skip_range=(100, 800),
...     current_timestep_callback=lambda: pipe.current_timestep,
... )
>>> apply_pyramid_attention_broadcast(pipe.transformer, config)
```

</ExampleCodeBlock>


</div>

## FasterCacheConfig[[diffusers.FasterCacheConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FasterCacheConfig</name><anchor>diffusers.FasterCacheConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/faster_cache.py#L50</source><parameters>[{"name": "spatial_attention_block_skip_range", "val": ": int = 2"}, {"name": "temporal_attention_block_skip_range", "val": ": typing.Optional[int] = None"}, {"name": "spatial_attention_timestep_skip_range", "val": ": typing.Tuple[int, int] = (-1, 681)"}, {"name": "temporal_attention_timestep_skip_range", "val": ": typing.Tuple[int, int] = (-1, 681)"}, {"name": "low_frequency_weight_update_timestep_range", "val": ": typing.Tuple[int, int] = (99, 901)"}, {"name": "high_frequency_weight_update_timestep_range", "val": ": typing.Tuple[int, int] = (-1, 301)"}, {"name": "alpha_low_frequency", "val": ": float = 1.1"}, {"name": "alpha_high_frequency", "val": ": float = 1.1"}, {"name": "unconditional_batch_skip_range", "val": ": int = 5"}, {"name": "unconditional_batch_timestep_skip_range", "val": ": typing.Tuple[int, int] = (-1, 641)"}, {"name": "spatial_attention_block_identifiers", "val": ": typing.Tuple[str, ...] = ('^blocks.*attn', '^transformer_blocks.*attn', '^single_transformer_blocks.*attn')"}, {"name": "temporal_attention_block_identifiers", "val": ": typing.Tuple[str, ...] = ('^temporal_transformer_blocks.*attn',)"}, {"name": "attention_weight_callback", "val": ": typing.Callable[[torch.nn.modules.module.Module], float] = None"}, {"name": "low_frequency_weight_callback", "val": ": typing.Callable[[torch.nn.modules.module.Module], float] = None"}, {"name": "high_frequency_weight_callback", "val": ": typing.Callable[[torch.nn.modules.module.Module], float] = None"}, {"name": "tensor_format", "val": ": str = 'BCFHW'"}, {"name": "is_guidance_distilled", "val": ": bool = False"}, {"name": "current_timestep_callback", "val": ": typing.Callable[[], int] = None"}, {"name": "_unconditional_conditional_input_kwargs_identifiers", "val": ": typing.List[str] = ('hidden_states', 'encoder_hidden_states', 'timestep', 'attention_mask', 'encoder_attention_mask')"}]</parameters><paramsdesc>- **spatial_attention_block_skip_range** (`int`, defaults to `2`) --
  Calculate the attention states every `N` iterations. If this is set to `N`, the attention computation will
  be skipped `N - 1` times (i.e., cached attention states will be reused) before computing the new attention
  states again.
- **temporal_attention_block_skip_range** (`int`, *optional*, defaults to `None`) --
  Calculate the attention states every `N` iterations. If this is set to `N`, the attention computation will
  be skipped `N - 1` times (i.e., cached attention states will be reused) before computing the new attention
  states again.
- **spatial_attention_timestep_skip_range** (`Tuple[float, float]`, defaults to `(-1, 681)`) --
  The timestep range within which the spatial attention computation can be skipped without a significant loss
  in quality. This is to be determined by the user based on the underlying model. The first value in the
  tuple is the lower bound and the second value is the upper bound. Typically, diffusion timesteps for
  denoising are in the reversed range of 0 to 1000 (i.e. denoising starts at timestep 1000 and ends at
  timestep 0). For the default values, this would mean that the spatial attention computation skipping will
  be applicable only after denoising timestep 681 is reached, and continue until the end of the denoising
  process.
- **temporal_attention_timestep_skip_range** (`Tuple[float, float]`, defaults to `(-1, 681)`) --
  The timestep range within which the temporal attention computation can be skipped without a significant
  loss in quality. This is to be determined by the user based on the underlying model. The first value in the
  tuple is the lower bound and the second value is the upper bound. Typically, diffusion timesteps for
  denoising are in the reversed range of 0 to 1000 (i.e. denoising starts at timestep 1000 and ends at
  timestep 0).
- **low_frequency_weight_update_timestep_range** (`Tuple[int, int]`, defaults to `(99, 901)`) --
  The timestep range within which the low frequency weight scaling update is applied. The first value in the
  tuple is the lower bound and the second value is the upper bound of the timestep range. The callback
  function for the update is called only within this range.
- **high_frequency_weight_update_timestep_range** (`Tuple[int, int]`, defaults to `(-1, 301)`) --
  The timestep range within which the high frequency weight scaling update is applied. The first value in the
  tuple is the lower bound and the second value is the upper bound of the timestep range. The callback
  function for the update is called only within this range.
- **alpha_low_frequency** (`float`, defaults to `1.1`) --
  The weight to scale the low frequency updates by. This is used to approximate the unconditional branch from
  the conditional branch outputs.
- **alpha_high_frequency** (`float`, defaults to `1.1`) --
  The weight to scale the high frequency updates by. This is used to approximate the unconditional branch
  from the conditional branch outputs.
- **unconditional_batch_skip_range** (`int`, defaults to `5`) --
  Process the unconditional branch every `N` iterations. If this is set to `N`, the unconditional branch
  computation will be skipped `N - 1` times (i.e., cached unconditional branch states will be reused) before
  computing the new unconditional branch states again.
- **unconditional_batch_timestep_skip_range** (`Tuple[float, float]`, defaults to `(-1, 641)`) --
  The timestep range within which the unconditional branch computation can be skipped without a significant
  loss in quality. This is to be determined by the user based on the underlying model. The first value in the
  tuple is the lower bound and the second value is the upper bound.
- **spatial_attention_block_identifiers** (`Tuple[str, ...]`, defaults to `("^blocks.*attn", "^transformer_blocks.*attn", "^single_transformer_blocks.*attn")`) --
  The identifiers to match the spatial attention blocks in the model. If the name of the block contains any
  of these identifiers, FasterCache will be applied to that block. This can either be the full layer names,
  partial layer names, or regex patterns. Matching will always be done using a regex match.
- **temporal_attention_block_identifiers** (`Tuple[str, ...]`, defaults to `("^temporal_transformer_blocks.*attn",)`) --
  The identifiers to match the temporal attention blocks in the model. If the name of the block contains any
  of these identifiers, FasterCache will be applied to that block. This can either be the full layer names,
  partial layer names, or regex patterns. Matching will always be done using a regex match.
- **attention_weight_callback** (`Callable[[torch.nn.Module], float]`, defaults to `None`) --
  The callback function to determine the weight to scale the attention outputs by. This function should take
  the attention module as input and return a float value. This is used to approximate the unconditional
  branch from the conditional branch outputs. If not provided, the default weight is 0.5 for all timesteps.
  Typically, as described in the paper, this weight should gradually increase from 0 to 1 as the inference
  progresses. Users are encouraged to experiment and provide custom weight schedules that take into account
  the number of inference steps and underlying model behaviour as denoising progresses.
- **low_frequency_weight_callback** (`Callable[[torch.nn.Module], float]`, defaults to `None`) --
  The callback function to determine the weight to scale the low frequency updates by. If not provided, the
  default weight is 1.1 for timesteps within the range specified (as described in the paper).
- **high_frequency_weight_callback** (`Callable[[torch.nn.Module], float]`, defaults to `None`) --
  The callback function to determine the weight to scale the high frequency updates by. If not provided, the
  default weight is 1.1 for timesteps within the range specified (as described in the paper).
- **tensor_format** (`str`, defaults to `"BCFHW"`) --
  The format of the input tensors. This should be one of `"BCFHW"`, `"BFCHW"`, or `"BCHW"`. The format is
  used to split individual latent frames in order for low and high frequency components to be computed.
- **is_guidance_distilled** (`bool`, defaults to `False`) --
  Whether the model is guidance distilled or not. If the model is guidance distilled, FasterCache will not be
  applied at the denoiser-level to skip the unconditional branch computation (as there is none).
- **_unconditional_conditional_input_kwargs_identifiers** (`List[str]`, defaults to `("hidden_states", "encoder_hidden_states", "timestep", "attention_mask", "encoder_attention_mask")`) --
  The identifiers to match the input kwargs that contain the batchwise-concatenated unconditional and
  conditional inputs. If the name of the input kwargs contains any of these identifiers, FasterCache will
  split the inputs into unconditional and conditional branches. This must be a list of exact input kwargs
  names that contain the batchwise-concatenated unconditional and conditional inputs.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for [FasterCache](https://huggingface.co/papers/2410.19355).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.apply_faster_cache</name><anchor>diffusers.apply_faster_cache</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/faster_cache.py#L486</source><parameters>[{"name": "module", "val": ": Module"}, {"name": "config", "val": ": FasterCacheConfig"}]</parameters><paramsdesc>- **module** (`torch.nn.Module`) --
  The pytorch module to apply FasterCache to. Typically, this should be a transformer architecture supported
  in Diffusers, such as `CogVideoXTransformer3DModel`, but external implementations may also work.
- **config** (`FasterCacheConfig`) --
  The configuration to use for FasterCache.</paramsdesc><paramgroups>0</paramgroups></docstring>

Applies [FasterCache](https://huggingface.co/papers/2410.19355) to a given pipeline.



<ExampleCodeBlock anchor="diffusers.apply_faster_cache.example">

Example:
```python
>>> import torch
>>> from diffusers import CogVideoXPipeline, FasterCacheConfig, apply_faster_cache

>>> pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> config = FasterCacheConfig(
...     spatial_attention_block_skip_range=2,
...     spatial_attention_timestep_skip_range=(-1, 681),
...     low_frequency_weight_update_timestep_range=(99, 641),
...     high_frequency_weight_update_timestep_range=(-1, 301),
...     spatial_attention_block_identifiers=["transformer_blocks"],
...     attention_weight_callback=lambda _: 0.3,
...     tensor_format="BFCHW",
... )
>>> apply_faster_cache(pipe.transformer, config)
```

</ExampleCodeBlock>


</div>
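
As an alternative entry point, the same hooks can be enabled through the model's `enable_cache` helper. The sketch below assumes the loaded transformer inherits Diffusers' caching mixin and simply reuses the configuration from the example above; if it does not, fall back to `apply_faster_cache`:

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(-1, 681),
    low_frequency_weight_update_timestep_range=(99, 641),
    high_frequency_weight_update_timestep_range=(-1, 301),
    spatial_attention_block_identifiers=["transformer_blocks"],
    attention_weight_callback=lambda _: 0.3,
    tensor_format="BFCHW",
)

# Assumes the transformer exposes the caching-mixin helper; otherwise use
# apply_faster_cache(pipe.transformer, config) as shown in the example above.
pipe.transformer.enable_cache(config)
```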

### FirstBlockCacheConfig[[diffusers.FirstBlockCacheConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FirstBlockCacheConfig</name><anchor>diffusers.FirstBlockCacheConfig</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/first_block_cache.py#L34</source><parameters>[{"name": "threshold", "val": ": float = 0.05"}]</parameters><paramsdesc>- **threshold** (`float`, defaults to `0.05`) --
  The threshold to determine whether or not a forward pass through all layers of the model is required. A
  higher threshold usually results in a forward pass through a lower number of layers and faster inference,
  but might lead to poorer generation quality. A lower threshold may not result in significant generation
  speedup. The threshold is compared against the absmean difference of the residuals between the current and
  cached outputs from the first transformer block. If the difference is below the threshold, the forward pass
  is skipped.</paramsdesc><paramgroups>0</paramgroups></docstring>

Configuration for [First Block
Cache](https://github.com/chengzeyi/ParaAttention/blob/7a266123671b55e7e5a2fe9af3121f07a36afc78/README.md#first-block-cache-our-dynamic-caching).




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.apply_first_block_cache</name><anchor>diffusers.apply_first_block_cache</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/hooks/first_block_cache.py#L194</source><parameters>[{"name": "module", "val": ": Module"}, {"name": "config", "val": ": FirstBlockCacheConfig"}]</parameters><paramsdesc>- **module** (`torch.nn.Module`) --
  The pytorch module to apply FBCache to. Typically, this should be a transformer architecture supported in
  Diffusers, such as `CogVideoXTransformer3DModel`, but external implementations may also work.
- **config** (`FirstBlockCacheConfig`) --
  The configuration to use for applying the FBCache method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Applies [First Block
Cache](https://github.com/chengzeyi/ParaAttention/blob/4de137c5b96416489f06e43e19f2c14a772e28fd/README.md#first-block-cache-our-dynamic-caching)
to a given module.

First Block Cache builds on the ideas of [TeaCache](https://huggingface.co/papers/2411.19108). It is much simpler
to implement generically for a wide range of models and has been integrated first for experimental purposes.



<ExampleCodeBlock anchor="diffusers.apply_first_block_cache.example">

Example:
```python
>>> import torch
>>> from diffusers import CogView4Pipeline
>>> from diffusers.hooks import apply_first_block_cache, FirstBlockCacheConfig

>>> pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> apply_first_block_cache(pipe.transformer, FirstBlockCacheConfig(threshold=0.2))

>>> prompt = "A photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, generator=torch.Generator().manual_seed(42)).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>


</div>
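
Because the `threshold` trades generation quality against speed, it can help to compare a few values before settling on one. A rough, hypothetical sweep (the threshold values are illustrative; the pipeline is reloaded per run so each value starts from an un-hooked transformer):

```python
import torch
from diffusers import CogView4Pipeline
from diffusers.hooks import apply_first_block_cache, FirstBlockCacheConfig

prompt = "A photo of an astronaut riding a horse on mars"

for threshold in (0.05, 0.1, 0.2):
    # Reload the pipeline so caching hooks from the previous run do not carry over.
    pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
    pipe.to("cuda")
    apply_first_block_cache(pipe.transformer, FirstBlockCacheConfig(threshold=threshold))

    image = pipe(prompt, generator=torch.Generator().manual_seed(42)).images[0]
    # Compare quality and wall-clock time across the saved outputs.
    image.save(f"output_threshold_{threshold}.png")
```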

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/cache.md" />

### Attend-and-Excite
https://huggingface.co/docs/diffusers/main/api/pipelines/attend_and_excite.md

# Attend-and-Excite

Attend-and-Excite for Stable Diffusion was proposed in [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://attendandexcite.github.io/Attend-and-Excite/) and provides textual attention control over image generation.

The abstract from the paper is:

*Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.*

You can find additional information about Attend-and-Excite on the [project page](https://attendandexcite.github.io/Attend-and-Excite/), the [original codebase](https://github.com/AttendAndExcite/Attend-and-Excite), or try it out in a [demo](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
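
For instance, swapping in a different scheduler for this pipeline follows the usual Diffusers pattern; a minimal sketch (the scheduler choice below is only an example):

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Rebuild a different scheduler from the current scheduler's config and swap it in.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```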

## StableDiffusionAttendAndExcitePipeline[[diffusers.StableDiffusionAttendAndExcitePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionAttendAndExcitePipeline</name><anchor>diffusers.StableDiffusionAttendAndExcitePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_attend_and_excite/pipeline_stable_diffusion_attend_and_excite.py#L182</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
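
As a sketch, loading a textual inversion embedding into this pipeline might look like the following (the embedding repository and its trigger token are illustrative):

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load a learned concept; its trigger token (e.g. "<cat-toy>") can then be used in prompts.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
```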





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionAttendAndExcitePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_attend_and_excite/pipeline_stable_diffusion_attend_and_excite.py#L749</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "token_indices", "val": ": typing.Union[typing.List[int], typing.List[typing.List[int]]]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "max_iter_to_alter", "val": ": int = 25"}, {"name": "thresholds", "val": ": dict = {0: 0.05, 10: 0.5, 20: 0.8}"}, {"name": "scale_factor", "val": ": int = 20"}, {"name": "attn_res", "val": ": typing.Optional[typing.Tuple[int]] = (16, 16)"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **token_indices** (`List[int]`) --
  The token indices to alter with attend-and-excite.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **max_iter_to_alter** (`int`, *optional*, defaults to `25`) --
  Number of denoising steps to apply attend-and-excite. The `max_iter_to_alter` denoising steps are when
  attend-and-excite is applied. For example, if `max_iter_to_alter` is `25` and there are a total of `30`
  denoising steps, the first `25` denoising steps apply attend-and-excite and the last `5` will not.
- **thresholds** (`dict`, *optional*, defaults to `{0: 0.05, 10: 0.5, 20: 0.8}`) --
  Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in.
- **scale_factor** (`int`, *optional*, defaults to 20) --
  Scale factor to control the step size of each attend-and-excite update.
- **attn_res** (`tuple`, *optional*, defaults to a value computed from `width` and `height`) --
  The 2D resolution of the semantic attention map.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionAttendAndExcitePipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionAttendAndExcitePipeline

>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
... ).to("cuda")


>>> prompt = "a cat and a frog"

>>> # use get_indices function to find out indices of the tokens you want to alter
>>> pipe.get_indices(prompt)
{0: '<|startoftext|>', 1: 'a</w>', 2: 'cat</w>', 3: 'and</w>', 4: 'a</w>', 5: 'frog</w>', 6: '<|endoftext|>'}

>>> token_indices = [2, 5]
>>> seed = 6141
>>> generator = torch.Generator("cuda").manual_seed(seed)

>>> images = pipe(
...     prompt=prompt,
...     token_indices=token_indices,
...     guidance_scale=7.5,
...     generator=generator,
...     num_inference_steps=50,
...     max_iter_to_alter=25,
... ).images

>>> image = images[0]
>>> image.save(f"../images/{prompt}_{seed}.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionAttendAndExcitePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_attend_and_excite/pipeline_stable_diffusion_attend_and_excite.py#L296</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
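
A minimal sketch of calling `encode_prompt` directly, assuming it returns a `(prompt_embeds, negative_prompt_embeds)` tuple as in other Stable Diffusion pipelines:

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a cat and a frog",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)
# The embeddings can then be passed to the pipeline through the
# `prompt_embeds` / `negative_prompt_embeds` arguments instead of raw text.
```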




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_indices</name><anchor>diffusers.StableDiffusionAttendAndExcitePipeline.get_indices</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_attend_and_excite/pipeline_stable_diffusion_attend_and_excite.py#L743</source><parameters>[{"name": "prompt", "val": ": str"}]</parameters></docstring>
Utility function to list the indices of the tokens you wish to alter.

</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/attend_and_excite.md" />

### Value-guided planning
https://huggingface.co/docs/diffusers/main/api/pipelines/value_guided_sampling.md

# Value-guided planning

> [!WARNING]
> 🧪 This is an experimental pipeline for reinforcement learning!

This pipeline is based on the [Planning with Diffusion for Flexible Behavior Synthesis](https://huggingface.co/papers/2205.09991) paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine.

The abstract from the paper is:

*Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.*

You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/drive/1rXm8CX4ZdN5qivjJ2lhwhkOmt_m0CvU0#scrollTo=6HXJvhyqcITc&uniqifier=1).

The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning).
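
A rough, hypothetical sketch of the control loop, following the linked example script (the `d4rl` environment, the checkpoint name, and the call arguments are assumptions taken from that script and may differ in detail):

```python
import d4rl  # noqa: F401 -- registers the offline-RL Gym environments
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32", env=env
)

obs = env.reset()
total_reward = 0.0
for _ in range(100):
    # Plan a batch of trajectories from the current observation and execute the
    # first action of the highest-value plan.
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break
print(f"total reward: {total_reward}")
```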

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## ValueGuidedRLPipeline[[diffusers.experimental.ValueGuidedRLPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.experimental.ValueGuidedRLPipeline</name><anchor>diffusers.experimental.ValueGuidedRLPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/experimental/rl/value_guided_sampling.py#L25</source><parameters>[{"name": "value_function", "val": ": UNet1DModel"}, {"name": "unet", "val": ": UNet1DModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "env", "val": ""}]</parameters><paramsdesc>- **value_function** ([UNet1DModel](/docs/diffusers/main/en/api/models/unet#diffusers.UNet1DModel)) --
  A specialized UNet for fine-tuning trajectories based on reward.
- **unet** ([UNet1DModel](/docs/diffusers/main/en/api/models/unet#diffusers.UNet1DModel)) --
  UNet architecture to denoise the encoded trajectories.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this
  application is [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler).
- **env** () --
  An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/value_guided_sampling.md" />

### DeepFloyd IF
https://huggingface.co/docs/diffusers/main/api/pipelines/deepfloyd_if.md

# DeepFloyd IF

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

## Overview

DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding.
The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules:
- Stage 1: a base model that generates a 64x64 px image based on a text prompt,
- Stage 2: a 64x64 px => 256x256 px super-resolution model, and
- Stage 3: a 256x256 px => 1024x1024 px super-resolution model
Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling.
Stage 3 is [Stability AI's x4 Upscaling model](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler).
The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset.
Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.

## Usage

Before you can use IF, you need to accept its usage conditions. To do so:
1. Make sure to have a [Hugging Face account](https://huggingface.co/join) and be logged in.
2. Accept the license on the model card of [DeepFloyd/IF-I-XL-v1.0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0). Accepting the license on the stage I model card will auto accept for the other IF models.
3. Make sure to login locally. Install `huggingface_hub`:
```sh
pip install huggingface_hub --upgrade
```

run the login function in a Python shell:

```py
from huggingface_hub import login

login()
```

and enter your [Hugging Face Hub access token](https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens).

Next we install `diffusers` and dependencies:

```sh
pip install -q diffusers accelerate transformers
```

The following sections give more detailed examples of how to use IF. Specifically:

- [Text-to-Image Generation](#text-to-image-generation)
- [Image-to-Image Generation](#text-guided-image-to-image-generation)
- [Inpainting](#text-guided-inpainting-generation)
- [Reusing model weights](#converting-between-different-pipelines)
- [Speed optimization](#optimizing-for-speed)
- [Memory optimization](#optimizing-for-memory)

**Available checkpoints**
- *Stage-1*
  - [DeepFloyd/IF-I-XL-v1.0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0)
  - [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0)
  - [DeepFloyd/IF-I-M-v1.0](https://huggingface.co/DeepFloyd/IF-I-M-v1.0)

- *Stage-2*
  - [DeepFloyd/IF-II-L-v1.0](https://huggingface.co/DeepFloyd/IF-II-L-v1.0)
  - [DeepFloyd/IF-II-M-v1.0](https://huggingface.co/DeepFloyd/IF-II-M-v1.0)

- *Stage-3*
  - [stabilityai/stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)


**Google Colab**
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/deepfloyd_if_free_tier_google_colab.ipynb)

### Text-to-Image Generation

By default diffusers makes use of [model cpu offloading](../../optimization/memory#model-offloading) to run the whole IF pipeline with as little as 14 GB of VRAM.

```python
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil, make_image_grid
import torch

# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")

# stage 3
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images
#stage_3_output[0].save("./if_stage_III.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3)
```

### Text Guided Image-to-Image Generation

The same IF model weights can be used for text-guided image-to-image translation or image variation.
In this case just make sure to load the weights using the [IFImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/deepfloyd_if#diffusers.IFImg2ImgPipeline) and [IFImg2ImgSuperResolutionPipeline](/docs/diffusers/main/en/api/pipelines/deepfloyd_if#diffusers.IFImg2ImgSuperResolutionPipeline) pipelines.

**Note**: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines
without loading them twice by making use of the [components](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.components) attribute as explained [here](#converting-between-different-pipelines).

```python
from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline
from diffusers.utils import pt_to_pil, load_image, make_image_grid
import torch

# download image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image = original_image.resize((768, 512))

# stage 1
stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = "A fantasy landscape in style minecraft"
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    image=original_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    original_image=original_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")

# stage 3
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images
#stage_3_output[0].save("./if_stage_III.png")
make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4)
```

### Text Guided Inpainting Generation

The same IF model weights can also be used for text-guided inpainting.
In this case just make sure to load the weights using the [IFInpaintingPipeline](/docs/diffusers/main/en/api/pipelines/deepfloyd_if#diffusers.IFInpaintingPipeline) and [IFInpaintingSuperResolutionPipeline](/docs/diffusers/main/en/api/pipelines/deepfloyd_if#diffusers.IFInpaintingSuperResolutionPipeline) pipelines.

**Note**: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines
without loading them twice by making use of the [components](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.components) attribute as explained [here](#converting-between-different-pipelines).

```python
from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
from diffusers.utils import pt_to_pil, load_image, make_image_grid
import torch

# download image
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png"
original_image = load_image(url)

# download mask
url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png"
mask_image = load_image(url)

# stage 1
stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = "blue sunglasses"
generator = torch.manual_seed(1)

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    image=original_image,
    mask_image=mask_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    original_image=original_image,
    mask_image=mask_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
#pt_to_pil(stage_1_output)[0].save("./if_stage_II.png")

# stage 3
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images
#stage_3_output[0].save("./if_stage_III.png")
make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5)
```

### Converting between different pipelines

In addition to being loaded with `from_pretrained`, pipelines can also be loaded directly from each other.

```python
from diffusers import IFPipeline, IFSuperResolutionPipeline

pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0")
pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0")


from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

pipe_1 = IFImg2ImgPipeline(**pipe_1.components)
pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components)


from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline

pipe_1 = IFInpaintingPipeline(**pipe_1.components)
pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components)
```

### Optimizing for speed

The simplest optimization to run IF faster is to move all model components to the GPU.

```py
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")
```

You can also run the diffusion process for a smaller number of timesteps.

This can either be done with the `num_inference_steps` argument:

```py
pipe("<prompt>", num_inference_steps=30)
```

Or with the `timesteps` argument:

```py
from diffusers.pipelines.deepfloyd_if import fast27_timesteps

pipe("<prompt>", timesteps=fast27_timesteps)
```

When doing image variation or inpainting, you can also decrease the number of timesteps
with the `strength` argument. The `strength` argument is the amount of noise to add to the input image, which also determines how many steps to run in the denoising process.
A smaller number varies the image less but runs faster.

```py
pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(image=image, prompt="<prompt>", strength=0.3).images
```

You can also use [`torch.compile`](../../optimization/fp16#torchcompile). Note that we have not exhaustively tested `torch.compile`
with IF and it might not give expected results.

```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### Optimizing for memory

When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs.

Either the model-based CPU offloading,

```py
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()
```

or the more aggressive layer-based CPU offloading.

```py
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
```

Additionally, T5 can be loaded in 8-bit precision:

```py
from transformers import T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)

prompt_embeds, negative_embeds = pipe.encode_prompt("<prompt>")
```

For CPU RAM constrained machines like the Google Colab free tier, where we can't load all model components to the CPU at once, we can manually load the pipeline with
only the text encoder or only the UNet when the respective model component is needed.

```py
from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline
import torch
import gc
from transformers import T5EncoderModel
from diffusers.utils import pt_to_pil, make_image_grid

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)

# text to image
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

# Remove the pipeline so we can re-load the pipeline with the unet
del text_encoder
del pipe
gc.collect()
torch.cuda.empty_cache()

pipe = IFPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)

generator = torch.Generator().manual_seed(0)
stage_1_output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images

#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# Remove the pipeline so we can load the super-resolution pipeline
del pipe
gc.collect()
torch.cuda.empty_cache()

# First super resolution

pipe = IFSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)

generator = torch.Generator().manual_seed(0)
stage_2_output = pipe(
    image=stage_1_output,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images

#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2)
```

## Available Pipelines:

| Pipeline | Tasks | Colab |
|---|---|:---:|
| [pipeline_if.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py) | *Text-to-Image Generation* | - |
| [pipeline_if_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py) | *Text-to-Image Generation* | - |
| [pipeline_if_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py) | *Image-to-Image Generation* | - |
| [pipeline_if_img2img_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py) | *Image-to-Image Generation* | - |
| [pipeline_if_inpainting.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py) | *Image-to-Image Generation* | - |
| [pipeline_if_inpainting_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py) | *Image-to-Image Generation* | - |

## IFPipeline[[diffusers.IFPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IFPipeline</name><anchor>diffusers.IFPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py#L96</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]"}, {"name": "watermarker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.IFPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py#L547</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
or watermarked content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.IFPipeline.__call__.example">

Examples:
```py
>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch

>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
>>> pipe.enable_model_cpu_offload()

>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt"
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_II.png")

>>> safety_modules = {
...     "feature_extractor": pipe.feature_extractor,
...     "safety_checker": pipe.safety_checker,
...     "watermarker": pipe.watermarker,
... }
>>> super_res_2_pipe = DiffusionPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
... )
>>> super_res_2_pipe.enable_model_cpu_offload()

>>> image = super_res_2_pipe(
...     prompt=prompt,
...     image=image,
... ).images
>>> image[0].save("./if_stage_III.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.IFPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py#L168</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead.
  Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
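
Because the returned embeddings are plain tensors, a common pattern is to encode a prompt once and reuse the result across several sampling runs. A minimal sketch of that pattern (the prompt, negative prompt, seeds, and file names below are illustrative, not part of the API):

```py
import torch
from diffusers import IFPipeline

pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# Encode once; the positive and negative embeddings are both returned.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "an oil painting of a lighthouse at dawn",
    negative_prompt="blurry, low quality",
)

# Reuse the same embeddings with different seeds.
for seed in (0, 1, 2):
    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_embeds,
        generator=torch.manual_seed(seed),
    ).images[0]
    image.save(f"lighthouse_{seed}.png")
```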




</div></div>

## IFSuperResolutionPipeline[[diffusers.IFSuperResolutionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IFSuperResolutionPipeline</name><anchor>diffusers.IFSuperResolutionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py#L82</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "image_noising_scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]"}, {"name": "watermarker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.IFSuperResolutionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py#L614</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = None"}, {"name": "width", "val": ": int = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "noise_level", "val": ": int = 250"}, {"name": "clean_caption", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, *optional*, defaults to None) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to None) --
  The width in pixels of the generated image.
- **image** (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`) --
  The image to be upscaled.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*, defaults to None) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored for other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **noise_level** (`int`, *optional*, defaults to 250) --
  The amount of noise to add to the upscaled image. Must be in the range `[0, 1000)`.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
or watermarked content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.IFSuperResolutionPipeline.__call__.example">

Examples:
```py
>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch

>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
>>> pipe.enable_model_cpu_offload()

>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds
... ).images
>>> image[0].save("./if_stage_II.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.IFSuperResolutionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py#L302</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier-free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead.
  Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
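
The stage-II checkpoint is often loaded with `text_encoder=None` (as in the example above), in which case it cannot encode text itself. A hedged sketch of sharing the stage-I T5 components so this pipeline can call `encode_prompt` directly without loading the text encoder a second time (prompt text is illustrative):

```py
import torch
from diffusers import IFPipeline, IFSuperResolutionPipeline

stage_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)

# Pass the already-loaded T5 encoder and tokenizer instead of loading them again.
stage_2 = IFSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0",
    text_encoder=stage_1.text_encoder,
    tokenizer=stage_1.tokenizer,
    variant="fp16",
    torch_dtype=torch.float16,
)
stage_2.enable_model_cpu_offload()

# With a text encoder attached, the stage-II pipeline can encode prompts itself.
prompt_embeds, negative_embeds = stage_2.encode_prompt("a watercolor lighthouse at dawn")
```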




</div></div>

## IFImg2ImgPipeline[[diffusers.IFImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IFImg2ImgPipeline</name><anchor>diffusers.IFImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py#L120</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]"}, {"name": "watermarker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.IFImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py#L661</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None"}, {"name": "strength", "val": ": float = 0.7"}, {"name": "num_inference_steps", "val": ": int = 80"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 10.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **strength** (`float`, *optional*, defaults to 0.7) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 80) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 10.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored for other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
or watermarked content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.IFImg2ImgPipeline.__call__.example">

Examples:
```py
>>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from io import BytesIO

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> response = requests.get(url)
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> original_image = original_image.resize((768, 512))

>>> pipe = IFImg2ImgPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0",
...     variant="fp16",
...     torch_dtype=torch.float16,
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A fantasy landscape in style minecraft"
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

>>> image = pipe(
...     image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     output_type="pt",
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0",
...     text_encoder=None,
...     variant="fp16",
...     torch_dtype=torch.float16,
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image,
...     original_image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
... ).images
>>> image[0].save("./if_stage_II.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.IFImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py#L192</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier-free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead.
  Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
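
Note that `clean_caption` defaults to `False` here, while `__call__` defaults to `True`. A small sketch of enabling it when encoding a noisy, web-scraped caption (requires `beautifulsoup4` and `ftfy`; the caption text is illustrative):

```py
import torch
from diffusers import IFImg2ImgPipeline

pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# clean_caption=True strips HTML tags and normalizes broken unicode before tokenization.
raw_caption = "<p>A  fantasy landscape,   trending on artstation!!</p>"
prompt_embeds, negative_embeds = pipe.encode_prompt(raw_caption, clean_caption=True)
```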




</div></div>

## IFImg2ImgSuperResolutionPipeline[[diffusers.IFImg2ImgSuperResolutionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IFImg2ImgSuperResolutionPipeline</name><anchor>diffusers.IFImg2ImgSuperResolutionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py#L124</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "image_noising_scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]"}, {"name": "watermarker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.IFImg2ImgSuperResolutionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py#L744</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor]"}, {"name": "original_image", "val": ": typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "noise_level", "val": ": int = 250"}, {"name": "clean_caption", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **original_image** (`torch.Tensor` or `PIL.Image.Image`) --
  The original image that `image` was varied from.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored for other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **noise_level** (`int`, *optional*, defaults to 250) --
  The amount of noise to add to the upscaled image. Must be in the range `[0, 1000)`.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
or watermarked content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.IFImg2ImgSuperResolutionPipeline.__call__.example">

Examples:
```py
>>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from io import BytesIO

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> response = requests.get(url)
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> original_image = original_image.resize((768, 512))

>>> pipe = IFImg2ImgPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0",
...     variant="fp16",
...     torch_dtype=torch.float16,
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A fantasy landscape in style minecraft"
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

>>> image = pipe(
...     image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     output_type="pt",
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0",
...     text_encoder=None,
...     variant="fp16",
...     torch_dtype=torch.float16,
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image,
...     original_image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
... ).images
>>> image[0].save("./if_stage_II.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.IFImg2ImgSuperResolutionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py#L344</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier-free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead.
  Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
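
A short sketch of duplicating the embeddings for a batch via `num_images_per_prompt` (the checkpoint is loaded with its own text encoder here; the prompt and batch size are illustrative):

```py
import torch
from diffusers import IFImg2ImgSuperResolutionPipeline

pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Encode one prompt but repeat the embeddings for four images per prompt.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "a fantasy landscape, highly detailed",
    num_images_per_prompt=4,
)
# The first dimension of the embeddings reflects num_images_per_prompt.
print(prompt_embeds.shape)
```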




</div></div>

## IFInpaintingPipeline[[diffusers.IFInpaintingPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IFInpaintingPipeline</name><anchor>diffusers.IFInpaintingPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py#L123</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]"}, {"name": "watermarker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.IFInpaintingPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py#L753</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored for other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
or watermarked content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.IFInpaintingPipeline.__call__.example">

Examples:
```py
>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from io import BytesIO

>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png"
>>> response = requests.get(url)
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB")

>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png"
>>> response = requests.get(url)
>>> mask_image = Image.open(BytesIO(response.content))

>>> pipe = IFInpaintingPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "blue sunglasses"
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

>>> image = pipe(
...     image=original_image,
...     mask_image=mask_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     output_type="pt",
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image,
...     mask_image=mask_image,
...     original_image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
... ).images
>>> image[0].save("./if_stage_II.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.IFInpaintingPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py#L195</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier-free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead.
  Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
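
Because pre-computed `prompt_embeds` can be passed back in, the positive prompt does not need to be re-encoded when only the negative prompt changes. A rough sketch of that pattern (the prompts are illustrative):

```py
import torch
from diffusers import IFInpaintingPipeline

pipe = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# Encode the positive prompt once, without classifier-free guidance.
prompt_embeds, _ = pipe.encode_prompt("blue sunglasses", do_classifier_free_guidance=False)

# Reuse it while trying out different negative prompts.
for negative in ("", "low quality", "cartoon"):
    _, negative_embeds = pipe.encode_prompt(
        "blue sunglasses",
        prompt_embeds=prompt_embeds,  # already encoded, so only the negative prompt runs through the text encoder
        negative_prompt=negative,
    )
```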




</div></div>

## IFInpaintingSuperResolutionPipeline[[diffusers.IFInpaintingSuperResolutionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IFInpaintingSuperResolutionPipeline</name><anchor>diffusers.IFInpaintingSuperResolutionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py#L126</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "image_noising_scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]"}, {"name": "watermarker", "val": ": typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.IFInpaintingSuperResolutionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py#L832</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor]"}, {"name": "original_image", "val": ": typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "noise_level", "val": ": int = 0"}, {"name": "clean_caption", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **original_image** (`torch.Tensor` or `PIL.Image.Image`) --
  The original image that `image` was varied from.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored for other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **noise_level** (`int`, *optional*, defaults to 0) --
  The amount of noise to add to the upscaled image. Must be in the range `[0, 1000)`.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
or watermarked content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.IFInpaintingSuperResolutionPipeline.__call__.example">

Examples:
```py
>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from io import BytesIO

>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png"
>>> response = requests.get(url)
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB")

>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png"
>>> response = requests.get(url)
>>> mask_image = Image.open(BytesIO(response.content))

>>> pipe = IFInpaintingPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "blue sunglasses"

>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
>>> image = pipe(
...     image=original_image,
...     mask_image=mask_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     output_type="pt",
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image,
...     mask_image=mask_image,
...     original_image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
... ).images
>>> image[0].save("./if_stage_II.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.IFInpaintingSuperResolutionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py#L346</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/deepfloyd_if.md" />

### PixArt-Σ
https://huggingface.co/docs/diffusers/main/api/pipelines/pixart_sigma.md

# PixArt-Σ

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/header_collage_sigma.jpg)

[PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation](https://huggingface.co/papers/2403.04692) is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li.

The abstract from the paper is:

*In this paper, we introduce PixArt-Σ, a Diffusion Transformer model (DiT) capable of directly generating images at 4K resolution. PixArt-Σ represents a significant advancement over its predecessor, PixArt-α, offering images of markedly higher fidelity and improved alignment with text prompts. A key feature of PixArt-Σ is its training efficiency. Leveraging the foundational pre-training of PixArt-α, it evolves from the ‘weaker’ baseline to a ‘stronger’ model via incorporating higher quality data, a process we term “weak-to-strong training”. The advancements in PixArt-Σ are twofold: (1) High-Quality Training Data: PixArt-Σ incorporates superior-quality image data, paired with more precise and detailed image captions. (2) Efficient Token Compression: we propose a novel attention module within the DiT framework that compresses both keys and values, significantly improving efficiency and facilitating ultra-high-resolution image generation. Thanks to these improvements, PixArt-Σ achieves superior image quality and user prompt adherence capabilities with significantly smaller model size (0.6B parameters) than existing text-to-image diffusion models, such as SDXL (2.6B parameters) and SD Cascade (5.1B parameters). Moreover, PixArt-Σ’s capability to generate 4K images supports the creation of high-resolution posters and wallpapers, efficiently bolstering the production of highquality visual content in industries such as film and gaming.*

You can find the original codebase at [PixArt-alpha/PixArt-sigma](https://github.com/PixArt-alpha/PixArt-sigma) and all the available checkpoints at [PixArt-alpha](https://huggingface.co/PixArt-alpha).

Some notes about this pipeline:

* It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as [DiT](https://hf.co/docs/transformers/model_doc/dit).
* It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details.
* It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found [here](https://github.com/PixArt-alpha/PixArt-sigma/blob/master/diffusion/data/datasets/utils.py).
* It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as PixArt-α, Stable Diffusion XL, Playground V2.0 and DALL-E 3, while being more efficient than them.
* It can generate very high-resolution images, such as 2048px or even 4K.
* It shows that text-to-image models can grow from a weak model to a stronger one through several improvements (VAEs, datasets, and so on).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

> [!TIP]
> You can further improve generation quality by passing the generated image from [PixArtSigmaPipeline](/docs/diffusers/main/en/api/pipelines/pixart_sigma#diffusers.PixArtSigmaPipeline) to the [SDXL refiner](../../using-diffusers/sdxl#base-to-refiner-model) model.
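
A minimal sketch of that base-to-refiner handoff is shown below. The refiner checkpoint, the `strength` value, and the prompt are illustrative assumptions rather than tuned settings:

```python
import torch
from diffusers import PixArtSigmaPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "A small cactus with a happy face in the Sahara desert."

# Stage 1: generate a base image with PixArt-Sigma.
base = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")
image = base(prompt).images[0]

# Stage 2: polish it with the SDXL refiner in image-to-image mode.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

# A low strength is an illustrative choice that keeps the PixArt composition intact.
refined = refiner(prompt=prompt, image=image, strength=0.3).images[0]
refined.save("refined.png")
```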

## Inference with under 8GB GPU VRAM

Run the [PixArtSigmaPipeline](/docs/diffusers/main/en/api/pipelines/pixart_sigma#diffusers.PixArtSigmaPipeline) with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let's walk through a full-fledged example.

First, install the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library:

```bash
pip install -U bitsandbytes
```

Then load the text encoder in 8-bit:

```python
from transformers import T5EncoderModel
from diffusers import PixArtSigmaPipeline
import torch

text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_8bit=True,
    device_map="auto",
)
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="balanced"
)
```

Now, use the `pipe` to encode a prompt:

```python
with torch.no_grad():
    prompt = "cute cat"
    prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)
```

Since the text embeddings have been computed, remove the `text_encoder` and `pipe` from memory to free up some GPU VRAM:

```python
import gc

def flush():
    gc.collect()
    torch.cuda.empty_cache()

del text_encoder
del pipe
flush()
```

Then compute the latents with the prompt embeddings as inputs:

```python
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    text_encoder=None,
    torch_dtype=torch.float16,
).to("cuda")

latents = pipe(
    negative_prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_images_per_prompt=1,
    output_type="latent",
).images

del pipe.transformer
flush()
```

> [!TIP]
> Notice that while initializing `pipe`, you're setting `text_encoder` to `None` so that it's not loaded.

Once the latents are computed, pass them to the VAE to decode into a real image:

```python
with torch.no_grad():
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]
image = pipe.image_processor.postprocess(image, output_type="pil")[0]
image.save("cat.png")
```

By deleting components you aren't using and flushing the GPU VRAM, you should be able to run [PixArtSigmaPipeline](/docs/diffusers/main/en/api/pipelines/pixart_sigma#diffusers.PixArtSigmaPipeline) with under 8GB GPU VRAM.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/8bits_cat.png)

If you want a report of your memory usage, run this [script](https://gist.github.com/sayakpaul/3ae0f847001d342af27018a96f467e4e).

> [!WARNING]
> Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It's recommended to compare the outputs with and without 8-bit.

While loading the `text_encoder`, you set `load_in_8bit` to `True`. You could also specify `load_in_4bit` to bring your memory requirements down even further to under 7GB.
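
For example, assuming your installed `transformers`/`bitsandbytes` versions still accept the shortcut flag directly, the only change from the 8-bit example above is the loading call:

```python
from transformers import T5EncoderModel

# 4-bit variant of the text encoder loading step shown earlier (assumes the
# `load_in_4bit` shortcut is supported by your transformers/bitsandbytes versions).
text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_4bit=True,
    device_map="auto",
)
```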

## PixArtSigmaPipeline[[diffusers.PixArtSigmaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PixArtSigmaPipeline</name><anchor>diffusers.PixArtSigmaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_sigma.py#L185</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": PixArtTransformer2DModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. PixArt-Alpha uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([PixArtTransformer2DModel](/docs/diffusers/main/en/api/models/pixart_transformer2d#diffusers.PixArtTransformer2DModel)) --
  A text conditioned `PixArtTransformer2DModel` to denoise the encoded image latents. Initially published as
  [`Transformer2DModel`](https://huggingface.co/PixArt-alpha/PixArt-Sigma-XL-2-1024-MS/blob/main/transformer/config.json#L2)
  in the config, but the mismatch can be ignored.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using PixArt-Sigma.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.PixArtSigmaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_sigma.py#L631</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool`, defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **max_sequence_length** (`int` defaults to 300) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.PixArtSigmaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import PixArtSigmaPipeline

>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-Sigma-XL-2-512-MS" too.
>>> pipe = PixArtSigmaPipeline.from_pretrained(
...     "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS", torch_dtype=torch.float16
... )
>>> # Enable memory optimizations.
>>> # pipe.enable_model_cpu_offload()

>>> prompt = "A small cactus with a happy face in the Sahara desert."
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.PixArtSigmaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_sigma.py#L247</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  PixArt-Alpha, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Alpha, these should be the embeddings of the ""
  string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/pixart_sigma.md" />

### Latent Consistency Models
https://huggingface.co/docs/diffusers/main/api/pipelines/latent_consistency_models.md

# Latent Consistency Models

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

Latent Consistency Models (LCMs) were proposed in [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://huggingface.co/papers/2310.04378) by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.

The abstract of the paper is as follows:

*Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: [this https URL](https://latent-consistency-models.github.io/).*

A demo for the [SimianLuo/LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) checkpoint can be found [here](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model).

The pipelines were contributed by [luosiallen](https://luosiallen.github.io/), [nagolinc](https://github.com/nagolinc), and [dg845](https://github.com/dg845).


## LatentConsistencyModelPipeline[[diffusers.LatentConsistencyModelPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LatentConsistencyModelPipeline</name><anchor>diffusers.LatentConsistencyModelPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py#L133</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": LCMScheduler"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Currently only
  supports [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
- **requires_safety_checker** (`bool`, *optional*, defaults to `True`) --
  Whether the pipeline requires a safety checker component.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using a latent consistency model.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
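
For instance, here is a hedged sketch of combining the pipeline with an IP-Adapter. The `h94/IP-Adapter` repository and weight file below are assumptions for illustration, and the conditioning image URL is reused from the DeepFloyd IF example earlier in this document:

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
pipe.to("cuda")

# Assumed IP-Adapter checkpoint; swap in whichever adapter you actually use.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")

ip_image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png")

image = pipe(
    prompt="a person wearing sunglasses, best quality",
    ip_adapter_image=ip_image,
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
image.save("lcm_ip_adapter.png")
```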





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LatentConsistencyModelPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py#L640</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 4"}, {"name": "original_inference_steps", "val": ": int = None"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 8.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 4) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **original_inference_steps** (`int`, *optional*) --
  The original number of inference steps used to generate a linearly-spaced timestep schedule, from which
  `num_inference_steps` evenly spaced timesteps are drawn as the final timestep schedule, following the
  Skipping-Step method in the paper (see Section 4.3). If not set, this will default to the scheduler's
  `original_inference_steps` attribute.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending
  order.
- **guidance_scale** (`float`, *optional*, defaults to 8.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
  Note that the original latent consistency models paper uses a different CFG formulation where the
  guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when `guidance_scale >
  0`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LatentConsistencyModelPipeline.__call__.example">

Examples:
```py
>>> from diffusers import DiffusionPipeline
>>> import torch

>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32)

>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

>>> # Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
>>> num_inference_steps = 4
>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images
>>> images[0].save("image.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_freeu</name><anchor>diffusers.LatentConsistencyModelPipeline.enable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2233</source><parameters>[{"name": "s1", "val": ": float"}, {"name": "s2", "val": ": float"}, {"name": "b1", "val": ": float"}, {"name": "b2", "val": ": float"}]</parameters><paramsdesc>- **s1** (`float`) --
  Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
  mitigate "oversmoothing effect" in the enhanced denoising process.
- **s2** (`float`) --
  Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
  mitigate "oversmoothing effect" in the enhanced denoising process.
- **b1** (`float`) -- Scaling factor for stage 1 to amplify the contributions of backbone features.
- **b2** (`float`) -- Scaling factor for stage 2 to amplify the contributions of backbone features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables the FreeU mechanism as in https://huggingface.co/papers/2309.11497.

The suffixes after the scaling factors represent the stages where they are being applied.

Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of the values
that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_freeu</name><anchor>diffusers.LatentConsistencyModelPipeline.disable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2255</source><parameters>[]</parameters></docstring>
Disables the FreeU mechanism if enabled.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.LatentConsistencyModelPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
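
A minimal sketch of where this helps, decoding a batch of images in slices (the prompt and batch size are arbitrary choices for illustration):

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
pipe.to("cuda")
pipe.enable_vae_slicing()

# The four latents in this batch are decoded slice by slice instead of all at once.
images = pipe(
    "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    num_inference_steps=4,
    guidance_scale=8.0,
    num_images_per_prompt=4,
).images
```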


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.LatentConsistencyModelPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.LatentConsistencyModelPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2206</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.LatentConsistencyModelPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2220</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LatentConsistencyModelPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py#L226</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.LatentConsistencyModelPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py#L517</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
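
A small sketch of the shape contract described above; the `embedding_dim` value is chosen arbitrarily for illustration:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")

# One guidance scale per batch entry; the result has shape (len(w), embedding_dim).
w = torch.tensor([8.0])
emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
print(emb.shape)  # torch.Size([1, 256])
```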








</div></div>

## LatentConsistencyModelImg2ImgPipeline[[diffusers.LatentConsistencyModelImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LatentConsistencyModelImg2ImgPipeline</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py#L154</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": LCMScheduler"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Currently only
  supports [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
- **requires_safety_checker** (`bool`, *optional*, defaults to `True`) --
  Whether the pipeline requires a safety checker component.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using a latent consistency model.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py#L709</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 4"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "original_inference_steps", "val": ": int = None"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 8.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 4) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **original_inference_steps** (`int`, *optional*) --
  The original number of inference steps used to generate a linearly-spaced timestep schedule, from which
  `num_inference_steps` evenly spaced timesteps are drawn as the final timestep schedule, following the
  Skipping-Step method in the paper (see Section 4.3). If not set, this will default to the scheduler's
  `original_inference_steps` attribute.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, `num_inference_steps` equally spaced
  timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending
  order.
- **guidance_scale** (`float`, *optional*, defaults to 8.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
  Note that the original latent consistency models paper uses a different CFG formulation where the
  guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when `guidance_scale >
  0`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. The list should have the same length as the number of
  IP adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LatentConsistencyModelImg2ImgPipeline.__call__.example">

Examples:
```py
>>> from diffusers import AutoPipelineForImage2Image
>>> import torch
>>> import PIL

>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32)

>>> prompt = "High altitude snowy mountains"
>>> image = PIL.Image.open("./snowy_mountains.png")

>>> # Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
>>> num_inference_steps = 4
>>> images = pipe(
...     prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0
... ).images

>>> images[0].save("image.png")
```

</ExampleCodeBlock>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_freeu</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.enable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2233</source><parameters>[{"name": "s1", "val": ": float"}, {"name": "s2", "val": ": float"}, {"name": "b1", "val": ": float"}, {"name": "b2", "val": ": float"}]</parameters><paramsdesc>- **s1** (`float`) --
  Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **s2** (`float`) --
  Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **b1** (`float`) -- Scaling factor for stage 1 to amplify the contributions of backbone features.
- **b2** (`float`) -- Scaling factor for stage 2 to amplify the contributions of backbone features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables the FreeU mechanism as in https://huggingface.co/papers/2309.11497.

The suffixes after the scaling factors represent the stages where they are being applied.

Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of the values
that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
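
As a sketch, the values below are commonly cited for Stable Diffusion v1.x; treat them as a starting point rather than definitive settings. `init_image` is assumed to be a previously loaded PIL image and `pipe` the pipeline from the example above:

```py
# Values commonly cited for Stable Diffusion v1.x; tune them for your own model.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
image = pipe(prompt="High altitude snowy mountains", image=init_image, num_inference_steps=4).images[0]
pipe.disable_freeu()  # switch FreeU off again when no longer needed
```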




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_freeu</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.disable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2255</source><parameters>[]</parameters></docstring>
Disables the FreeU mechanism if enabled.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2206</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
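
Both memory savers are simple toggles on the pipeline. A minimal sketch, assuming `pipe` is the pipeline instantiated earlier:

```py
# Trade a bit of speed for lower peak memory during VAE decode.
pipe.enable_vae_slicing()  # decode the batch slice by slice
pipe.enable_vae_tiling()   # decode large images tile by tile

# Revert to single-pass decoding.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```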


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2220</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py#L241</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
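
A minimal sketch of pre-computing prompt embeddings and reusing them across calls. It assumes `pipe` and `image` from the `__call__` example above, and that `encode_prompt` returns a `(prompt_embeds, negative_prompt_embeds)` tuple as in other Stable Diffusion pipelines:

```py
# Pre-compute the text embeddings once, then pass them to the pipeline directly.
prompt_embeds, _ = pipe.encode_prompt(
    "High altitude snowy mountains",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=False,
)
result = pipe(prompt_embeds=prompt_embeds, image=image, num_inference_steps=4).images[0]
```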




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.LatentConsistencyModelImg2ImgPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py#L576</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
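
A minimal sketch of calling this helper directly, assuming `pipe` is an instantiated pipeline:

```py
import torch

# Build guidance-scale embeddings for a batch of two scales; the result has shape (len(w), embedding_dim).
w = torch.tensor([3.0, 8.0])
emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
print(emb.shape)  # torch.Size([2, 256])
```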








</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/latent_consistency_models.md" />

### Dance Diffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/dance_diffusion.md

# Dance Diffusion

[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is by Zach Evans.

Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by [Harmonai](https://github.com/Harmonai-org).


> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## DanceDiffusionPipeline[[diffusers.DanceDiffusionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DanceDiffusionPipeline</name><anchor>diffusers.DanceDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L37</source><parameters>[{"name": "unet", "val": ": UNet1DModel"}, {"name": "scheduler", "val": ": SchedulerMixin"}]</parameters><paramsdesc>- **unet** ([UNet1DModel](/docs/diffusers/main/en/api/models/unet#diffusers.UNet1DModel)) --
  A `UNet1DModel` to denoise the encoded audio.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded audio latents. Can be one of
  [IPNDMScheduler](/docs/diffusers/main/en/api/schedulers/ipndm#diffusers.IPNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for audio generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.DanceDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L59</source><parameters>[{"name": "batch_size", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "audio_length_in_s", "val": ": typing.Optional[float] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **batch_size** (`int`, *optional*, defaults to 1) --
  The number of audio samples to generate.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at
  the expense of slower inference.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **audio_length_in_s** (`float`, *optional*, defaults to `self.unet.config.sample_size/self.unet.config.sample_rate`) --
  The length of the generated audio sample in seconds.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated audio.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.DanceDiffusionPipeline.__call__.example">

Example:

```py
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write

model_id = "harmonai/maestro-150k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

audios = pipe(audio_length_in_s=4.0).audios

# To save locally
for i, audio in enumerate(audios):
    write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose())

# To display in google colab
import IPython.display as ipd

for audio in audios:
    display(ipd.Audio(audio, rate=pipe.unet.config.sample_rate))
```

</ExampleCodeBlock>






</div></div>

## AudioPipelineOutput[[diffusers.AudioPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioPipelineOutput</name><anchor>diffusers.AudioPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L132</source><parameters>[{"name": "audios", "val": ": ndarray"}]</parameters><paramsdesc>- **audios** (`np.ndarray`) --
  List of denoised audio samples as a NumPy array of shape `(batch_size, num_channels, sample_rate)`.

Output class for audio pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/dance_diffusion.md" />

### Sana Sprint
https://huggingface.co/docs/diffusers/main/api/pipelines/sana_sprint.md


# SANA-Sprint

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation](https://huggingface.co/papers/2503.09641) from NVIDIA, MIT HAN Lab, and Hugging Face by Junsong Chen, Shuchen Xue, Yuyang Zhao, Jincheng Yu, Sayak Paul, Junyu Chen, Han Cai, Enze Xie, Song Han

The abstract from the paper is:

*This paper presents SANA-Sprint, an efficient diffusion model for ultra-fast text-to-image (T2I) generation. SANA-Sprint is built on a pre-trained foundation model and augmented with hybrid distillation, dramatically reducing inference steps from 20 to 1-4. We introduce three key innovations: (1) We propose a training-free approach that transforms a pre-trained flow-matching model for continuous-time consistency distillation (sCM), eliminating costly training from scratch and achieving high training efficiency. Our hybrid distillation strategy combines sCM with latent adversarial distillation (LADD): sCM ensures alignment with the teacher model, while LADD enhances single-step generation fidelity. (2) SANA-Sprint is a unified step-adaptive model that achieves high-quality generation in 1-4 steps, eliminating step-specific training and improving efficiency. (3) We integrate ControlNet with SANA-Sprint for real-time interactive image generation, enabling instant visual feedback for user interaction. SANA-Sprint establishes a new Pareto frontier in speed-quality tradeoffs, achieving state-of-the-art performance with 7.59 FID and 0.74 GenEval in only 1 step — outperforming FLUX-schnell (7.94 FID / 0.71 GenEval) while being 10× faster (0.1s vs 1.1s on H100). It also achieves 0.1s (T2I) and 0.25s (ControlNet) latency for 1024×1024 images on H100, and 0.31s (T2I) on an RTX 4090, showcasing its exceptional efficiency and potential for AI-powered consumer applications (AIPC). Code and pre-trained models will be open-sourced.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

This pipeline was contributed by [lawrence-cj](https://github.com/lawrence-cj), [shuchen Xue](https://github.com/scxue) and [Enze Xie](https://github.com/xieenze). The original codebase can be found [here](https://github.com/NVlabs/Sana). The original weights can be found under [hf.co/Efficient-Large-Model](https://huggingface.co/Efficient-Large-Model/).

Available models:

|                                                                    Model                                                                    | Recommended dtype |
|:-------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------:|
| [`Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers) | `torch.bfloat16`  |
| [`Efficient-Large-Model/Sana_Sprint_0.6B_1024px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_Sprint_0.6B_1024px_diffusers) | `torch.bfloat16`  |

Refer to [this](https://huggingface.co/collections/Efficient-Large-Model/sana-sprint-67d6810d65235085b3b17c76) collection for more information.

Note: The recommended dtype above applies to the transformer weights. The text encoder must stay in `torch.bfloat16`, and the VAE weights must stay in `torch.bfloat16` or `torch.float32` for the model to work correctly. Please refer to the inference examples below to see how to load the model with the recommended dtypes.
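
For instance, one way to follow the note above is to load the VAE separately in `torch.float32` and keep the rest of the pipeline in `torch.bfloat16`. This is a sketch that assumes the checkpoint stores its VAE under the `vae` subfolder:

```py
import torch
from diffusers import AutoencoderDC, SanaSprintPipeline

# Load the VAE in float32 and the remaining components (transformer, text encoder) in bfloat16.
vae = AutoencoderDC.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    subfolder="vae",
    torch_dtype=torch.float32,
)
pipeline = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    vae=vae,
    torch_dtype=torch.bfloat16,
)
pipeline.to("cuda")
```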


## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [SanaSprintPipeline](/docs/diffusers/main/en/api/pipelines/sana_sprint#diffusers.SanaSprintPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaSprintPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModel.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipeline = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("sana.png")
```

## Setting `max_timesteps`

Users can tweak the `max_timesteps` value to experiment with the visual quality of the generated outputs. The default `max_timesteps` value was obtained through an inference-time search process; check out the paper for more details.
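
`max_timesteps` is just another argument to the pipeline call. The sketch below reuses the `pipeline` object from the quantization example above and nudges the value slightly below its default of `1.5708`:

```py
# Lower max_timesteps slightly from its default (1.5708) to probe quality/speed trade-offs.
image = pipeline(
    "a tiny astronaut hatching from an egg on the moon",
    num_inference_steps=2,
    max_timesteps=1.55,
).images[0]
image.save("sana_max_timesteps.png")
```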

## Image to Image 

The [SanaSprintImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/sana_sprint#diffusers.SanaSprintImg2ImgPipeline) is a pipeline for image-to-image generation. It takes an input image and a prompt and generates a new image conditioned on both.

```py
import torch
from diffusers import SanaSprintImg2ImgPipeline
from diffusers.utils.loading_utils import load_image

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
)

pipe = SanaSprintImg2ImgPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers", 
    torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="a cute pink bear", 
    image=image, 
    strength=0.5, 
    height=832, 
    width=480
).images[0]
image.save("output.png")
```

## SanaSprintPipeline[[diffusers.SanaSprintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaSprintPipeline</name><anchor>diffusers.SanaSprintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L141</source><parameters>[{"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}, {"name": "text_encoder", "val": ": Gemma2PreTrainedModel"}, {"name": "vae", "val": ": AutoencoderDC"}, {"name": "transformer", "val": ": SanaTransformer2DModel"}, {"name": "scheduler", "val": ": DPMSolverMultistepScheduler"}]</parameters></docstring>

Pipeline for text-to-image generation using [SANA-Sprint](https://huggingface.co/papers/2503.09641).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SanaSprintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L615</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 2"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "max_timesteps", "val": ": float = 1.5708"}, {"name": "intermediate_timesteps", "val": ": float = 1.3"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.List[str] = [\"Given a user prompt, generate an 'Enhanced prompt' that provides detailed visual descriptions suitable for image generation. Evaluate the level of detail in the user prompt:\", '- If the prompt is simple, focus on adding specifics about colors, shapes, sizes, textures, and spatial relationships to create vivid and concrete scenes.', '- If the prompt is already detailed, refine and enhance the existing details slightly without overcomplicating.', 'Here are examples of how to transform or refine prompts:', '- User Prompt: A cat sleeping -> Enhanced: A small, fluffy white cat curled up in a round shape, sleeping peacefully on a warm sunny windowsill, surrounded by pots of blooming red flowers.', '- User Prompt: A busy city street -> Enhanced: A bustling city street scene at dusk, featuring glowing street lamps, a diverse crowd of people in colorful clothing, and a double-decker bus passing by towering glass skyscrapers.', 'Please generate only the enhanced description for the prompt below and avoid including any additional commentary or evaluations:', 'User Prompt: ']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **num_inference_steps** (`int`, *optional*, defaults to 2) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **max_timesteps** (`float`, *optional*, defaults to 1.57080) --
  The maximum timestep value used in the SCM scheduler.
- **intermediate_timesteps** (`float`, *optional*, defaults to 1.3) --
  The intermediate timestep value used in SCM scheduler (only used when num_inference_steps=2).
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Embedded guidance scale is enabled by setting `guidance_scale > 1`. Higher `guidance_scale` encourages
  the model to generate images more aligned with `prompt` at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale > 1`. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) instead of a plain tuple.
- **attention_kwargs** --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clean_caption** (`bool`, *optional*, defaults to `False`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool` defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to `300`) --
  Maximum sequence length to use with the `prompt`.
- **complex_human_instruction** (`List[str]`, *optional*) --
  Instructions for complex human attention:
  https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55.</paramsdesc><paramgroups>0</paramgroups><rettype>[SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SanaSprintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import SanaSprintPipeline

>>> pipe = SanaSprintPipeline.from_pretrained(
...     "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers", torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = pipe(prompt="a tiny astronaut hatching from an egg on the moon")[0]
>>> image[0].save("output.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.SanaSprintPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L187</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.SanaSprintPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L214</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.SanaSprintPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L174</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.SanaSprintPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L200</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SanaSprintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint.py#L286</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded

- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.
- **complex_human_instruction** (`list[str]`, defaults to `complex_human_instruction`) --
  If `complex_human_instruction` is not empty, the function will use the complex human instruction for
  the prompt.

Encodes the prompt into text encoder hidden states.




</div></div>

## SanaSprintImg2ImgPipeline[[diffusers.SanaSprintImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaSprintImg2ImgPipeline</name><anchor>diffusers.SanaSprintImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L147</source><parameters>[{"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}, {"name": "text_encoder", "val": ": Gemma2PreTrainedModel"}, {"name": "vae", "val": ": AutoencoderDC"}, {"name": "transformer", "val": ": SanaTransformer2DModel"}, {"name": "scheduler", "val": ": DPMSolverMultistepScheduler"}]</parameters></docstring>

Pipeline for image-to-image generation using [SANA-Sprint](https://huggingface.co/papers/2503.09641).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SanaSprintImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L686</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 2"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "max_timesteps", "val": ": float = 1.5708"}, {"name": "intermediate_timesteps", "val": ": float = 1.3"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.List[str] = [\"Given a user prompt, generate an 'Enhanced prompt' that provides detailed visual descriptions suitable for image generation. Evaluate the level of detail in the user prompt:\", '- If the prompt is simple, focus on adding specifics about colors, shapes, sizes, textures, and spatial relationships to create vivid and concrete scenes.', '- If the prompt is already detailed, refine and enhance the existing details slightly without overcomplicating.', 'Here are examples of how to transform or refine prompts:', '- User Prompt: A cat sleeping -> Enhanced: A small, fluffy white cat curled up in a round shape, sleeping peacefully on a warm sunny windowsill, surrounded by pots of blooming red flowers.', '- User Prompt: A busy city street -> Enhanced: A bustling city street scene at dusk, featuring glowing street lamps, a diverse crowd of people in colorful clothing, and a double-decker bus passing by towering glass skyscrapers.', 'Please generate only the enhanced description for the prompt below and avoid including any additional commentary or evaluations:', 'User Prompt: ']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **num_inference_steps** (`int`, *optional*, defaults to 2) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **max_timesteps** (`float`, *optional*, defaults to 1.57080) --
  The maximum timestep value used in the SCM scheduler.
- **intermediate_timesteps** (`float`, *optional*, defaults to 1.3) --
  The intermediate timestep value used in SCM scheduler (only used when num_inference_steps=2).
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2. of [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. Higher guidance scale encourages the model to generate images that are closely linked to the text
  `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
  [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) instead of a plain tuple.
- **attention_kwargs** --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clean_caption** (`bool`, *optional*, defaults to `False`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool` defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to `300`) --
  Maximum sequence length to use with the `prompt`.
- **complex_human_instruction** (`List[str]`, *optional*) --
  Instructions for complex human attention:
  https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55.</paramsdesc><paramgroups>0</paramgroups><rettype>[SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SanaSprintImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import SanaSprintImg2ImgPipeline
>>> from diffusers.utils.loading_utils import load_image

>>> pipe = SanaSprintImg2ImgPipeline.from_pretrained(
...     "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers", torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
... )


>>> image = pipe(prompt="a cute pink bear", image=image, strength=0.5, height=832, width=480).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.SanaSprintImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L196</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.SanaSprintImg2ImgPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L224</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.SanaSprintImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L182</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.SanaSprintImg2ImgPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L210</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SanaSprintImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_sprint_img2img.py#L297</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded

- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.
- **complex_human_instruction** (`List[str]`, *optional*) --
  If `complex_human_instruction` is not empty, the function will use this complex human instruction for
  the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## SanaPipelineOutput[[diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput</name><anchor>diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or numpy arrays represent the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Sana pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/sana_sprint.md" />

### AuraFlow
https://huggingface.co/docs/diffusers/main/api/pipelines/aura_flow.md

# AuraFlow

AuraFlow is inspired by [Stable Diffusion 3](../pipelines/stable_diffusion/stable_diffusion_3) and is by far the largest text-to-image generation model that comes with an Apache 2.0 license. This model achieves state-of-the-art results on the [GenEval](https://github.com/djghosh13/geneval) benchmark.

It was developed by the Fal team and more details about it can be found in [this blog post](https://blog.fal.ai/auraflow/).

> [!TIP]
> AuraFlow can be quite expensive to run on consumer hardware devices. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out [this section](https://huggingface.co/blog/sd3#memory-optimizations-for-sd3) for more details.

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on output quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [AuraFlowPipeline](/docs/diffusers/main/en/api/pipelines/aura_flow#diffusers.AuraFlowPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, AuraFlowTransformer2DModel, AuraFlowPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "fal/AuraFlow",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = AuraFlowTransformer2DModel.from_pretrained(
    "fal/AuraFlow",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("auraflow.png")
```

Loading [GGUF checkpoints](https://huggingface.co/docs/diffusers/quantization/gguf) is also supported:

```py
import torch
from diffusers import (
    AuraFlowPipeline,
    GGUFQuantizationConfig,
    AuraFlowTransformer2DModel,
)

transformer = AuraFlowTransformer2DModel.from_single_file(
    "https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q2_K.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

prompt = "a cute pony in a field of flowers"
image = pipeline(prompt).images[0]
image.save("auraflow.png")
```

## Support for `torch.compile()`

AuraFlow can be compiled with `torch.compile()` to reduce inference latency, even across different resolutions. First, install PyTorch nightly following the instructions from [here](https://pytorch.org/). The snippet below shows the changes needed to enable this:

```diff
+ torch.fx.experimental._config.use_duck_shape = False
+ pipeline.transformer = torch.compile(
    pipeline.transformer, fullgraph=True, dynamic=True
)
```

Setting `use_duck_shape` to `False` instructs the compiler not to reuse the same symbolic variable for input dimensions that merely happen to have the same size, which is what keeps the compiled graph valid across resolutions. For more details, check out [this comment](https://github.com/huggingface/diffusers/pull/11327#discussion_r2047659790).

This yields speed improvements ranging from 100% (at low resolutions) to 30% (at 1536x1536 resolution).
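
For reference, here is a minimal end-to-end sketch of where those two lines fit. The checkpoint id and prompt are taken from the examples above; the resolution arguments and output filename are illustrative:

```py
import torch
from diffusers import AuraFlowPipeline

# Avoid reusing one symbolic size for dimensions that just happen to be equal.
torch.fx.experimental._config.use_duck_shape = False

pipeline = AuraFlowPipeline.from_pretrained("fal/AuraFlow", torch_dtype=torch.bfloat16).to("cuda")
pipeline.transformer = torch.compile(pipeline.transformer, fullgraph=True, dynamic=True)

# The first call at a new resolution pays the compilation cost; subsequent calls are faster.
image = pipeline("a tiny astronaut hatching from an egg on the moon", height=1024, width=1024).images[0]
image.save("auraflow_compiled.png")
```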

Thanks to [AstraliteHeart](https://github.com/huggingface/diffusers/pull/11297/) who helped us rewrite the [AuraFlowTransformer2DModel](/docs/diffusers/main/en/api/models/aura_flow_transformer2d#diffusers.AuraFlowTransformer2DModel) class so that the above works for different resolutions ([PR](https://github.com/huggingface/diffusers/pull/11297/)).

## AuraFlowPipeline[[diffusers.AuraFlowPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AuraFlowPipeline</name><anchor>diffusers.AuraFlowPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/aura_flow/pipeline_aura_flow.py#L123</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": AuraFlowTransformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}]</parameters><paramsdesc>- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. AuraFlow uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [EleutherAI/pile-t5-xl](https://huggingface.co/EleutherAI/pile-t5-xl) variant.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **transformer** ([AuraFlowTransformer2DModel](/docs/diffusers/main/en/api/models/aura_flow_transformer2d#diffusers.AuraFlowTransformer2DModel)) --
  Conditional Transformer (MMDiT and DiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AuraFlowPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/aura_flow/pipeline_aura_flow.py#L438</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": typing.Optional[int] = 1024"}, {"name": "width", "val": ": typing.Optional[int] = 1024"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed,
  `num_inference_steps` and `timesteps` must be `None`.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead
  of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AuraFlowPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AuraFlowPipeline

>>> pipe = AuraFlowPipeline.from_pretrained("fal/AuraFlow", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> image = pipe(prompt).images[0]
>>> image.save("aura_flow.png")
```

</ExampleCodeBlock>


Returns: [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`:
If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is returned
where the first element is a list with the generated images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AuraFlowPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/aura_flow/pipeline_aura_flow.py#L232</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use for the prompt.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
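
A rough sketch of reusing pre-computed embeddings is shown below. It assumes `encode_prompt` returns `(prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask)` in that order; the prompt strings are illustrative:

```py
import torch
from diffusers import AuraFlowPipeline

pipe = AuraFlowPipeline.from_pretrained("fal/AuraFlow", torch_dtype=torch.float16).to("cuda")

# Encode once, then reuse the embeddings across several generations without re-encoding the text.
prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = pipe.encode_prompt(
    prompt="a watercolor painting of a lighthouse at dawn",
    negative_prompt="blurry, low quality",
    device=pipe.device,
)

image = pipe(
    prompt_embeds=prompt_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_embeds=negative_prompt_embeds,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
).images[0]
```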




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/aura_flow.md" />

### AutoPipeline
https://huggingface.co/docs/diffusers/main/api/pipelines/auto_pipeline.md

# AutoPipeline

The `AutoPipeline` is designed to make it easy to load a checkpoint for a task without needing to know the specific pipeline class. Based on the task, the `AutoPipeline` automatically retrieves the correct pipeline class from the checkpoint `model_index.json` file.

> [!TIP]
> Check out the [AutoPipeline](../../tutorials/autopipeline) tutorial to learn how to use this API!
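
For instance, in the minimal sketch below (the checkpoint id is only an example), the same checkpoint is loaded for two different tasks and `AutoPipeline` resolves the concrete class from its `model_index.json`:

```py
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

# Both calls read model_index.json from the same repo and pick the matching pipeline class.
text2image = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
image2image = AutoPipelineForImage2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

print(type(text2image).__name__)   # e.g. StableDiffusionPipeline
print(type(image2image).__name__)  # e.g. StableDiffusionImg2ImgPipeline
```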

## AutoPipelineForText2Image[[diffusers.AutoPipelineForText2Image]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoPipelineForText2Image</name><anchor>diffusers.AutoPipelineForText2Image</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L289</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


[AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image) is a generic pipeline class that instantiates a text-to-image pipeline class. The
specific underlying pipeline class is automatically selected from either the
[from_pretrained()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image.from_pretrained) or [from_pipe()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image.from_pipe) methods.

This class cannot be instantiated using `__init__()` (throws an error).

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.




<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.AutoPipelineForText2Image.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L314</source><parameters>[{"name": "pretrained_model_or_path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights
    saved using
  [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained).
- **torch_dtype** (`torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **custom_revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id similar to
  `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a
  custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **device_map** (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*) --
  A map that specifies where each submodule should go. It doesn’t need to be defined for each
  parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the
  same device.

  Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For
  more information about each option see [designing a device
  map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- **max_memory** (`Dict`, *optional*) --
  A dictionary device identifier for the maximum memory. Will default to the maximum memory available for
  each GPU and the available CPU RAM if unset.
- **offload_folder** (`str` or `os.PathLike`, *optional*) --
  The path to offload weights if device_map contains the value `"disk"`.
- **offload_state_dict** (`bool`, *optional*) --
  If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
  the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
  when there is some disk offload.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the safetensors weights are downloaded if they're available **and** if the
  safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors
  weights. If set to `False`, safetensors weights are not loaded.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipelines `__init__` method. See example
  below for more information.
- **variant** (`str`, *optional*) --
  Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiates a text-to-image PyTorch diffusion pipeline from pretrained pipeline weights.

The from_pretrained() method takes care of returning the correct pipeline class instance by:
1. Detecting the pipeline class of `pretrained_model_or_path` based on the `_class_name` property of its
   config object.
2. Finding the text-to-image pipeline linked to the pipeline class using pattern matching on the pipeline
   class name.

If a `controlnet` argument is passed, it will instantiate a [StableDiffusionControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline) object.
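
For example, the sketch below (the repo ids are illustrative, not taken from the docstring) passes a `controlnet` component so that the resolved class becomes the ControlNet variant:

```py
import torch
from diffusers import AutoPipelineForText2Image, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
print(type(pipeline).__name__)  # StableDiffusionControlNetPipeline
```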

The pipeline is set in evaluation mode (`model.eval()`) by default.

<ExampleCodeBlock anchor="diffusers.AutoPipelineForText2Image.from_pretrained.example">

If you get the error message below, you need to finetune the weights for your downstream task:

```
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

</ExampleCodeBlock>



> [!TIP]
> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log in with `hf auth login`.

<ExampleCodeBlock anchor="diffusers.AutoPipelineForText2Image.from_pretrained.example-2">

Examples:

```py
>>> from diffusers import AutoPipelineForText2Image

>>> pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
>>> prompt = "A photo of an astronaut riding a horse on Mars"
>>> image = pipeline(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pipe</name><anchor>diffusers.AutoPipelineForText2Image.from_pipe</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L462</source><parameters>[{"name": "pipeline", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pipeline** (`DiffusionPipeline`) --
  an instantiated `DiffusionPipeline` object</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiates a text-to-image PyTorch diffusion pipeline from another instantiated diffusion pipeline class.

The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image
pipeline linked to the pipeline class using pattern matching on pipeline class name.

All the modules the pipeline contains will be used to initialize the new pipeline without reallocating
additional memory.

The pipeline is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.AutoPipelineForText2Image.from_pipe.example">

```py
>>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", requires_safety_checker=False
... )

>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i)
>>> prompt = "A photo of an astronaut riding a horse on Mars"
>>> image = pipe_t2i(prompt).images[0]
```

</ExampleCodeBlock>


</div></div>

## AutoPipelineForImage2Image[[diffusers.AutoPipelineForImage2Image]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoPipelineForImage2Image</name><anchor>diffusers.AutoPipelineForImage2Image</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L579</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


[AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image) is a generic pipeline class that instantiates an image-to-image pipeline class. The
specific underlying pipeline class is automatically selected from either the
[from_pretrained()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image.from_pretrained) or [from_pipe()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image.from_pipe) methods.

This class cannot be instantiated using `__init__()` (throws an error).

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.




<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.AutoPipelineForImage2Image.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L604</source><parameters>[{"name": "pretrained_model_or_path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights
    saved using
  [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained).
- **torch_dtype** (`str` or `torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **custom_revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id similar to
  `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a
  custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **device_map** (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*) --
  A map that specifies where each submodule should go. It doesn’t need to be defined for each
  parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the
  same device.

  Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For
  more information about each option see [designing a device
  map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- **max_memory** (`Dict`, *optional*) --
  A dictionary device identifier for the maximum memory. Will default to the maximum memory available for
  each GPU and the available CPU RAM if unset.
- **offload_folder** (`str` or `os.PathLike`, *optional*) --
  The path to offload weights if device_map contains the value `"disk"`.
- **offload_state_dict** (`bool`, *optional*) --
  If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
  the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
  when there is some disk offload.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the safetensors weights are downloaded if they're available **and** if the
  safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors
  weights. If set to `False`, safetensors weights are not loaded.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipelines `__init__` method. See example
  below for more information.
- **variant** (`str`, *optional*) --
  Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiates an image-to-image PyTorch diffusion pipeline from pretrained pipeline weights.

The from_pretrained() method takes care of returning the correct pipeline class instance by:
1. Detecting the pipeline class of `pretrained_model_or_path` based on the `_class_name` property of its
   config object.
2. Finding the image-to-image pipeline linked to the pipeline class using pattern matching on the pipeline
   class name.

If a `controlnet` argument is passed, it will instantiate a [StableDiffusionControlNetImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetImg2ImgPipeline)
object.

The pipeline is set in evaluation mode (`model.eval()`) by default.

<ExampleCodeBlock anchor="diffusers.AutoPipelineForImage2Image.from_pretrained.example">

If you get the error message below, you need to finetune the weights for your downstream task:

```
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

</ExampleCodeBlock>



> [!TIP]
> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log in with `hf auth login`.

<ExampleCodeBlock anchor="diffusers.AutoPipelineForImage2Image.from_pretrained.example-2">

Examples:

```py
>>> from diffusers import AutoPipelineForImage2Image
>>> from diffusers.utils import load_image

>>> pipeline = AutoPipelineForImage2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
>>> prompt = "A fantasy landscape, trending on artstation"
>>> init_image = load_image("path/to/image.png")  # any RGB input image
>>> image = pipeline(prompt, image=init_image).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pipe</name><anchor>diffusers.AutoPipelineForImage2Image.from_pipe</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L763</source><parameters>[{"name": "pipeline", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pipeline** (`DiffusionPipeline`) --
  an instantiated `DiffusionPipeline` object</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiates an image-to-image PyTorch diffusion pipeline from another instantiated diffusion pipeline class.

The from_pipe() method takes care of returning the correct pipeline class instance by finding the
image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name.

All the modules the pipeline contains will be used to initialize the new pipeline without reallocating
additional memory.

The pipeline is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.AutoPipelineForImage2Image.from_pipe.example">

Examples:

```py
>>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
>>> from diffusers.utils import load_image

>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", requires_safety_checker=False
... )

>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i)
>>> prompt = "A fantasy landscape, trending on artstation"
>>> init_image = load_image("path/to/image.png")  # any RGB input image
>>> image = pipe_i2i(prompt, image=init_image).images[0]
```

</ExampleCodeBlock>


</div></div>

## AutoPipelineForInpainting[[diffusers.AutoPipelineForInpainting]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoPipelineForInpainting</name><anchor>diffusers.AutoPipelineForInpainting</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L886</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


[AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting) is a generic pipeline class that instantiates an inpainting pipeline class. The
specific underlying pipeline class is automatically selected from either the
[from_pretrained()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting.from_pretrained) or [from_pipe()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting.from_pipe) methods.

This class cannot be instantiated using `__init__()` (throws an error).

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.




<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.AutoPipelineForInpainting.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L911</source><parameters>[{"name": "pretrained_model_or_path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights
    saved using
  [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained).
- **torch_dtype** (`str` or `torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **custom_revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id similar to
  `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a
  custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **device_map** (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*) --
  A map that specifies where each submodule should go. It doesn’t need to be defined for each
  parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the
  same device.

  Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For
  more information about each option see [designing a device
  map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- **max_memory** (`Dict`, *optional*) --
  A dictionary device identifier for the maximum memory. Will default to the maximum memory available for
  each GPU and the available CPU RAM if unset.
- **offload_folder** (`str` or `os.PathLike`, *optional*) --
  The path to offload weights if device_map contains the value `"disk"`.
- **offload_state_dict** (`bool`, *optional*) --
  If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
  the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
  when there is some disk offload.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the safetensors weights are downloaded if they're available **and** if the
  safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors
  weights. If set to `False`, safetensors weights are not loaded.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipelines `__init__` method. See example
  below for more information.
- **variant** (`str`, *optional*) --
  Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiates an inpainting PyTorch diffusion pipeline from pretrained pipeline weights.

The from_pretrained() method takes care of returning the correct pipeline class instance by:
1. Detecting the pipeline class of `pretrained_model_or_path` based on the `_class_name` property of its
   config object.
2. Finding the inpainting pipeline linked to the pipeline class using pattern matching on the pipeline class
   name.

If a `controlnet` argument is passed, it will instantiate a [StableDiffusionControlNetInpaintPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetInpaintPipeline)
object.

The pipeline is set in evaluation mode (`model.eval()`) by default.

<ExampleCodeBlock anchor="diffusers.AutoPipelineForInpainting.from_pretrained.example">

If you get the error message below, you need to finetune the weights for your downstream task:

```
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

</ExampleCodeBlock>



> [!TIP]
> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log in with `hf auth login`.

<ExampleCodeBlock anchor="diffusers.AutoPipelineForInpainting.from_pretrained.example-2">

Examples:

```py
>>> from diffusers import AutoPipelineForInpainting
>>> from diffusers.utils import load_image

>>> pipeline = AutoPipelineForInpainting.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
>>> prompt = "A majestic tiger sitting on a bench"
>>> init_image = load_image("path/to/image.png")  # image to inpaint
>>> mask_image = load_image("path/to/mask.png")  # white pixels mark the region to repaint
>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pipe</name><anchor>diffusers.AutoPipelineForInpainting.from_pipe</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/auto_pipeline.py#L1067</source><parameters>[{"name": "pipeline", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pipeline** (`DiffusionPipeline`) --
  an instantiated `DiffusionPipeline` object</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiates an inpainting PyTorch diffusion pipeline from another instantiated diffusion pipeline class.

The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting
pipeline linked to the pipeline class using pattern matching on pipeline class name.

All the modules the pipeline class contains will be used to initialize the new pipeline without reallocating
additional memory.

The pipeline is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.AutoPipelineForInpainting.from_pipe.example">

Examples:

```py
>>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting
>>> from diffusers.utils import load_image

>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False
... )

>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i)
>>> prompt = "A majestic tiger sitting on a bench"
>>> init_image = load_image("path/to/image.png")  # image to inpaint
>>> mask_image = load_image("path/to/mask.png")  # white pixels mark the region to repaint
>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0]
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/auto_pipeline.md" />

### Lumina-T2X
https://huggingface.co/docs/diffusers/main/api/pipelines/lumina.md

# Lumina-T2X
![concepts](https://github.com/Alpha-VLLM/Lumina-T2X/assets/54879512/9f52eabb-07dc-4881-8257-6d8a5f2a0a5a)

[Lumina-Next : Making Lumina-T2X Stronger and Faster with Next-DiT](https://github.com/Alpha-VLLM/Lumina-T2X/blob/main/assets/lumina-next.pdf) from Alpha-VLLM, OpenGVLab, Shanghai AI Laboratory.

The abstract from the paper is:

*Lumina-T2X is a nascent family of Flow-based Large Diffusion Transformers (Flag-DiT) that establishes a unified framework for transforming noise into various modalities, such as images and videos, conditioned on text instructions. Despite its promising capabilities, Lumina-T2X still encounters challenges including training instability, slow inference, and extrapolation artifacts. In this paper, we present Lumina-Next, an improved version of Lumina-T2X, showcasing stronger generation performance with increased training and inference efficiency. We begin with a comprehensive analysis of the Flag-DiT architecture and identify several suboptimal components, which we address by introducing the Next-DiT architecture with 3D RoPE and sandwich normalizations. To enable better resolution extrapolation, we thoroughly compare different context extrapolation methods applied to text-to-image generation with 3D RoPE, and propose Frequency- and Time-Aware Scaled RoPE tailored for diffusion transformers. Additionally, we introduce a sigmoid time discretization schedule to reduce sampling steps in solving the Flow ODE and the Context Drop method to merge redundant visual tokens for faster network evaluation, effectively boosting the overall sampling speed. Thanks to these improvements, Lumina-Next not only improves the quality and efficiency of basic text-to-image generation but also demonstrates superior resolution extrapolation capabilities and multilingual generation using decoder-based LLMs as the text encoder, all in a zero-shot manner. To further validate Lumina-Next as a versatile generative framework, we instantiate it on diverse tasks including visual recognition, multi-view, audio, music, and point cloud generation, showcasing strong performance across these domains. By releasing all codes and model weights at https://github.com/Alpha-VLLM/Lumina-T2X, we aim to advance the development of next-generation generative AI capable of universal modeling.*

**Highlights**: Lumina-Next is a next-generation Diffusion Transformer that significantly enhances text-to-image generation, multilingual generation, and multitask performance by introducing the Next-DiT architecture, 3D RoPE, and frequency- and time-aware RoPE, among other improvements.

Lumina-Next has the following components:
* It improves sampling efficiency with fewer and faster steps.
* It uses Next-DiT as the transformer backbone, with sandwich normalization, 3D RoPE, and Grouped-Query Attention.
* It uses a Frequency- and Time-Aware Scaled RoPE.

---

[Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers](https://huggingface.co/papers/2405.05945) from Alpha-VLLM, OpenGVLab, Shanghai AI Laboratory.

The abstract from the paper is:

*Sora unveils the potential of scaling Diffusion Transformer for generating photorealistic images and videos at arbitrary resolutions, aspect ratios, and durations, yet it still lacks sufficient implementation details. In this technical report, we introduce the Lumina-T2X family - a series of Flow-based Large Diffusion Transformers (Flag-DiT) equipped with zero-initialized attention, as a unified framework designed to transform noise into images, videos, multi-view 3D objects, and audio clips conditioned on text instructions. By tokenizing the latent spatial-temporal space and incorporating learnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X seamlessly unifies the representations of different modalities across various spatial-temporal resolutions. This unified approach enables training within a single framework for different modalities and allows for flexible generation of multimodal data at any resolution, aspect ratio, and length during inference. Advanced techniques like RoPE, RMSNorm, and flow matching enhance the stability, flexibility, and scalability of Flag-DiT, enabling models of Lumina-T2X to scale up to 7 billion parameters and extend the context window to 128K tokens. This is particularly beneficial for creating ultra-high-definition images with our Lumina-T2I model and long 720p videos with our Lumina-T2V model. Remarkably, Lumina-T2I, powered by a 5-billion-parameter Flag-DiT, requires only 35% of the training computational costs of a 600-million-parameter naive DiT. Our further comprehensive analysis underscores Lumina-T2X's preliminary capability in resolution extrapolation, high-resolution editing, generating consistent 3D views, and synthesizing videos with seamless transitions. We expect that the open-sourcing of Lumina-T2X will further foster creativity, transparency, and diversity in the generative AI community.*


You can find the original codebase at [Alpha-VLLM](https://github.com/Alpha-VLLM/Lumina-T2X) and all the available checkpoints at [Alpha-VLLM Lumina Family](https://huggingface.co/collections/Alpha-VLLM/lumina-family-66423205bedb81171fd0644b).

**Highlights**: Lumina-T2X supports Any Modality, Resolution, and Duration.

Lumina-T2X has the following components:
* It uses a Flow-based Large Diffusion Transformer as the backbone
* It supports any modality with one backbone and a corresponding encoder and decoder.

This pipeline was contributed by [PommesPeter](https://github.com/PommesPeter). The original codebase can be found [here](https://github.com/Alpha-VLLM/Lumina-T2X). The original weights can be found under [hf.co/Alpha-VLLM](https://huggingface.co/Alpha-VLLM).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

### Inference (Text-to-Image)

Use [`torch.compile`](https://huggingface.co/docs/diffusers/main/en/tutorials/fast_diffusion#torchcompile) to reduce the inference latency.

First, load the pipeline:

```python
from diffusers import LuminaPipeline
import torch

pipeline = LuminaPipeline.from_pretrained(
	"Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
).to("cuda")
```

Then change the memory layout of the pipeline's `transformer` and `vae` components to `torch.channels_last`:

```python
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.vae.to(memory_format=torch.channels_last)
```

Finally, compile the components and run inference:

```python
pipeline.transformer = torch.compile(pipeline.transformer, mode="max-autotune", fullgraph=True)
pipeline.vae.decode = torch.compile(pipeline.vae.decode, mode="max-autotune", fullgraph=True)

image = pipeline(prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. Background shows an industrial revolution cityscape with smoky skies and tall, metal structures").images[0]
```

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on output quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [LuminaPipeline](/docs/diffusers/main/en/api/pipelines/lumina#diffusers.LuminaPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, LuminaNextDiT2DModel, LuminaPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
# Lumina uses a Gemma text encoder; AutoModel resolves the correct class from the checkpoint config
text_encoder_8bit = AutoModel.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = LuminaNextDiT2DModel.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = LuminaPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("lumina.png")
```
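
For further memory savings, bitsandbytes also supports 4-bit NF4 quantization, usually at a small additional cost in output quality. A hedged sketch of the corresponding configs, reusing the same checkpoint and model classes as above (the exact settings are illustrative):

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as BitsAndBytesConfig

# 4-bit NF4 config for the text encoder (transformers backend)
text_encoder_quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# 4-bit NF4 config for the transformer (diffusers backend)
transformer_quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```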

## LuminaPipeline[[diffusers.LuminaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LuminaPipeline</name><anchor>diffusers.LuminaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina/pipeline_lumina.py#L136</source><parameters>[{"name": "transformer", "val": ": LuminaNextDiT2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": GemmaPreTrainedModel"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`GemmaPreTrainedModel`) --
  Frozen Gemma text-encoder.
- **tokenizer** (`GemmaTokenizer` or `GemmaTokenizerFast`) --
  Gemma tokenizer.
- **transformer** (`LuminaNextDiT2DModel`) --
  A text conditioned `LuminaNextDiT2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Lumina-T2I.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LuminaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina/pipeline_lumina.py#L632</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 30"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "scaling_watershed", "val": ": typing.Optional[float] = 1.0"}, {"name": "proportional_attn", "val": ": typing.Optional[bool] = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 30) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Lumina-T2I this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **max_sequence_length** (`int` defaults to 256) --
  Maximum sequence length to use with the `prompt`.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that calls at the end of each denoising steps during the inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LuminaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import LuminaPipeline

>>> pipe = LuminaPipeline.from_pretrained("Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16)
>>> # Enable memory optimizations.
>>> pipe.enable_model_cpu_offload()

>>> prompt = "Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. Background shows an industrial revolution cityscape with smoky skies and tall, metal structures"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LuminaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina/pipeline_lumina.py#L262</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  Lumina-T2I, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Lumina-T2I, this should be the embeddings of the "" string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/lumina.md" />

### Qwenimage
https://huggingface.co/docs/diffusers/main/api/pipelines/qwenimage.md


# QwenImage

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

Qwen-Image from the Qwen team is an image generation foundation model in the Qwen series that achieves significant advances in complex text rendering and precise image editing. Experiments show strong general capabilities in both image generation and editing, with exceptional performance in text rendering, especially for Chinese.

Qwen-Image comes in the following variants:

| model type | model id |
|:----------:|:--------:|
| Qwen-Image | [`Qwen/Qwen-Image`](https://huggingface.co/Qwen/Qwen-Image) |
| Qwen-Image-Edit | [`Qwen/Qwen-Image-Edit`](https://huggingface.co/Qwen/Qwen-Image-Edit) |
| Qwen-Image-Edit Plus | [`Qwen/Qwen-Image-Edit-2509`](https://huggingface.co/Qwen/Qwen-Image-Edit-2509) |
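
Any model id from the table can be passed to `DiffusionPipeline.from_pretrained`, which resolves the matching pipeline class from the checkpoint. A minimal sketch (dtype and device are illustrative):

```py
import torch
from diffusers import DiffusionPipeline

# Swap in any model id from the table above, e.g. "Qwen/Qwen-Image-Edit"
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
```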

> [!TIP]
> [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.

## LoRA for faster inference

Use a LoRA from `lightx2v/Qwen-Image-Lightning` to speed up inference by reducing the
number of steps. Refer to the code snippet below:

<details>
<summary>Code</summary>

```py
from diffusers import DiffusionPipeline, FlowMatchEulerDiscreteScheduler
import torch 
import math

ckpt_id = "Qwen/Qwen-Image"

# From
# https://github.com/ModelTC/Qwen-Image-Lightning/blob/342260e8f5468d2f24d084ce04f55e101007118b/generate_with_diffusers.py#L82C9-L97C10
scheduler_config = {
    "base_image_seq_len": 256,
    "base_shift": math.log(3),  # We use shift=3 in distillation
    "invert_sigmas": False,
    "max_image_seq_len": 8192,
    "max_shift": math.log(3),  # We use shift=3 in distillation
    "num_train_timesteps": 1000,
    "shift": 1.0,
    "shift_terminal": None,  # set shift_terminal to None
    "stochastic_sampling": False,
    "time_shift_type": "exponential",
    "use_beta_sigmas": False,
    "use_dynamic_shifting": True,
    "use_exponential_sigmas": False,
    "use_karras_sigmas": False,
}
scheduler = FlowMatchEulerDiscreteScheduler.from_config(scheduler_config)
pipe = DiffusionPipeline.from_pretrained(
    ckpt_id, scheduler=scheduler, torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "lightx2v/Qwen-Image-Lightning", weight_name="Qwen-Image-Lightning-8steps-V1.0.safetensors"
)

prompt = "a tiny astronaut hatching from an egg on the moon, Ultra HD, 4K, cinematic composition."
negative_prompt = " "
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    num_inference_steps=8,
    true_cfg_scale=1.0,
    generator=torch.manual_seed(0),
).images[0]
image.save("qwen_fewsteps.png")
```

</details>
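
To revert to the standard, non-distilled setup, the LoRA can be removed again; note that the example above also overrides the scheduler config, which would need to be restored separately. A brief sketch assuming the standard Diffusers LoRA-loading API:

```py
# Remove the Lightning LoRA weights from the pipeline
pipe.unload_lora_weights()
```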

> [!TIP]
> The `guidance_scale` parameter in the pipeline exists to support future guidance-distilled models; passing it to the current pipeline has no effect. To enable classifier-free guidance, pass `true_cfg_scale > 1.0` together with a `negative_prompt` (even an empty one like `" "`), as shown in the sketch below.
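
For reference, a hedged sketch of a standard (non-distilled) call with classifier-free guidance enabled; the scale value, prompt, and file name are illustrative:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")

image = pipe(
    prompt="a tiny astronaut hatching from an egg on the moon, Ultra HD, 4K, cinematic composition.",
    negative_prompt=" ",   # a (near-)empty negative prompt still enables CFG together with true_cfg_scale > 1
    true_cfg_scale=4.0,
    num_inference_steps=50,
    width=1024,
    height=1024,
).images[0]
image.save("qwen_cfg.png")
```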

## Multi-image reference with QwenImageEditPlusPipeline

With [QwenImageEditPlusPipeline](/docs/diffusers/main/en/api/pipelines/qwenimage#diffusers.QwenImageEditPlusPipeline), you can provide multiple reference images as input.

```py
import torch
from diffusers import QwenImageEditPlusPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16
).to("cuda")

image_1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/grumpy.jpg")
image_2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peng.png")
image = pipe(
    image=[image_1, image_2],
    prompt='''put the penguin and the cat at a game show called "Qwen Edit Plus Games"''',
    num_inference_steps=50
).images[0]
image.save("qwen_edit_plus.png")
```

## QwenImagePipeline[[diffusers.QwenImagePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImagePipeline</name><anchor>diffusers.QwenImagePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L132</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2_5_VLForConditionalGeneration`) --
  The multimodal text encoder, [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class `Qwen2Tokenizer`, associated with the Qwen2.5-VL text encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

The QwenImage pipeline for text-to-image generation.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImagePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L451</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and a provided `negative_prompt`. Higher guidance scale encourages to
  generate images that are closely linked to the text `prompt`, usually at the expense of lower image
  quality.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance distilled models. Unlike the traditional classifier-free guidance
  where the guidance scale is applied during inference through noise prediction rescaling, guidance
  distilled models take the guidance scale directly as an input parameter during forward pass. Guidance
  scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images
  that are closely linked to the text `prompt`, usually at the expense of lower image quality. This
  parameter in the pipeline is there to support future guidance-distilled models when they come up. It is
  ignored when not using guidance distilled models. To enable traditional classifier-free guidance,
  please pass `true_cfg_scale > 1.0` and `negative_prompt` (even an empty negative prompt like " " should
  enable classifier-free guidance computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that calls at the end of each denoising steps during the inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImagePipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import QwenImagePipeline

>>> pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=50).images[0]
>>> image.save("qwenimage.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.QwenImagePipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L359</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.QwenImagePipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L386</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.QwenImagePipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L346</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.QwenImagePipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L372</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImagePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage.py#L226</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## QwenImageImg2ImgPipeline[[diffusers.QwenImageImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageImg2ImgPipeline</name><anchor>diffusers.QwenImageImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L134</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2_5_VLForConditionalGeneration`) --
  The multimodal text encoder, [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class `Qwen2Tokenizer`, associated with the Qwen2.5-VL text encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

The QwenImage pipeline for image-to-image generation.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImageImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L525</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly it is not encoded again.
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and a provided `negative_prompt`. Higher guidance scale encourages to
  generate images that are closely linked to the text `prompt`, usually at the expense of lower image
  quality.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance distilled models. Unlike the traditional classifier-free guidance
  where the guidance scale is applied during inference through noise prediction rescaling, guidance
  distilled models take the guidance scale directly as an input parameter during forward pass. Guidance
  scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images
  that are closely linked to the text `prompt`, usually at the expense of lower image quality. This
  parameter in the pipeline is there to support future guidance-distilled models when they come up. It is
  ignored when not using guidance distilled models. To enable traditional classifier-free guidance,
  please pass `true_cfg_scale > 1.0` and `negative_prompt` (even an empty negative prompt like " " should
  enable classifier-free guidance computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that calls at the end of each denoising steps during the inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImageImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import QwenImageImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = QwenImageImg2ImgPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
>>> pipe = pipe.to("cuda")
>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> init_image = load_image(url).resize((1024, 1024))
>>> prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney"
>>> images = pipe(prompt=prompt, negative_prompt=" ", image=init_image, strength=0.95).images[0]
>>> images.save("qwenimage_img2img.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.QwenImageImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L408</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.QwenImageImg2ImgPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L435</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.QwenImageImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L395</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.QwenImageImg2ImgPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L421</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImageImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_img2img.py#L269</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## QwenImageInpaintPipeline[[diffusers.QwenImageInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageInpaintPipeline</name><anchor>diffusers.QwenImageInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L137</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2_5_VLForConditionalGeneration`) --
  The multimodal text encoder, [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class `Qwen2Tokenizer`, associated with the Qwen2.5-VL text encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

The QwenImage pipeline for image inpainting.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImageInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L635</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly it is not encoded again.
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and a provided `negative_prompt`. Higher guidance scale encourages to
  generate images that are closely linked to the text `prompt`, usually at the expense of lower image
  quality.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for pytorch tensor would be `(B, 1, H, W)`, `(B,
  H, W)`, `(1, H, W)`, or `(H, W)`; for a numpy array, it would be `(B, H, W, 1)`, `(B, H, W)`, `(H, W,
  1)`, or `(H, W)`.
- **masked_image_latents** (`torch.Tensor`, `List[torch.Tensor]`) --
  `Tensor` representing an image batch to mask `image`, generated by the VAE. If not provided, the mask
  latents tensor will be generated by `mask_image`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all masked areas, and then expand that area based
  on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area before
  resizing to the original image size for inpainting. This is useful when the masked area is small while
  the image is large and contains information irrelevant for inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance-distilled models. Unlike traditional classifier-free guidance, where
  the guidance scale is applied during inference through noise prediction rescaling, guidance-distilled
  models take the guidance scale directly as an input parameter during the forward pass. Guidance scale is
  enabled by setting `guidance_scale > 1`. A higher guidance scale encourages the model to generate images
  that are closely linked to the text `prompt`, usually at the expense of lower image quality. This
  parameter exists to support future guidance-distilled models and is ignored otherwise. To enable
  traditional classifier-free guidance, pass `true_cfg_scale > 1.0` and a `negative_prompt` (even an empty
  negative prompt like " " enables classifier-free guidance computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/) (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference (a minimal callback sketch
  follows the example below). The function is called with the following arguments:
  `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`.
  `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImageInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import QwenImageInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = QwenImageInpaintPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> source = load_image(img_url)
>>> mask = load_image(mask_url)
>>> image = pipe(prompt=prompt, negative_prompt=" ", image=source, mask_image=mask, strength=0.85).images[0]
>>> image.save("qwenimage_inpainting.png")
```

</ExampleCodeBlock>
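The `callback_on_step_end` argument documented above can be used to observe or adjust tensors between denoising steps. Below is a minimal, illustrative sketch (not part of the upstream example) that simply logs the current step and latent shape; returning the `callback_kwargs` dictionary follows the usual Diffusers callback convention, and the checkpoint and image URLs are the same ones used in the example above.

```py
import torch
from diffusers import QwenImageInpaintPipeline
from diffusers.utils import load_image

pipe = QwenImageInpaintPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")


def log_step(pipeline, step, timestep, callback_kwargs):
    # `callback_kwargs` holds the tensors requested via `callback_on_step_end_tensor_inputs`.
    latents = callback_kwargs["latents"]
    print(f"step {step} | timestep {timestep} | latents shape {tuple(latents.shape)}")
    # Returning the (possibly modified) dict is the usual Diffusers callback convention.
    return callback_kwargs


source = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)
image = pipe(
    prompt="Face of a yellow cat, high resolution, sitting on a park bench",
    negative_prompt=" ",
    image=source,
    mask_image=mask,
    strength=0.85,
    callback_on_step_end=log_step,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```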







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.QwenImageInpaintPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L435</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.QwenImageInpaintPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L462</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.QwenImageInpaintPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L422</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.QwenImageInpaintPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L448</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
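Both memory-saving switches can be toggled on a loaded pipeline before calling it and turned off again afterwards. A minimal sketch, assuming the `Qwen/Qwen-Image` checkpoint used elsewhere on this page:

```py
import torch
from diffusers import QwenImageInpaintPipeline

pipe = QwenImageInpaintPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Trade some speed for memory: decode the batch slice by slice and each image tile by tile.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run the pipeline as usual ...

# Restore single-pass decoding once memory pressure is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```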


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImageInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_inpaint.py#L280</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>
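
When the same prompt is reused across several calls, this text-encoding step can be run once up front. The sketch below is illustrative: it assumes `encode_prompt` returns the embeddings together with their mask (as its `prompt_embeds`/`prompt_embeds_mask` parameters suggest) and that `__call__` accepts `prompt_embeds_mask` alongside `prompt_embeds`, as the Edit pipeline's signature does.

```py
import torch
from diffusers import QwenImageInpaintPipeline
from diffusers.utils import load_image

pipe = QwenImageInpaintPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Encode the prompt once and reuse the result for several inpainting calls.
prompt_embeds, prompt_embeds_mask = pipe.encode_prompt(
    prompt="Face of a yellow cat, high resolution, sitting on a park bench",
    num_images_per_prompt=1,
)

source = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)

image = pipe(
    prompt_embeds=prompt_embeds,
    prompt_embeds_mask=prompt_embeds_mask,
    negative_prompt=" ",
    image=source,
    mask_image=mask,
    strength=0.85,
).images[0]
```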





</div></div>

## QwenImageEditPipeline[[diffusers.QwenImageEditPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageEditPipeline</name><anchor>diffusers.QwenImageEditPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L165</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "processor", "val": ": Qwen2VLProcessor"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2.5-VL-7B-Instruct`) --
  [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), specifically the
  [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) variant.
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class
  [Qwen2Tokenizer](https://huggingface.co/docs/transformers/en/model_doc/qwen2#transformers.Qwen2Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Qwen-Image-Edit pipeline for image editing.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImageEditPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L546</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and providing a `negative_prompt` (see the sketch after the example below).
  A higher guidance scale encourages the model to generate images that are closely linked to the text
  `prompt`, usually at the expense of lower image quality.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance-distilled models. Unlike traditional classifier-free guidance, where
  the guidance scale is applied during inference through noise prediction rescaling, guidance-distilled
  models take the guidance scale directly as an input parameter during the forward pass. Guidance scale is
  enabled by setting `guidance_scale > 1`. A higher guidance scale encourages the model to generate images
  that are closely linked to the text `prompt`, usually at the expense of lower image quality. This
  parameter exists to support future guidance-distilled models and is ignored otherwise. To enable
  traditional classifier-free guidance, pass `true_cfg_scale > 1.0` and a `negative_prompt` (even an empty
  negative prompt like " " enables classifier-free guidance computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/) (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImageEditPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from PIL import Image
>>> from diffusers import QwenImageEditPipeline
>>> from diffusers.utils import load_image

>>> pipe = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
... ).convert("RGB")
>>> prompt = (
...     "Make Pikachu hold a sign that says 'Qwen Edit is awesome', yarn art style, detailed, vibrant colors"
... )
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(image, prompt, num_inference_steps=50).images[0]
>>> image.save("qwenimage_edit.png")
```

</ExampleCodeBlock>
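As the parameter descriptions above note, classifier-free guidance for this pipeline is controlled by `true_cfg_scale` together with a `negative_prompt`, while `guidance_scale` is reserved for guidance-distilled checkpoints. A minimal sketch that enables true CFG on the same Pikachu example:

```py
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
).convert("RGB")

# `true_cfg_scale > 1` plus a (possibly empty) negative prompt enables classifier-free guidance.
edited = pipe(
    image,
    prompt="Make Pikachu hold a sign that says 'Qwen Edit is awesome', yarn art style, detailed, vibrant colors",
    negative_prompt=" ",
    true_cfg_scale=4.0,
    num_inference_steps=50,
).images[0]
edited.save("qwenimage_edit_cfg.png")
```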







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.QwenImageEditPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L431</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.QwenImageEditPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L458</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.QwenImageEditPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L418</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.QwenImageEditPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L444</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImageEditPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py#L273</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **image** (`torch.Tensor`, *optional*) --
  image to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>
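
Unlike the text-to-image pipelines, `encode_prompt` here also takes the image being edited, since the Qwen2.5-VL encoder conditions the instruction on it. The sketch below is illustrative only: it assumes the method returns `(prompt_embeds, prompt_embeds_mask)` and that a PIL image can be passed directly, as the pipeline does internally when it calls this method.

```py
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
).convert("RGB")

# The edit instruction is encoded jointly with the image it refers to (illustrative prompt).
prompt_embeds, prompt_embeds_mask = pipe.encode_prompt(
    prompt="Turn the yarn Pikachu into a crocheted version of a different character",
    image=image,
    num_images_per_prompt=1,
)
print(prompt_embeds.shape, prompt_embeds_mask.shape)
```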





</div></div>

## QwenImageEditInpaintPipeline[[diffusers.QwenImageEditInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageEditInpaintPipeline</name><anchor>diffusers.QwenImageEditInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L167</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "processor", "val": ": Qwen2VLProcessor"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2.5-VL-7B-Instruct`) --
  [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), specifically the
  [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) variant.
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class
  [Qwen2Tokenizer](https://huggingface.co/docs/transformers/en/model_doc/qwen2#transformers.Qwen2Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Qwen-Image-Edit pipeline for mask-guided image editing (inpainting).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImageEditInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L679</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and providing a `negative_prompt`. A higher guidance scale encourages the
  model to generate images that are closely linked to the text `prompt`, usually at the expense of lower
  image quality.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor is `(B, 1, H, W)`, `(B, H,
  W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`, or
  `(H, W)`.
- **masked_image_latents** (`torch.Tensor`, `List[torch.Tensor]`) --
  `Tensor` representing an image batch, generated by the VAE, to mask `image`. If not provided, the mask
  latents tensor will be generated from `mask_image`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of the margin in the crop to be applied to the image and masking. If `None`, no crop is applied
  to the image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular
  region with the same aspect ratio as the image that contains all masked areas, and then expand that
  region based on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded
  region before resizing to the original image size for inpainting. This is useful when the masked area is
  small while the image is large and contains information irrelevant for inpainting, such as background
  (see the sketch after the example below).
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance-distilled models. Unlike traditional classifier-free guidance, where
  the guidance scale is applied during inference through noise prediction rescaling, guidance-distilled
  models take the guidance scale directly as an input parameter during the forward pass. Guidance scale is
  enabled by setting `guidance_scale > 1`. A higher guidance scale encourages the model to generate images
  that are closely linked to the text `prompt`, usually at the expense of lower image quality. This
  parameter exists to support future guidance-distilled models and is ignored otherwise. To enable
  traditional classifier-free guidance, pass `true_cfg_scale > 1.0` and a `negative_prompt` (even an empty
  negative prompt like " " enables classifier-free guidance computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/) (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImageEditInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from PIL import Image
>>> from diffusers import QwenImageEditInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = QwenImageEditInpaintPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"

>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> source = load_image(img_url)
>>> mask = load_image(mask_url)
>>> image = pipe(
...     prompt=prompt, negative_prompt=" ", image=source, mask_image=mask, strength=1.0, num_inference_steps=50
... ).images[0]
>>> image.save("qwenimage_inpainting.png")
```

</ExampleCodeBlock>
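As described for `padding_mask_crop` above, the pipeline can restrict denoising to a crop around the masked region and paste the result back into the full-resolution image, which helps when the mask covers only a small part of a large picture. A minimal sketch built on the example above; the margin value of 32 pixels is purely illustrative:

```py
import torch
from diffusers import QwenImageEditInpaintPipeline
from diffusers.utils import load_image

pipe = QwenImageEditInpaintPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda")

source = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)

# Only the masked region plus a 32-pixel margin is cropped, inpainted, and pasted back.
image = pipe(
    prompt="Face of a yellow cat, high resolution, sitting on a park bench",
    negative_prompt=" ",
    image=source,
    mask_image=mask,
    padding_mask_crop=32,
    strength=1.0,
    num_inference_steps=50,
).images[0]
image.save("qwenimage_inpainting_crop.png")
```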







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.QwenImageEditInpaintPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L477</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.QwenImageEditInpaintPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L504</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.QwenImageEditInpaintPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L464</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.QwenImageEditInpaintPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L490</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImageEditInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_inpaint.py#L285</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **image** (`torch.Tensor`, *optional*) --
  image to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## QwenImageControlNetPipeline[[diffusers.QwenImageControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageControlNetPipeline</name><anchor>diffusers.QwenImageControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L192</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_qwenimage.QwenImageControlNetModel, diffusers.models.controlnets.controlnet_qwenimage.QwenImageMultiControlNetModel]"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2.5-VL-7B-Instruct`) --
  [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), specifically the
  [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) variant.
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class
  [Qwen2Tokenizer](https://huggingface.co/docs/transformers/en/model_doc/qwen2#transformers.Qwen2Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

The QwenImage pipeline for ControlNet-conditioned text-to-image generation.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImageControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L551</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and providing a `negative_prompt`. A higher guidance scale encourages the
  model to generate images that are closely linked to the text `prompt`, usually at the expense of lower
  image quality.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance-distilled models. Unlike traditional classifier-free guidance, where
  the guidance scale is applied during inference through noise prediction rescaling, guidance-distilled
  models take the guidance scale directly as an input parameter during the forward pass. Guidance scale is
  enabled by setting `guidance_scale > 1`. A higher guidance scale encourages the model to generate images
  that are closely linked to the text `prompt`, usually at the expense of lower image quality. This
  parameter exists to support future guidance-distilled models and is ignored otherwise. To enable
  traditional classifier-free guidance, pass `true_cfg_scale > 1.0` and a `negative_prompt` (even an empty
  negative prompt like " " enables classifier-free guidance computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic (a seeded-generator sketch follows the example below).
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/) (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImageControlNetPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers.utils import load_image
>>> from diffusers import QwenImageControlNetModel, QwenImageMultiControlNetModel, QwenImageControlNetPipeline

>>> # QwenImageControlNetModel
>>> controlnet = QwenImageControlNetModel.from_pretrained(
...     "InstantX/Qwen-Image-ControlNet-Union", torch_dtype=torch.bfloat16
... )
>>> pipe = QwenImageControlNetPipeline.from_pretrained(
...     "Qwen/Qwen-Image", controlnet=controlnet, torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")
>>> prompt = "Aesthetics art, traditional asian pagoda, elaborate golden accents, sky blue and white color palette, swirling cloud pattern, digital illustration, east asian architecture, ornamental rooftop, intricate detailing on building, cultural representation."
>>> negative_prompt = " "
>>> control_image = load_image(
...     "https://huggingface.co/InstantX/Qwen-Image-ControlNet-Union/resolve/main/conds/canny.png"
... )
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     control_image=control_image,
...     controlnet_conditioning_scale=1.0,
...     num_inference_steps=30,
...     true_cfg_scale=4.0,
... ).images[0]
>>> image.save("qwenimage_cn_union.png")

>>> # QwenImageMultiControlNetModel
>>> controlnet = QwenImageControlNetModel.from_pretrained(
...     "InstantX/Qwen-Image-ControlNet-Union", torch_dtype=torch.bfloat16
... )
>>> controlnet = QwenImageMultiControlNetModel([controlnet])
>>> pipe = QwenImageControlNetPipeline.from_pretrained(
...     "Qwen/Qwen-Image", controlnet=controlnet, torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")
>>> prompt = "Aesthetics art, traditional asian pagoda, elaborate golden accents, sky blue and white color palette, swirling cloud pattern, digital illustration, east asian architecture, ornamental rooftop, intricate detailing on building, cultural representation."
>>> negative_prompt = " "
>>> control_image = load_image(
...     "https://huggingface.co/InstantX/Qwen-Image-ControlNet-Union/resolve/main/conds/canny.png"
... )
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     control_image=[control_image, control_image],
...     controlnet_conditioning_scale=[0.5, 0.5],
...     num_inference_steps=30,
...     true_cfg_scale=4.0,
... ).images[0]
>>> image.save("qwenimage_cn_union_multi.png")
```

</ExampleCodeBlock>
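Because sampling is stochastic, passing a seeded `torch.Generator` through the `generator` argument makes a ControlNet run reproducible. A minimal sketch on top of the single-ControlNet example above; the seed value is arbitrary:

```py
import torch
from diffusers import QwenImageControlNetModel, QwenImageControlNetPipeline
from diffusers.utils import load_image

controlnet = QwenImageControlNetModel.from_pretrained(
    "InstantX/Qwen-Image-ControlNet-Union", torch_dtype=torch.bfloat16
)
pipe = QwenImageControlNetPipeline.from_pretrained(
    "Qwen/Qwen-Image", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

control_image = load_image(
    "https://huggingface.co/InstantX/Qwen-Image-ControlNet-Union/resolve/main/conds/canny.png"
)

# A fixed-seed generator makes repeated calls produce the same image.
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    "Aesthetics art, traditional asian pagoda, elaborate golden accents, digital illustration",
    negative_prompt=" ",
    control_image=control_image,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=30,
    true_cfg_scale=4.0,
    generator=generator,
).images[0]
image.save("qwenimage_cn_seeded.png")
```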







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.QwenImageControlNetPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L423</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.QwenImageControlNetPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L450</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.QwenImageControlNetPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L410</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.QwenImageControlNetPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L436</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
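These helpers are toggled on the pipeline instance before generation. A minimal sketch, assuming `pipe` is the `QwenImageControlNetPipeline` loaded in the example above:

```py
# Trade some decoding speed for lower peak memory.
pipe.enable_vae_slicing()  # decode the batch one image at a time
pipe.enable_vae_tiling()   # decode large images tile by tile

# ... generate as usual ...

# Restore single-pass decoding once memory is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```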


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImageControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_controlnet.py#L291</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>
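Pre-computing embeddings lets you encode a prompt once and reuse it across several ControlNet calls. A minimal, hedged sketch, assuming `pipe` and `control_image` come from the example above, that `encode_prompt` returns the embeddings together with their mask, and that the pipeline call accepts `prompt_embeds`/`prompt_embeds_mask` like the other Qwen-Image pipelines:

```py
>>> # Hedged sketch: encode the prompt once, then reuse the embeddings.
>>> prompt_embeds, prompt_embeds_mask = pipe.encode_prompt(
...     "Aesthetics art, traditional asian pagoda, digital illustration",
...     device="cuda",
...     num_images_per_prompt=1,
...     max_sequence_length=1024,
... )
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     prompt_embeds_mask=prompt_embeds_mask,
...     control_image=control_image,
...     controlnet_conditioning_scale=1.0,
...     num_inference_steps=30,
... ).images[0]
```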





</div></div>

## QwenImageEditPlusPipeline[[diffusers.QwenImageEditPlusPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageEditPlusPipeline</name><anchor>diffusers.QwenImageEditPlusPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_plus.py#L168</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLQwenImage"}, {"name": "text_encoder", "val": ": Qwen2_5_VLForConditionalGeneration"}, {"name": "tokenizer", "val": ": Qwen2Tokenizer"}, {"name": "processor", "val": ": Qwen2VLProcessor"}, {"name": "transformer", "val": ": QwenImageTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** (`AutoencoderKLQwenImage`) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Qwen2_5_VLForConditionalGeneration`) --
  Frozen text-encoder, specifically the
  [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) variant.
- **tokenizer** (`Qwen2Tokenizer`) --
  Tokenizer of class
  [Qwen2Tokenizer](https://huggingface.co/docs/transformers/en/model_doc/qwen2#transformers.Qwen2Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Qwen-Image-Edit pipeline for image editing.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.QwenImageEditPlusPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_plus.py#L515</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "true_cfg_scale", "val": ": float = 4.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **true_cfg_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `true_cfg_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Classifier-free guidance is enabled by
  setting `true_cfg_scale > 1` and providing a `negative_prompt`. A higher guidance scale encourages the
  model to generate images that are closely linked to the text `prompt`, usually at the expense of lower
  image quality.
- **height** (`int`, *optional*) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to None) --
  A guidance scale value for guidance distilled models. Unlike the traditional classifier-free guidance
  where the guidance scale is applied during inference through noise prediction rescaling, guidance
  distilled models take the guidance scale directly as an input parameter during the forward pass. Guidance
  scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages the model to
  generate images that are closely linked to the text `prompt`, usually at the expense of lower image
  quality. This parameter exists to support future guidance-distilled models and is ignored when not using
  a guidance distilled model. To enable traditional classifier-free guidance, pass `true_cfg_scale > 1.0`
  and a `negative_prompt` (even an empty negative prompt like " " enables classifier-free guidance
  computations).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.qwenimage.QwenImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.qwenimage.QwenImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.qwenimage.QwenImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.QwenImageEditPlusPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from PIL import Image
>>> from diffusers import QwenImageEditPlusPipeline
>>> from diffusers.utils import load_image

>>> pipe = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
... ).convert("RGB")
>>> prompt = (
...     "Make Pikachu hold a sign that says 'Qwen Edit is awesome', yarn art style, detailed, vibrant colors"
... )
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(image, prompt, num_inference_steps=50).images[0]
>>> image.save("qwenimage_edit_plus.png")
```

</ExampleCodeBlock>
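Because the `image` argument also accepts a list, several reference images can condition a single edit. A hedged sketch, assuming list inputs are treated as multiple reference images for one prompt; the file names are placeholders:

```py
>>> # Hedged sketch: multiple reference images for a single edit (placeholder file names).
>>> ref_a = load_image("reference_a.png").convert("RGB")
>>> ref_b = load_image("reference_b.png").convert("RGB")
>>> edited = pipe(
...     image=[ref_a, ref_b],
...     prompt="Place the subject from the first image into the scene from the second image",
...     num_inference_steps=50,
... ).images[0]
>>> edited.save("qwenimage_edit_plus_multi.png")
```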







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.QwenImageEditPlusPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit_plus.py#L287</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **image** (`torch.Tensor`, *optional*) --
  image to be encoded
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## QwenImagePipelineOutput[[diffusers.pipelines.qwenimage.pipeline_output.QwenImagePipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.qwenimage.pipeline_output.QwenImagePipelineOutput</name><anchor>diffusers.pipelines.qwenimage.pipeline_output.QwenImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/qwenimage/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or numpy array represent the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Qwen-Image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/qwenimage.md" />

### Latte
https://huggingface.co/docs/diffusers/main/api/pipelines/latte.md


# Latte

![latte text-to-video](https://github.com/Vchitect/Latte/blob/52bc0029899babbd6e9250384c83d8ed2670ff7a/visuals/latte.gif?raw=true)

[Latte: Latent Diffusion Transformer for Video Generation](https://huggingface.co/papers/2401.03048) from Monash University, Shanghai AI Lab, Nanjing University, and Nanyang Technological University.

The abstract from the paper is:

*We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to text-to-video generation (T2V) task, where Latte achieves comparable results compared to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.*

**Highlights**: Latte is a latent diffusion transformer proposed as a backbone for modeling different modalities (trained for text-to-video generation here). It achieves state-of-the-art performance across four standard video benchmarks - [FaceForensics](https://huggingface.co/papers/1803.09179), [SkyTimelapse](https://huggingface.co/papers/1709.07592), [UCF101](https://huggingface.co/papers/1212.0402) and [Taichi-HD](https://huggingface.co/papers/2003.00196). To prepare and download the datasets for evaluation, please refer to the [dataset preparation and evaluation guide](https://github.com/Vchitect/Latte/blob/main/docs/datasets_evaluation.md).

This pipeline was contributed by [maxin-cn](https://github.com/maxin-cn). The original codebase can be found [here](https://github.com/Vchitect/Latte). The original weights can be found under [hf.co/maxin-cn](https://huggingface.co/maxin-cn).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

### Inference

Use [`torch.compile`](https://huggingface.co/docs/diffusers/main/en/tutorials/fast_diffusion#torchcompile) to reduce the inference latency.

First, load the pipeline:

```python
import torch
from diffusers import LattePipeline

pipeline = LattePipeline.from_pretrained(
	"maxin-cn/Latte-1", torch_dtype=torch.float16
).to("cuda")
```

Then change the memory layout of the pipeline's `transformer` and `vae` components to `torch.channels_last`:

```python
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.vae.to(memory_format=torch.channels_last)
```

Finally, compile the components and run inference:

```python
pipeline.transformer = torch.compile(pipeline.transformer)
pipeline.vae.decode = torch.compile(pipeline.vae.decode)

video = pipeline(prompt="A dog wearing sunglasses floating in space, surreal, nebulae in background").frames[0]
```
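
The first call after `torch.compile` is slower because it triggers compilation; subsequent calls run at the compiled speed. The pipeline returns raw frames rather than a file, so to inspect the result you can export them, for example with the same `export_to_gif` utility used in the quantization example below:

```python
from diffusers.utils import export_to_gif

export_to_gif(video, "latte_compiled.gif")
```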

The [benchmark](https://gist.github.com/a-r-r-o-w/4e1694ca46374793c0361d740a99ff19) results on an 80GB A100 machine are:

```
Without torch.compile(): Average inference time: 16.246 seconds.
With torch.compile(): Average inference time: 14.573 seconds.
```

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [LattePipeline](/docs/diffusers/main/en/api/pipelines/latte#diffusers.LattePipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, LatteTransformer3DModel, LattePipeline
from diffusers.utils import export_to_gif
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "maxin-cn/Latte-1",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = LatteTransformer3DModel.from_pretrained(
    "maxin-cn/Latte-1",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = LattePipeline.from_pretrained(
    "maxin-cn/Latte-1",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "A small cactus with a happy face in the Sahara desert."
video = pipeline(prompt).frames[0]
export_to_gif(video, "latte.gif")
```

## LattePipeline[[diffusers.LattePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LattePipeline</name><anchor>diffusers.LattePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latte/pipeline_latte.py#L145</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": LatteTransformer3DModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. Latte uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([LatteTransformer3DModel](/docs/diffusers/main/en/api/models/latte_transformer3d#diffusers.LatteTransformer3DModel)) --
  A text conditioned `LatteTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using Latte.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LattePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latte/pipeline_latte.py#L613</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "video_length", "val": ": int = 16"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "mask_feature", "val": ": bool = True"}, {"name": "enable_temporal_attentions", "val": ": bool = True"}, {"name": "decode_chunk_size", "val": ": int = 14"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the video generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate videos that are closely linked to
  the text `prompt`, usually at the expense of lower video quality.
- **video_length** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second amounts to 2 seconds of video.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated video.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. For Latte this negative prompt should be "". If not provided,
  negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `LattePipelineOutput` instead of a plain tuple.
- **callback_on_step_end** (`Callable[[int, int, Dict], None]`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A callback function or a list of callback functions to be called at the end of each denoising step.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  A list of tensor inputs that should be passed to the callback function. If not defined, all tensor
  inputs will be passed.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **mask_feature** (`bool` defaults to `True`) -- If set to `True`, the text embeddings will be masked.
- **enable_temporal_attentions** (`bool`, *optional*, defaults to `True`) -- Whether to enable temporal attentions
- **decode_chunk_size** (`int`, *optional*) --
  The number of frames to decode at a time. Higher chunk size leads to better temporal consistency at the
  expense of more memory usage. By default, the decoder decodes all frames at once for maximal quality.
  For lower memory usage, reduce `decode_chunk_size`.</paramsdesc><paramgroups>0</paramgroups><rettype>`LattePipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `LattePipelineOutput` is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LattePipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import LattePipeline
>>> from diffusers.utils import export_to_gif

>>> # You can replace the checkpoint id with "maxin-cn/Latte-1" too.
>>> pipe = LattePipeline.from_pretrained("maxin-cn/Latte-1", torch_dtype=torch.float16)
>>> # Enable memory optimizations.
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A small cactus with a happy face in the Sahara desert."
>>> videos = pipe(prompt).frames[0]
>>> export_to_gif(videos, "latte.gif")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LattePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latte/pipeline_latte.py#L206</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "mask_feature", "val": ": bool = True"}, {"name": "dtype", "val": " = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the video generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  Latte, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of videos that should be generated per prompt
- **device** -- (`torch.device`, *optional*):
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. For Latte, it should be the embeddings of the "" string.
- **clean_caption** (bool, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **mask_feature** -- (bool, defaults to `True`):
  If `True`, the function will mask the text embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
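A minimal, hedged sketch of pre-computing embeddings outside the pipeline call, assuming `encode_prompt` returns the positive and negative embeddings as a pair and that `pipeline` is the `LattePipeline` loaded earlier on this page:

```py
>>> # Hedged sketch: encode once, then reuse the embeddings for several generations.
>>> prompt_embeds, negative_prompt_embeds = pipeline.encode_prompt(
...     "A small cactus with a happy face in the Sahara desert.",
...     negative_prompt="",  # Latte expects "" as the negative prompt
...     do_classifier_free_guidance=True,
...     device="cuda",
... )
>>> video = pipeline(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
... ).frames[0]
```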




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/latte.md" />

### Self-Attention Guidance
https://huggingface.co/docs/diffusers/main/api/pipelines/self_attention_guidance.md

# Self-Attention Guidance

[Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://huggingface.co/papers/2210.00939) is by Susung Hong et al.

The abstract from the paper is:

*Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.*

You can find additional information about Self-Attention Guidance on the [project page](https://ku-cvlab.github.io/Self-Attention-Guidance), [original codebase](https://github.com/KU-CVLAB/Self-Attention-Guidance), and try it out in a [demo](https://huggingface.co/spaces/susunghong/Self-Attention-Guidance) or [notebook](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## StableDiffusionSAGPipeline[[diffusers.StableDiffusionSAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionSAGPipeline</name><anchor>diffusers.StableDiffusionSAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_sag/pipeline_stable_diffusion_sag.py#L110</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionSAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_sag/pipeline_stable_diffusion_sag.py#L573</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "sag_scale", "val": ": float = 0.75"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **sag_scale** (`float`, *optional*, defaults to 0.75) --
  Scale of the self-attention guidance. Chosen between `[0, 1.0]` for better quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. If not provided, embeddings are computed from the
  `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionSAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionSAGPipeline

>>> pipe = StableDiffusionSAGPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, sag_scale=0.75).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionSAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_sag/pipeline_stable_diffusion_sag.py#L211</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
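A minimal, hedged sketch of reusing pre-computed embeddings with different `sag_scale` values, assuming `pipe` is the `StableDiffusionSAGPipeline` loaded in the example above and that `encode_prompt` returns the positive and negative embeddings as a pair:

```py
>>> # Hedged sketch: encode once, then sweep sag_scale without re-encoding the prompt.
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     "a photo of an astronaut riding a horse on mars",
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )
>>> images = [
...     pipe(
...         prompt_embeds=prompt_embeds,
...         negative_prompt_embeds=negative_prompt_embeds,
...         sag_scale=s,
...     ).images[0]
...     for s in (0.0, 0.5, 0.75)
... ]
```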




</div></div>

## StableDiffusionOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/self_attention_guidance.md" />

### unCLIP
https://huggingface.co/docs/diffusers/main/api/pipelines/unclip.md

# unCLIP

[Hierarchical Text-Conditional Image Generation with CLIP Latents](https://huggingface.co/papers/2204.06125) is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain's [karlo](https://github.com/kakaobrain/karlo).

The abstract from the paper is:

*Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.*

You can find lucidrains' DALL-E 2 recreation at [lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## UnCLIPPipeline[[diffusers.UnCLIPPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UnCLIPPipeline</name><anchor>diffusers.UnCLIPPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip.py#L41</source><parameters>[{"name": "prior", "val": ": PriorTransformer"}, {"name": "decoder", "val": ": UNet2DConditionModel"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_proj", "val": ": UnCLIPTextProjModel"}, {"name": "super_res_first", "val": ": UNet2DModel"}, {"name": "super_res_last", "val": ": UNet2DModel"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "decoder_scheduler", "val": ": UnCLIPScheduler"}, {"name": "super_res_scheduler", "val": ": UnCLIPScheduler"}]</parameters><paramsdesc>- **text_encoder** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Frozen text-encoder.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **text_proj** (`UnCLIPTextProjModel`) --
  Utility class to prepare and combine the embeddings before they are passed to the decoder.
- **decoder** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  The decoder to invert the image embedding into an image.
- **super_res_first** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  Super resolution UNet. Used in all but the last step of the super resolution diffusion process.
- **super_res_last** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  Super resolution UNet. Used in the last step of the super resolution diffusion process.
- **prior_scheduler** (`UnCLIPScheduler`) --
  Scheduler used in the prior denoising process (a modified [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)).
- **decoder_scheduler** (`UnCLIPScheduler`) --
  Scheduler used in the decoder denoising process (a modified [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)).
- **super_res_scheduler** (`UnCLIPScheduler`) --
  Scheduler used in the super resolution denoising process (a modified [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using unCLIP.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.UnCLIPPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip.py#L219</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "decoder_num_inference_steps", "val": ": int = 25"}, {"name": "super_res_num_inference_steps", "val": ": int = 7"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prior_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decoder_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "super_res_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "text_model_output", "val": ": typing.Union[transformers.models.clip.modeling_clip.CLIPTextModelOutput, typing.Tuple, NoneType] = None"}, {"name": "text_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "decoder_guidance_scale", "val": ": float = 8.0"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide image generation. This can only be left undefined if `text_model_output`
  and `text_attention_mask` are passed.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps for the prior. More denoising steps usually lead to a higher quality
  image at the expense of slower inference.
- **decoder_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
  image at the expense of slower inference.
- **super_res_num_inference_steps** (`int`, *optional*, defaults to 7) --
  The number of denoising steps for super resolution. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prior_latents** (`torch.Tensor` of shape (batch size, embeddings dimension), *optional*) --
  Pre-generated noisy latents to be used as inputs for the prior.
- **decoder_latents** (`torch.Tensor` of shape (batch size, channels, height, width), *optional*) --
  Pre-generated noisy latents to be used as inputs for the decoder.
- **super_res_latents** (`torch.Tensor` of shape (batch size, channels, super res height, super res width), *optional*) --
  Pre-generated noisy latents to be used as inputs for the super resolution UNets.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **decoder_guidance_scale** (`float`, *optional*, defaults to 8.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **text_model_output** (`CLIPTextModelOutput`, *optional*) --
  Pre-defined `CLIPTextModel` outputs that can be derived from the text encoder. Pre-defined text
  outputs can be passed for tasks like text embedding interpolations. Make sure to also pass
  `text_attention_mask` in this case. `prompt` can then be left `None`.
- **text_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention
  masks are necessary when passing `text_model_output`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.
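A minimal text-to-image sketch is shown below. It assumes the [kakaobrain/karlo-v1-alpha](https://huggingface.co/kakaobrain/karlo-v1-alpha) checkpoint and a CUDA device; the checkpoint name and output filename are illustrative.

```py
import torch
from diffusers import UnCLIPPipeline

# Load the unCLIP (Karlo) weights in half precision and move the pipeline to the GPU
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"
# A single call runs the prior, decoder, and super resolution stages
image = pipe(prompt, num_images_per_prompt=1).images[0]
image.save("frog.png")
```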








</div></div>

## UnCLIPImageVariationPipeline[[diffusers.UnCLIPImageVariationPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UnCLIPImageVariationPipeline</name><anchor>diffusers.UnCLIPImageVariationPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py#L46</source><parameters>[{"name": "decoder", "val": ": UNet2DConditionModel"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_proj", "val": ": UnCLIPTextProjModel"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "super_res_first", "val": ": UNet2DModel"}, {"name": "super_res_last", "val": ": UNet2DModel"}, {"name": "decoder_scheduler", "val": ": UnCLIPScheduler"}, {"name": "super_res_scheduler", "val": ": UnCLIPScheduler"}]</parameters><paramsdesc>- **text_encoder** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Frozen text-encoder.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  Model that extracts features from generated images to be used as inputs for the `image_encoder`.
- **image_encoder** ([CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection)) --
  Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **text_proj** (`UnCLIPTextProjModel`) --
  Utility class to prepare and combine the embeddings before they are passed to the decoder.
- **decoder** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  The decoder to invert the image embedding into an image.
- **super_res_first** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  Super resolution UNet. Used in all but the last step of the super resolution diffusion process.
- **super_res_last** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  Super resolution UNet. Used in the last step of the super resolution diffusion process.
- **decoder_scheduler** (`UnCLIPScheduler`) --
  Scheduler used in the decoder denoising process (a modified [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)).
- **super_res_scheduler** (`UnCLIPScheduler`) --
  Scheduler used in the super resolution denoising process (a modified [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline to generate image variations from an input image using UnCLIP.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.UnCLIPImageVariationPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py#L207</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor, NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "decoder_num_inference_steps", "val": ": int = 25"}, {"name": "super_res_num_inference_steps", "val": ": int = 7"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "decoder_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "super_res_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "image_embeddings", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decoder_guidance_scale", "val": ": float = 8.0"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.Tensor`) --
  `Image` or tensor representing an image batch to be used as the starting point. If you provide a
  tensor, it needs to be compatible with the `CLIPImageProcessor`
  [configuration](https://huggingface.co/fusing/karlo-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json).
  Can be left as `None` only when `image_embeddings` are passed.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **decoder_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
  image at the expense of slower inference.
- **super_res_num_inference_steps** (`int`, *optional*, defaults to 7) --
  The number of denoising steps for super resolution. More denoising steps usually lead to a higher
  quality image at the expense of slower inference.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **decoder_latents** (`torch.Tensor` of shape (batch size, channels, height, width), *optional*) --
  Pre-generated noisy latents to be used as inputs for the decoder.
- **super_res_latents** (`torch.Tensor` of shape (batch size, channels, super res height, super res width), *optional*) --
  Pre-generated noisy latents to be used as inputs for the super resolution UNets.
- **decoder_guidance_scale** (`float`, *optional*, defaults to 8.0) --
  A higher guidance scale value encourages the model to generate images more closely linked to the
  conditioning `image` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **image_embeddings** (`torch.Tensor`, *optional*) --
  Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings
  can be passed for tasks like image interpolations. `image` can be left as `None`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.
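A minimal image-variation sketch, assuming the [fusing/karlo-image-variations-diffusers](https://huggingface.co/fusing/karlo-image-variations-diffusers) checkpoint referenced above and a CUDA device (the input image URL is only an example from the Diffusers docs assets):

```py
import torch
from diffusers import UnCLIPImageVariationPipeline
from diffusers.utils import load_image

pipe = UnCLIPImageVariationPipeline.from_pretrained(
    "fusing/karlo-image-variations-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Any RGB image can be used as the conditioning input
image = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg"
)
variations = pipe(image=image, num_images_per_prompt=2).images
variations[0].save("variation.png")
```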








</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/unclip.md" />

### DiT
https://huggingface.co/docs/diffusers/main/api/pipelines/dit.md

# DiT

[Scalable Diffusion Models with Transformers](https://huggingface.co/papers/2212.09748) (DiT) is by William Peebles and Saining Xie.

The abstract from the paper is:

*We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.*

The original codebase can be found at [facebookresearch/dit](https://github.com/facebookresearch/dit).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## DiTPipeline[[diffusers.DiTPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DiTPipeline</name><anchor>diffusers.DiTPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py#L40</source><parameters>[{"name": "transformer", "val": ": DiTTransformer2DModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "id2label", "val": ": typing.Optional[typing.Dict[int, str]] = None"}]</parameters><paramsdesc>- **transformer** ([DiTTransformer2DModel](/docs/diffusers/main/en/api/models/dit_transformer2d#diffusers.DiTTransformer2DModel)) --
  A class conditioned `DiTTransformer2DModel` to denoise the encoded image latents. Initially published as
  [`Transformer2DModel`](https://huggingface.co/facebook/DiT-XL-2-256/blob/main/transformer/config.json#L2)
  in the config, but the mismatch can be ignored.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image generation based on a Transformer backbone instead of a UNet.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.DiTPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py#L103</source><parameters>[{"name": "class_labels", "val": ": typing.List[int]"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **class_labels** (List[int]) --
  List of ImageNet class labels for the images to be generated.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.DiTPipeline.__call__.example">

Examples:

```py
>>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler
>>> import torch

>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe = pipe.to("cuda")

>>> # pick words from ImageNet class labels
>>> pipe.labels  # to print all available words

>>> # pick words that exist in ImageNet
>>> words = ["white shark", "umbrella"]

>>> class_ids = pipe.get_label_ids(words)

>>> generator = torch.manual_seed(33)
>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator)

>>> image = output.images[0]  # label 'white shark'
```

</ExampleCodeBlock>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_label_ids</name><anchor>diffusers.DiTPipeline.get_label_ids</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py#L78</source><parameters>[{"name": "label", "val": ": typing.Union[str, typing.List[str]]"}]</parameters><paramsdesc>- **label** (`str` or `List[str]`) --
  Label strings to be mapped to class ids.</paramsdesc><paramgroups>0</paramgroups><rettype>`list` of `int`</rettype><retdesc>Class ids to be processed by pipeline.</retdesc></docstring>


Map label strings from ImageNet to corresponding class ids.
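For example, reusing the `pipe` object from the `__call__` example above:

```py
# Map human-readable ImageNet label strings to the integer class ids expected by the pipeline
class_ids = pipe.get_label_ids(["white shark", "umbrella"])
```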








</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/dit.md" />

### aMUSEd
https://huggingface.co/docs/diffusers/main/api/pipelines/amused.md

# aMUSEd

aMUSEd was introduced in [aMUSEd: An Open MUSE Reproduction](https://huggingface.co/papers/2401.01808) by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen.

aMUSEd is a lightweight text-to-image model based on the [MUSE](https://huggingface.co/papers/2301.00704) architecture. It is particularly useful for applications that require a lightweight and fast model, such as generating many images quickly at once.

aMUSEd is a VQ-VAE token-based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with MUSE, it uses the smaller CLIP-L/14 text encoder instead of T5-XXL. Thanks to its small parameter count and the few forward passes needed per generation, aMUSEd can generate many images quickly; this benefit is especially noticeable at larger batch sizes.

The abstract from the paper is:

*We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE's parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions.*

| Model | Params |
|-------|--------|
| [amused-256](https://huggingface.co/amused/amused-256) | 603M |
| [amused-512](https://huggingface.co/amused/amused-512) | 608M |

## AmusedPipeline[[diffusers.AmusedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AmusedPipeline</name><anchor>diffusers.AmusedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/amused/pipeline_amused.py#L50</source><parameters>[{"name": "vqvae", "val": ": VQModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "transformer", "val": ": UVit2DModel"}, {"name": "scheduler", "val": ": AmusedScheduler"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AmusedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/amused/pipeline_amused.py#L83</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 12"}, {"name": "guidance_scale", "val": ": float = 10.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "latents", "val": ": typing.Optional[torch.IntTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": " = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "micro_conditioning_aesthetic_score", "val": ": int = 6"}, {"name": "micro_conditioning_crop_coord", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "temperature", "val": ": typing.Union[int, typing.Tuple[int, int], typing.List[int]] = (2, 0)"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 12) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 10.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.IntTensor`, *optional*) --
  Pre-generated tokens representing latent vectors in `self.vqvae`, to be used as inputs for image
  generation. If not provided, the starting latents will be completely masked.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument. A single vector from the
  pooled and projected final hidden states.
- **encoder_hidden_states** (`torch.Tensor`, *optional*) --
  Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **negative_encoder_hidden_states** (`torch.Tensor`, *optional*) --
  Analogous to `encoder_hidden_states` for the positive prompt.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference, with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **micro_conditioning_aesthetic_score** (`int`, *optional*, defaults to 6) --
  The targeted aesthetic score according to the laion aesthetic classifier. See
  https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of
  https://huggingface.co/papers/2307.01952.
- **micro_conditioning_crop_coord** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  The targeted height, width crop coordinates. See the micro-conditioning section of
  https://huggingface.co/papers/2307.01952.
- **temperature** (`Union[int, Tuple[int, int], List[int]]`, *optional*, defaults to (2, 0)) --
  Configures the temperature scheduler on `self.scheduler` see `AmusedScheduler#set_timesteps`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a
`tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AmusedPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AmusedPipeline

>>> pipe = AmusedPipeline.from_pretrained("amused/amused-512", variant="fp16", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.AmusedPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.AmusedPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.AmusedPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AmusedImg2ImgPipeline</name><anchor>diffusers.AmusedImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/amused/pipeline_amused_img2img.py#L60</source><parameters>[{"name": "vqvae", "val": ": VQModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "transformer", "val": ": UVit2DModel"}, {"name": "scheduler", "val": ": AmusedScheduler"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AmusedImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/amused/pipeline_amused_img2img.py#L98</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.5"}, {"name": "num_inference_steps", "val": ": int = 12"}, {"name": "guidance_scale", "val": ": float = 10.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": " = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "micro_conditioning_aesthetic_score", "val": ": int = 6"}, {"name": "micro_conditioning_crop_coord", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "temperature", "val": ": typing.Union[int, typing.Tuple[int, int], typing.List[int]] = (2, 0)"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or
  a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **strength** (`float`, *optional*, defaults to 0.5) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 12) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 10.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument. A single vector from the
  pooled and projected final hidden states.
- **encoder_hidden_states** (`torch.Tensor`, *optional*) --
  Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **negative_encoder_hidden_states** (`torch.Tensor`, *optional*) --
  Analogous to `encoder_hidden_states` for the positive prompt.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference, with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **micro_conditioning_aesthetic_score** (`int`, *optional*, defaults to 6) --
  The targeted aesthetic score according to the laion aesthetic classifier. See
  https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of
  https://huggingface.co/papers/2307.01952.
- **micro_conditioning_crop_coord** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  The targeted height, width crop coordinates. See the micro-conditioning section of
  https://huggingface.co/papers/2307.01952.
- **temperature** (`Union[int, Tuple[int, int], List[int]]`, *optional*, defaults to (2, 0)) --
  Configures the temperature scheduler on `self.scheduler` see `AmusedScheduler#set_timesteps`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a
`tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AmusedImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AmusedImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = AmusedImg2ImgPipeline.from_pretrained(
...     "amused/amused-512", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "winter mountains"
>>> input_image = (
...     load_image(
...         "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg"
...     )
...     .resize((512, 512))
...     .convert("RGB")
... )
>>> image = pipe(prompt, input_image).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.AmusedImg2ImgPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.AmusedImg2ImgPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.AmusedImg2ImgPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AmusedInpaintPipeline</name><anchor>diffusers.AmusedInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/amused/pipeline_amused_inpaint.py#L68</source><parameters>[{"name": "vqvae", "val": ": VQModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "transformer", "val": ": UVit2DModel"}, {"name": "scheduler", "val": ": AmusedScheduler"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AmusedInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/amused/pipeline_amused_inpaint.py#L114</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 12"}, {"name": "guidance_scale", "val": ": float = 10.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": " = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "micro_conditioning_aesthetic_score", "val": ": int = 6"}, {"name": "micro_conditioning_crop_coord", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "temperature", "val": ": typing.Union[int, typing.Tuple[int, int], typing.List[int]] = (2, 0)"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or
  a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`,
  `(B, H, W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`,
  or `(H, W)`.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 12) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 10.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument. A single vector from the
  pooled and projected final hidden states.
- **encoder_hidden_states** (`torch.Tensor`, *optional*) --
  Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **negative_encoder_hidden_states** (`torch.Tensor`, *optional*) --
  Analogous to `encoder_hidden_states` for the positive prompt.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference, with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **micro_conditioning_aesthetic_score** (`int`, *optional*, defaults to 6) --
  The targeted aesthetic score according to the laion aesthetic classifier. See
  https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of
  https://huggingface.co/papers/2307.01952.
- **micro_conditioning_crop_coord** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  The targeted height, width crop coordinates. See the micro-conditioning section of
  https://huggingface.co/papers/2307.01952.
- **temperature** (`Union[int, Tuple[int, int], List[int]]`, *optional*, defaults to (2, 0)) --
  Configures the temperature scheduler on `self.scheduler` see `AmusedScheduler#set_timesteps`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a
`tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AmusedInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AmusedInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = AmusedInpaintPipeline.from_pretrained(
...     "amused/amused-512", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "fall mountains"
>>> input_image = (
...     load_image(
...         "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg"
...     )
...     .resize((512, 512))
...     .convert("RGB")
... )
>>> mask = (
...     load_image(
...         "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png"
...     )
...     .resize((512, 512))
...     .convert("L")
... )
>>> pipe(prompt, input_image, mask).images[0].save("out.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.AmusedInpaintPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.AmusedInpaintPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.AmusedInpaintPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/amused.md" />

### Stable Audio
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_audio.md

# Stable Audio

Stable Audio was proposed in [Stable Audio Open](https://huggingface.co/papers/2407.14358) by Zach Evans et al. It takes a text prompt as input and predicts the corresponding sound or music sample.

Stable Audio Open generates variable-length (up to 47s) stereo audio at 44.1kHz from text prompts. It comprises three components: an autoencoder that compresses waveforms into a manageable sequence length, a T5-based text embedding for text conditioning, and a transformer-based diffusion (DiT) model that operates in the latent space of the autoencoder.

Stable Audio is trained on a corpus of around 48k audio recordings, where around 47k are from Freesound and the rest are from the Free Music Archive (FMA). All audio files are licensed under CC0, CC BY, or CC Sampling+. This data is used to train the autoencoder and the DiT.

The abstract of the paper is the following:
*Open generative models are vitally important for the community, allowing for fine-tunes and serving as baselines when presenting new models. However, most current text-to-audio models are private and not accessible for artists and researchers to build upon. Here we describe the architecture and training process of a new open-weights text-to-audio model trained with Creative Commons data. Our evaluation shows that the model's performance is competitive with the state-of-the-art across various metrics. Notably, the reported FDopenl3 results (measuring the realism of the generations) showcase its potential for high-quality stereo sound synthesis at 44.1kHz.*

This pipeline was contributed by [Yoach Lacombe](https://huggingface.co/ylacombe). The original codebase can be found at [Stability-AI/stable-audio-tools](https://github.com/Stability-AI/stable-audio-tools).

## Tips

When constructing a prompt, keep in mind:

* Descriptive prompt inputs work best; use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific where possible (e.g. "melodic techno with a fast beat and synths" works better than "techno").
* Using a *negative prompt* can significantly improve the quality of the generated audio. Try using a negative prompt of "low quality, average quality".

During inference:

* The _quality_ of the generated audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly.
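For example, these arguments could be combined as follows (a minimal sketch; the prompt and output filename are illustrative):

```py
import torch
import soundfile as sf
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

audio = pipe(
    "melodic techno with a fast beat and synths",
    negative_prompt="low quality, average quality",
    num_inference_steps=200,      # more steps -> higher quality, slower inference
    num_waveforms_per_prompt=3,   # generated waveforms are ranked from best to worst
    audio_end_in_s=10.0,
).audios

# audio[0] is the highest ranked waveform for the prompt
sf.write("techno.wav", audio[0].T.float().cpu().numpy(), pipe.vae.sampling_rate)
```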

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on audio quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [StableAudioPipeline](/docs/diffusers/main/en/api/pipelines/stable_audio#diffusers.StableAudioPipeline) for inference with bitsandbytes.

```py
import torch
import soundfile as sf
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, StableAudioDiTModel, StableAudioPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "stabilityai/stable-audio-open-1.0",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = StableAudioDiTModel.from_pretrained(
    "stabilityai/stable-audio-open-1.0",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "The sound of a hammer hitting a wooden surface."
negative_prompt = "Low quality."
audio = pipeline(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_end_in_s=10.0,
    num_waveforms_per_prompt=3,
    generator=generator,
).audios

output = audio[0].T.float().cpu().numpy()
sf.write("hammer.wav", output, pipeline.vae.sampling_rate)
```


## StableAudioPipeline[[diffusers.StableAudioPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableAudioPipeline</name><anchor>diffusers.StableAudioPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py#L78</source><parameters>[{"name": "vae", "val": ": AutoencoderOobleck"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "projection_model", "val": ": StableAudioProjectionModel"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.t5.tokenization_t5.T5Tokenizer, transformers.models.t5.tokenization_t5_fast.T5TokenizerFast]"}, {"name": "transformer", "val": ": StableAudioDiTModel"}, {"name": "scheduler", "val": ": EDMDPMSolverMultistepScheduler"}]</parameters><paramsdesc>- **vae** ([AutoencoderOobleck](/docs/diffusers/main/en/api/models/autoencoder_oobleck#diffusers.AutoencoderOobleck)) --
  Variational Auto-Encoder (VAE) model to encode and decode audio waveforms to and from latent representations.
- **text_encoder** ([T5EncoderModel](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5EncoderModel)) --
  Frozen text-encoder. StableAudio uses the encoder of
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) variant.
- **projection_model** (`StableAudioProjectionModel`) --
  A trained model used to linearly project the hidden-states from the text encoder model and the start and
  end seconds. The projected hidden-states from the encoder and the conditional seconds are concatenated to
  give the input to the transformer model.
- **tokenizer** ([T5Tokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5Tokenizer)) --
  Tokenizer to tokenize text for the frozen text-encoder.
- **transformer** ([StableAudioDiTModel](/docs/diffusers/main/en/api/models/stable_audio_transformer#diffusers.StableAudioDiTModel)) --
  A `StableAudioDiTModel` to denoise the encoded audio latents.
- **scheduler** ([EDMDPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.EDMDPMSolverMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded audio latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-audio generation using StableAudio.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableAudioPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py#L490</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "audio_end_in_s", "val": ": typing.Optional[float] = None"}, {"name": "audio_start_in_s", "val": ": typing.Optional[float] = 0.0"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_waveforms_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "initial_audio_waveforms", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "initial_audio_sampling_rate", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pt'"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide audio generation. If not defined, you need to pass `prompt_embeds`.
- **audio_end_in_s** (`float`, *optional*, defaults to 47.55) --
  Audio end index in seconds.
- **audio_start_in_s** (`float`, *optional*, defaults to 0) --
  Audio start index in seconds.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality audio at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  A higher guidance scale value encourages the model to generate audio that is closely linked to the text
  `prompt` at the expense of lower sound quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in audio generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_waveforms_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of waveforms to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for audio
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **initial_audio_waveforms** (`torch.Tensor`, *optional*) --
  Optional initial audio waveforms to use as the initial audio waveform for generation. Must be of shape
  `(batch_size, num_channels, audio_length)` or `(batch_size, audio_length)`, where `batch_size`
  corresponds to the number of prompts passed to the model.
- **initial_audio_sampling_rate** (`int`, *optional*) --
  Sampling rate of the `initial_audio_waveforms`, if they are provided. Must be the same as the model.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-computed text embeddings from the text encoder model. Can be used to easily tweak text inputs,
  *e.g.* prompt weighting. If not provided, text embeddings will be computed from `prompt` input
  argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-computed negative text embeddings from the text encoder model. Can be used to easily tweak text
  inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be computed from
  `negative_prompt` input argument.
- **attention_mask** (`torch.LongTensor`, *optional*) --
  Pre-computed attention mask to be applied to the `prompt_embeds`. If not provided, attention mask will
  be computed from `prompt` input argument.
- **negative_attention_mask** (`torch.LongTensor`, *optional*) --
  Pre-computed attention mask to be applied to the `negative_text_audio_duration_embeds`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **output_type** (`str`, *optional*, defaults to `"pt"`) --
  The output format of the generated audio. Choose between `"np"` to return a NumPy `np.ndarray` or
  `"pt"` to return a PyTorch `torch.Tensor` object. Set to `"latent"` to return the latent diffusion
  model (LDM) output.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated audio.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableAudioPipeline.__call__.example">

Examples:
```py
>>> import scipy
>>> import torch
>>> import soundfile as sf
>>> from diffusers import StableAudioPipeline

>>> repo_id = "stabilityai/stable-audio-open-1.0"
>>> pipe = StableAudioPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # define the prompts
>>> prompt = "The sound of a hammer hitting a wooden surface."
>>> negative_prompt = "Low quality."

>>> # set the seed for generator
>>> generator = torch.Generator("cuda").manual_seed(0)

>>> # run the generation
>>> audio = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     num_inference_steps=200,
...     audio_end_in_s=10.0,
...     num_waveforms_per_prompt=3,
...     generator=generator,
... ).audios

>>> output = audio[0].T.float().cpu().numpy()
>>> sf.write("hammer.wav", output, pipe.vae.sampling_rate)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableAudioPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py#L142</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableAudioPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py#L128</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
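
A minimal usage sketch (the checkpoint and prompt are only examples):

```py
import torch
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.enable_vae_slicing()   # decode latents in slices to reduce peak memory
audio = pipe("gentle rain on a window", audio_end_in_s=10.0).audios
pipe.disable_vae_slicing()  # revert to decoding in a single step
```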


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_audio.md" />

### Pipelines
https://huggingface.co/docs/diffusers/main/api/pipelines/overview.md

# Pipelines

Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components.

All pipelines are built from the base [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline)) loaded with [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) are automatically detected and the pipeline components are loaded and passed to the `__init__` function of the pipeline.

> [!WARNING]
> You shouldn't use the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) class for training. Individual components (for example, [UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel) and [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead.
>
> <br>
>
> Pipelines do not offer any training functionality. You'll notice PyTorch's autograd is disabled by decorating the `__call__()` method with a [`torch.no_grad`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) decorator because pipelines should not be used for training. If you're interested in training, please take a look at the [Training](../../training/overview) guides instead!

The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper.

| Pipeline | Tasks |
|---|---|
| [aMUSEd](amused) | text2image |
| [AnimateDiff](animatediff) | text2video |
| [Attend-and-Excite](attend_and_excite) | text2image |
| [AudioLDM](audioldm) | text2audio |
| [AudioLDM2](audioldm2) | text2audio |
| [AuraFlow](aura_flow) | text2image |
| [BLIP Diffusion](blip_diffusion) | text2image |
| [Bria 3.2](bria_3_2) | text2image |
| [CogVideoX](cogvideox) | text2video |
| [Consistency Models](consistency_models) | unconditional image generation |
| [ControlNet](controlnet) | text2image, image2image, inpainting |
| [ControlNet with Flux.1](controlnet_flux) | text2image |
| [ControlNet with Hunyuan-DiT](controlnet_hunyuandit) | text2image |
| [ControlNet with Stable Diffusion 3](controlnet_sd3) | text2image |
| [ControlNet with Stable Diffusion XL](controlnet_sdxl) | text2image |
| [ControlNet-XS](controlnetxs) | text2image |
| [ControlNet-XS with Stable Diffusion XL](controlnetxs_sdxl) | text2image |
| [Dance Diffusion](dance_diffusion) | unconditional audio generation |
| [DDIM](ddim) | unconditional image generation |
| [DDPM](ddpm) | unconditional image generation |
| [DeepFloyd IF](deepfloyd_if) | text2image, image2image, inpainting, super-resolution |
| [DiffEdit](diffedit) | inpainting |
| [DiT](dit) | text2image |
| [Flux](flux) | text2image |
| [Hunyuan-DiT](hunyuandit) | text2image |
| [I2VGen-XL](i2vgenxl) | image2video |
| [InstructPix2Pix](pix2pix) | image editing |
| [Kandinsky 2.1](kandinsky) | text2image, image2image, inpainting, interpolation |
| [Kandinsky 2.2](kandinsky_v22) | text2image, image2image, inpainting |
| [Kandinsky 3](kandinsky3) | text2image, image2image |
| [Kolors](kolors) | text2image |
| [Latent Consistency Models](latent_consistency_models) | text2image |
| [Latent Diffusion](latent_diffusion) | text2image, super-resolution |
| [Latte](latte) | text2image |
| [LEDITS++](ledits_pp) | image editing |
| [Lumina-T2X](lumina) | text2image |
| [Marigold](marigold) | depth-estimation, normals-estimation, intrinsic-decomposition |
| [MultiDiffusion](panorama) | text2image |
| [MusicLDM](musicldm) | text2audio |
| [PAG](pag) | text2image |
| [Paint by Example](paint_by_example) | inpainting |
| [PIA](pia) | image2video |
| [PixArt-α](pixart) | text2image |
| [PixArt-Σ](pixart_sigma) | text2image |
| [Self-Attention Guidance](self_attention_guidance) | text2image |
| [Semantic Guidance](semantic_stable_diffusion) | text2image |
| [Shap-E](shap_e) | text-to-3D, image-to-3D |
| [Stable Audio](stable_audio) | text2audio |
| [Stable Cascade](stable_cascade) | text2image |
| [Stable Diffusion](stable_diffusion/overview) | text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution |
| [Stable Diffusion XL](stable_diffusion/stable_diffusion_xl) | text2image, image2image, inpainting |
| [Stable Diffusion XL Turbo](stable_diffusion/sdxl_turbo) | text2image, image2image, inpainting |
| [Stable unCLIP](stable_unclip) | text2image, image variation |
| [T2I-Adapter](stable_diffusion/adapter) | text2image |
| [Text2Video](text_to_video) | text2video, video2video |
| [Text2Video-Zero](text_to_video_zero) | text2video |
| [unCLIP](unclip) | text2image, image variation |
| [UniDiffuser](unidiffuser) | text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation |
| [Value-guided planning](value_guided_sampling) | value guided sampling |
| [Wuerstchen](wuerstchen) | text2image |
| [VisualCloze](visualcloze) | text2image, image2image, subject driven generation, inpainting, style transfer, image restoration, image editing, [depth,normal,edge,pose]2image, [depth,normal,edge,pose]-estimation, virtual try-on, image relighting |

## DiffusionPipeline[[diffusers.DiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DiffusionPipeline</name><anchor>diffusers.DiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L181</source><parameters>[]</parameters></docstring>

Base class for all pipelines.

[DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) stores all components (models, schedulers, and processors) for diffusion pipelines and
provides methods for loading, downloading and saving models. It also includes methods to:

- move all PyTorch modules to the device of your choice
- enable/disable the progress bar for the denoising iteration

Class attributes:

- **config_name** (`str`) -- The configuration filename that stores the class and module names of all the
  diffusion pipeline's components.
- **_optional_components** (`List[str]`) -- List of all optional components that don't have to be passed to the
  pipeline to function (should be overridden by subclasses).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.DiffusionPipeline.__call__</anchor><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Call self as a function.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>device</name><anchor>diffusers.DiffusionPipeline.device</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L562</source><parameters>[]</parameters><rettype>`torch.device`</rettype><retdesc>The torch device on which the pipeline is located.</retdesc></docstring>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to</name><anchor>diffusers.DiffusionPipeline.to</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L370</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **dtype** (`torch.dtype`, *optional*) --
  Returns a pipeline with the specified
  [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
- **device** (`torch.Device`, *optional*) --
  Returns a pipeline with the specified
  [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)
- **silence_dtype_warnings** (`bool`, *optional*, defaults to `False`) --
  Whether to omit warnings if the target `dtype` is not compatible with the target `device`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline)</rettype><retdesc>The pipeline converted to the specified `dtype` and/or `device`.</retdesc></docstring>

Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the
arguments of `self.to(*args, **kwargs)`.

> [!TIP]
> If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, the returned pipeline is a copy of self with the desired torch.dtype and torch.device.


Here are the ways to call `to`:

- `to(dtype, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified
  [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
- `to(device, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified
  [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)
- `to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the
  specified [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) and
  [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
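
For instance, a minimal sketch of these call patterns (the checkpoint id is only an example):

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

pipe = pipe.to(torch.float16)          # dtype only
pipe = pipe.to("cuda")                 # device only
pipe = pipe.to("cuda", torch.float16)  # device and dtype together
```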








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>components</name><anchor>diffusers.DiffusionPipeline.components</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1857</source><parameters>[]</parameters></docstring>

The `self.components` property can be useful to run different pipelines with the same weights and
configurations without reallocating additional memory.

Returns (`dict`):
A dictionary containing all the modules needed to initialize the pipeline.

<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.components.example">

Examples:

```py
>>> from diffusers import (
...     StableDiffusionPipeline,
...     StableDiffusionImg2ImgPipeline,
...     StableDiffusionInpaintPipeline,
... )

>>> text2img = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.DiffusionPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.DiffusionPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>download</name><anchor>diffusers.DiffusionPipeline.download</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1473</source><parameters>[{"name": "pretrained_model_name", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name** (`str` or `os.PathLike`, *optional*) --
  A string, the *repository id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
  hosted on the Hub.
- **custom_pipeline** (`str`, *optional*) --
  Can be either:

  - A string, the *repository id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained
    pipeline hosted on the Hub. The repository must contain a file called `pipeline.py` that defines
    the custom pipeline.

  - A string, the *file name* of a community pipeline hosted on GitHub under
    [Community](https://github.com/huggingface/diffusers/tree/main/examples/community). Valid file
    names must match the file name and not the pipeline script (`clip_guided_stable_diffusion`
    instead of `clip_guided_stable_diffusion.py`). Community pipelines are always loaded from the
    current `main` branch of GitHub.

  - A path to a *directory* (`./my_pipeline_directory/`) containing a custom pipeline. The directory
    must contain a file called `pipeline.py` that defines the custom pipeline.

  > [!WARNING]
  > 🧪 This is an experimental feature and may change in the future.

  For more information on how to load and create custom pipelines, take a look at [How to contribute a
  community pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/contribute_pipeline).

- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **custom_revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id similar to
  `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a
  custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **variant** (`str`, *optional*) --
  Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.
- **dduf_file** (`str`, *optional*) --
  Load weights from the specified DDUF file.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the safetensors weights are downloaded if they're available **and** if the
  safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors
  weights. If set to `False`, safetensors weights are not loaded.
- **use_onnx** (`bool`, *optional*, defaults to `False`) --
  If set to `True`, ONNX weights will always be downloaded if present. If set to `False`, ONNX weights
  will never be downloaded. By default `use_onnx` defaults to the `_is_onnx` class attribute which is
  `False` for non-ONNX pipelines and `True` for ONNX pipelines. ONNX weights include both files ending
  with `.onnx` and `.pb`.
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This
  option should only be set to `True` for repositories you trust and in which you have read the code, as
  it will execute code present on the Hub on your local machine.</paramsdesc><paramgroups>0</paramgroups><rettype>`os.PathLike`</rettype><retdesc>A path to the downloaded pipeline.</retdesc></docstring>

Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights.
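
A minimal sketch of downloading a pipeline without instantiating it (the repository id is only an example):

```py
from diffusers import DiffusionPipeline

# downloads and caches the pipeline files; no models are loaded into memory
local_path = DiffusionPipeline.download("stable-diffusion-v1-5/stable-diffusion-v1-5")
print(local_path)  # path to the cached snapshot
```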







> [!TIP]
> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf auth login`.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.DiffusionPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!



<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_group_offload</name><anchor>diffusers.DiffusionPipeline.enable_group_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1333</source><parameters>[{"name": "onload_device", "val": ": device"}, {"name": "offload_device", "val": ": device = device(type='cpu')"}, {"name": "offload_type", "val": ": str = 'block_level'"}, {"name": "num_blocks_per_group", "val": ": typing.Optional[int] = None"}, {"name": "non_blocking", "val": ": bool = False"}, {"name": "use_stream", "val": ": bool = False"}, {"name": "record_stream", "val": ": bool = False"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "offload_to_disk_path", "val": ": typing.Optional[str] = None"}, {"name": "exclude_modules", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}]</parameters><paramsdesc>- **onload_device** (`torch.device`) --
  The device to which the group of modules are onloaded.
- **offload_device** (`torch.device`, defaults to `torch.device("cpu")`) --
  The device to which the group of modules are offloaded. This should typically be the CPU. Default is
  CPU.
- **offload_type** (`str` or `GroupOffloadingType`, defaults to "block_level") --
  The type of offloading to be applied. Can be one of "block_level" or "leaf_level". Default is
  "block_level".
- **offload_to_disk_path** (`str`, *optional*, defaults to `None`) --
  The path to the directory where parameters will be offloaded. Setting this option can be useful in
  limited RAM environment settings where a reasonable speed-memory trade-off is desired.
- **num_blocks_per_group** (`int`, *optional*) --
  The number of blocks per group when using offload_type="block_level". This is required when using
  offload_type="block_level".
- **non_blocking** (`bool`, defaults to `False`) --
  If True, offloading and onloading is done with non-blocking data transfer.
- **use_stream** (`bool`, defaults to `False`) --
  If True, offloading and onloading is done asynchronously using a CUDA stream. This can be useful for
  overlapping computation and data transfer.
- **record_stream** (`bool`, defaults to `False`) -- When enabled with `use_stream`, it marks the current tensor
  as having been used by this stream. It is faster at the expense of slightly more memory usage. Refer to
  the [PyTorch official docs](https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html) for
  more details.
- **low_cpu_mem_usage** (`bool`, defaults to `False`) --
  If True, the CPU memory usage is minimized by pinning tensors on-the-fly instead of pre-pinning them.
  This option only matters when using streamed CPU offloading (i.e. `use_stream=True`). This can be
  useful when the CPU memory is a bottleneck but may counteract the benefits of using streams.
- **exclude_modules** (`Union[str, List[str]]`, defaults to `None`) -- List of modules to exclude from offloading.</paramsdesc><paramgroups>0</paramgroups></docstring>

Applies group offloading to the internal layers of a torch.nn.Module. To understand what group offloading is,
and where it is beneficial, we need to first provide some context on how other supported offloading methods
work.

Typically, offloading is done at two levels:
- Module-level: In Diffusers, this can be enabled using the `ModelMixin::enable_model_cpu_offload()` method. It
works by offloading each component of a pipeline to the CPU for storage, and onloading to the accelerator
device when needed for computation. This method is more memory-efficient than keeping all components on the
accelerator, but the memory requirements are still quite high. For this method to work, one needs memory
equivalent to size of the model in runtime dtype + size of largest intermediate activation tensors to be able
to complete the forward pass.
- Leaf-level: In Diffusers, this can be enabled using the `ModelMixin::enable_sequential_cpu_offload()` method. It
works by offloading the lowest leaf-level parameters of the computation graph to the CPU for storage, and
onloading only the leafs to the accelerator device for computation. This uses the lowest amount of accelerator
memory, but can be slower due to the excessive number of device synchronizations.

Group offloading is a middle ground between the two methods. It works by offloading groups of internal layers
(either `torch.nn.ModuleList` or `torch.nn.Sequential`). This method uses lower memory than module-level
offloading. It is also faster than leaf-level/sequential offloading, as the number of device synchronizations
is reduced.

Another supported feature (for CUDA devices with support for asynchronous data transfer streams) is the ability
to overlap data transfer and computation to reduce the overall execution time compared to sequential
offloading. This is enabled using layer prefetching with streams, i.e., the layer that is to be executed next
starts onloading to the accelerator device while the current layer is being executed - this increases the
memory requirements slightly. Note that this implementation also supports leaf-level offloading but can be made
much faster when using streams.



<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.enable_group_offload.example">

Example:
```python
>>> from diffusers import DiffusionPipeline
>>> import torch

>>> pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)

>>> pipe.enable_group_offload(
...     onload_device=torch.device("cuda"),
...     offload_device=torch.device("cpu"),
...     offload_type="leaf_level",
...     use_stream=True,
... )
>>> image = pipe("a beautiful sunset").images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.DiffusionPipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1150</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters><paramsdesc>- **gpu_id** (`int`, *optional*) --
  The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- **device** (`torch.Device` or `str`, *optional*, defaults to None) --
  The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will
  automatically detect the available accelerator and use.</paramsdesc><paramgroups>0</paramgroups></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its
`forward` method is called, and the model remains on the accelerator until the next model runs. Memory savings are
lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution
of the `unet`.
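
A minimal sketch of enabling model-level CPU offloading before inference (the checkpoint and prompt are only examples):

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # each whole model is moved to the accelerator only when needed

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```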




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.DiffusionPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1266</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters><paramsdesc>- **gpu_id** (`int`, *optional*) --
  The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- **device** (`torch.Device` or `str`, *optional*, defaults to None) --
  The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will
  automatically detect the available accelerator and use.</paramsdesc><paramgroups>0</paramgroups></docstring>

Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state
dicts of all `torch.nn.Module` components (except those in `self._exclude_from_cpu_offload`) are saved to CPU
and then moved to `torch.device('meta')` and loaded to the accelerator only when their specific submodule has its
`forward` method called. Offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.
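
A minimal sketch, analogous to the model-level example above but offloading at the submodule level (the checkpoint is only an example):

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_sequential_cpu_offload()  # submodules are onloaded one at a time; lowest memory use, slowest

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```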




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.DiffusionPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.



<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for Flash Attention not accepting the VAE's attention shape
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pipe</name><anchor>diffusers.DiffusionPipeline.from_pipe</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2031</source><parameters>[{"name": "pipeline", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pipeline** (`DiffusionPipeline`) --
  The pipeline from which to create a new pipeline.</paramsdesc><paramgroups>0</paramgroups><rettype>`DiffusionPipeline`</rettype><retdesc>A new pipeline with the same weights and configurations as `pipeline`.</retdesc></docstring>

Create a new pipeline from a given pipeline. This method is useful to create a new pipeline from the existing
pipeline components without reallocating additional memory.







<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.from_pipe.example">

Examples:

```py
>>> from diffusers import StableDiffusionPipeline, StableDiffusionSAGPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
>>> new_pipe = StableDiffusionSAGPipeline.from_pipe(pipe)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.DiffusionPipeline.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L592</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights
    saved using
  [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained).
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing a dduf file
- **torch_dtype** (`torch.dtype` or `dict[str, Union[str, torch.dtype]]`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype. To load submodels with
  different dtype pass a `dict` (for example `{'transformer': torch.bfloat16, 'vae': torch.float16}`).
  Set the default dtype for unspecified components with `default` (for example `{'transformer':
  torch.bfloat16, 'default': torch.float16}`). If a component is not specified and no default is set,
  `torch.float32` is used.
- **custom_pipeline** (`str`, *optional*) --

  > [!WARNING]
  > 🧪 This is an experimental feature and may change in the future.

  Can be either:

  - A string, the *repo id* (for example `hf-internal-testing/diffusers-dummy-pipeline`) of a custom
    pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines
    the custom pipeline.
  - A string, the *file name* of a community pipeline hosted on GitHub under
    [Community](https://github.com/huggingface/diffusers/tree/main/examples/community). Valid file
    names must match the file name and not the pipeline script (`clip_guided_stable_diffusion`
    instead of `clip_guided_stable_diffusion.py`). Community pipelines are always loaded from the
    current main branch of GitHub.
  - A path to a directory (`./my_pipeline_directory/`) containing a custom pipeline. The directory
    must contain a file called `pipeline.py` that defines the custom pipeline.

  For more information on how to load and create custom pipelines, please have a look at [Loading and
  Adding Custom
  Pipelines](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview)
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **custom_revision** (`str`, *optional*) --
  The specific model version to use. It can be a branch name, a tag name, or a commit id similar to
  `revision` when loading a custom pipeline from the Hub. Defaults to the latest stable 🤗 Diffusers
  version.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **device_map** (`str`, *optional*) --
  Strategy that dictates how the different components of a pipeline should be placed on available
  devices. Currently, only "balanced" `device_map` is supported. Check out
  [this](https://huggingface.co/docs/diffusers/main/en/tutorials/inference_with_big_models#device-placement)
  to know more.
- **max_memory** (`Dict`, *optional*) --
  A dictionary device identifier for the maximum memory. Will default to the maximum memory available for
  each GPU and the available CPU RAM if unset.
- **offload_folder** (`str` or `os.PathLike`, *optional*) --
  The path to offload weights if device_map contains the value `"disk"`.
- **offload_state_dict** (`bool`, *optional*) --
  If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
  the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
  when there is some disk offload.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the safetensors weights are downloaded if they're available **and** if the
  safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors
  weights. If set to `False`, safetensors weights are not loaded.
- **use_onnx** (`bool`, *optional*, defaults to `None`) --
  If set to `True`, ONNX weights will always be downloaded if present. If set to `False`, ONNX weights
  will never be downloaded. By default `use_onnx` defaults to the `_is_onnx` class attribute which is
  `False` for non-ONNX pipelines and `True` for ONNX pipelines. ONNX weights include both files ending
  with `.onnx` and `.pb`.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipelines `__init__` method. See example
  below for more information.
- **variant** (`str`, *optional*) --
  Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.
- **dduf_file** (`str`, *optional*) --
  Load weights from the specified dduf file.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights.

The pipeline is set in evaluation mode (`model.eval()`) by default.

<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.from_pretrained.example">

If you get the error message below, you need to finetune the weights for your downstream task:

```
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

</ExampleCodeBlock>



> [!TIP]
> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf auth login`.

<ExampleCodeBlock anchor="diffusers.DiffusionPipeline.from_pretrained.example-2">

Examples:

```py
>>> from diffusers import DiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")

>>> # Download pipeline that requires an authorization token
>>> # For more information on access tokens, please refer to this section
>>> # of the documentation: https://huggingface.co/docs/hub/security-tokens
>>> pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

>>> # Use a different scheduler
>>> from diffusers import LMSDiscreteScheduler

>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.scheduler = scheduler
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>maybe_free_model_hooks</name><anchor>diffusers.DiffusionPipeline.maybe_free_model_hooks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1243</source><parameters>[]</parameters></docstring>

Method that performs the following:
- Offloads all components.
- Removes all model hooks that were added when using `enable_model_cpu_offload`, and then applies them again.
  In case the model has not been offloaded, this function is a no-op.
- Resets stateful diffusers hooks of denoiser components if they were added with
  `register_hook()`.

Make sure to add this function to the end of the `__call__` function of your pipeline so that it functions
correctly when applying `enable_model_cpu_offload`.
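
For example, a custom pipeline's `__call__` might end like this (a structural sketch only; the denoising logic is elided):

```py
import torch
from diffusers import DiffusionPipeline


class MyPipeline(DiffusionPipeline):
    @torch.no_grad()
    def __call__(self, prompt, num_inference_steps=50):
        # ... encode the prompt, run the denoising loop, decode the latents ...
        image = ...

        # offload the components again so repeated calls keep working with enable_model_cpu_offload
        self.maybe_free_model_hooks()
        return image
```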


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>numpy_to_pil</name><anchor>diffusers.DiffusionPipeline.numpy_to_pil</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1895</source><parameters>[{"name": "images", "val": ""}]</parameters></docstring>

Convert a NumPy image or a batch of images to a PIL image.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>remove_all_hooks</name><anchor>diffusers.DiffusionPipeline.remove_all_hooks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1141</source><parameters>[]</parameters></docstring>

Removes all hooks that were added when using `enable_sequential_cpu_offload` or `enable_model_cpu_offload`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reset_device_map</name><anchor>diffusers.DiffusionPipeline.reset_device_map</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1460</source><parameters>[]</parameters></docstring>

Resets the device maps (if any) to None.
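A sketch assuming a machine where the pipeline was sharded across devices with `device_map="balanced"`; the device map has to be reset before the whole pipeline can be moved to a single device:

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    device_map="balanced",
)

# Reset the device map before placing the whole pipeline on one device.
pipeline.reset_device_map()
pipeline.to("cuda")
```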


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>diffusers.DiffusionPipeline.save_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L237</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "variant", "val": ": typing.Optional[str] = None"}, {"name": "max_shard_size", "val": ": typing.Union[int, str, NoneType] = None"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save a pipeline to. Will be created if it doesn't exist.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **variant** (`str`, *optional*) --
  If specified, weights are saved in the format `pytorch_model.<variant>.bin`.
- **max_shard_size** (`int` or `str`, defaults to `None`) --
  The maximum size for a checkpoint before it is sharded. Each checkpoint shard will then be smaller than
  this size. If expressed as a string, it needs to be digits followed by a unit (like `"5GB"`).
  If expressed as an integer, the unit is bytes. Note that this limit will be decreased after a certain
  period of time (starting from Oct 2024) to allow users to upgrade to the latest version of `diffusers`.
  This is to establish a common default size for this argument across different libraries in the Hugging
  Face ecosystem (`transformers`, and `accelerate`, for example).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
  repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
  namespace).

- **kwargs** (`Dict[str, Any]`, *optional*) --
  Additional keyword arguments passed along to the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its
class implements both a save and loading method. The pipeline is easily reloaded using the
[from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) class method.
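A minimal save-and-reload round trip; the local directory name below is only an example:

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

# Save all saveable components (model weights, scheduler config, tokenizer, ...).
pipeline.save_pretrained("./my-sd15-pipeline", safe_serialization=True)

# Reload the pipeline from the saved directory.
reloaded = DiffusionPipeline.from_pretrained("./my-sd15-pipeline")
```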




</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.StableDiffusionMixin.enable_freeu</name><anchor>diffusers.StableDiffusionMixin.enable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2233</source><parameters>[{"name": "s1", "val": ": float"}, {"name": "s2", "val": ": float"}, {"name": "b1", "val": ": float"}, {"name": "b2", "val": ": float"}]</parameters><paramsdesc>- **s1** (`float`) --
  Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
  mitigate "oversmoothing effect" in the enhanced denoising process.
- **s2** (`float`) --
  Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
  mitigate "oversmoothing effect" in the enhanced denoising process.
- **b1** (`float`) -- Scaling factor for stage 1 to amplify the contributions of backbone features.
- **b2** (`float`) -- Scaling factor for stage 2 to amplify the contributions of backbone features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables the FreeU mechanism as in https://huggingface.co/papers/2309.11497.

The suffixes after the scaling factors represent the stages where they are being applied.

Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of the values
that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
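A short usage sketch; the scaling factors below are the values reported for Stable Diffusion v1.x in the FreeU repository and should be treated as a starting point rather than a fixed recommendation:

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Values suggested for SD v1.x; see the FreeU repository for other pipelines.
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
image = pipeline("an astronaut riding a horse").images[0]

pipeline.disable_freeu()  # restore the default behavior
```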




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.StableDiffusionMixin.disable_freeu</name><anchor>diffusers.StableDiffusionMixin.disable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2255</source><parameters>[]</parameters></docstring>
Disables the FreeU mechanism if enabled.

</div>

## PushToHubMixin[[diffusers.utils.PushToHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.utils.PushToHubMixin</name><anchor>diffusers.utils.PushToHubMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/hub_utils.py#L464</source><parameters>[]</parameters></docstring>

A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>diffusers.utils.PushToHubMixin.push_to_hub</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/hub_utils.py#L499</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": bool = False"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "variant", "val": ": typing.Optional[str] = None"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository you want to push your model, scheduler, or pipeline files to. It should
  contain your organization name when pushing to an organization. `repo_id` can also be a path to a local
  directory.
- **commit_message** (`str`, *optional*) --
  Message to commit while pushing. Defaults to `"Upload {object}"`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the
  organization's default is private. This value is ignored if the repo already exists.
- **token** (`str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. The token generated when running `hf
  auth login` (stored in `~/.huggingface`).
- **create_pr** (`bool`, *optional*, defaults to `False`) --
  Whether or not to create a PR with the uploaded files or directly commit.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether or not to convert the model weights to the `safetensors` format.
- **variant** (`str`, *optional*) --
  If specified, weights are saved in the format `pytorch_model.<variant>.bin`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub.



<ExampleCodeBlock anchor="diffusers.utils.PushToHubMixin.push_to_hub.example">

Examples:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")

# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")

# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
```

</ExampleCodeBlock>


</div></div>

## Callbacks[[diffusers.callbacks.PipelineCallback]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.callbacks.PipelineCallback</name><anchor>diffusers.callbacks.PipelineCallback</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/callbacks.py#L7</source><parameters>[{"name": "cutoff_step_ratio", "val": " = 1.0"}, {"name": "cutoff_step_index", "val": " = None"}]</parameters></docstring>

Base class for all the official callbacks used in a pipeline. This class provides a structure for implementing
custom callbacks and ensures that all callbacks have a consistent interface.

Please implement the following:
- `tensor_inputs`: a list of tensor inputs specific to your callback. You can only include variables listed in
  the `._callback_tensor_inputs` attribute of your pipeline class.
- `callback_fn`: this method defines the core functionality of your callback.
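The snippet below is a hedged sketch of a custom callback, modeled on the built-in `SDCFGCutoffCallback` shown further down; the class name and the choice of `prompt_embeds` as the tensor input are illustrative assumptions.

```py
from diffusers.callbacks import PipelineCallback


class MyCFGCutoffCallback(PipelineCallback):
    # Only tensors listed in the pipeline's `._callback_tensor_inputs` may appear here.
    tensor_inputs = ["prompt_embeds"]

    def callback_fn(self, pipeline, step_index, timestep, callback_kwargs):
        # Resolve the cutoff step from either an absolute index or a ratio of the run length.
        cutoff_step = (
            self.config.cutoff_step_index
            if self.config.cutoff_step_index is not None
            else int(pipeline.num_timesteps * self.config.cutoff_step_ratio)
        )
        if step_index == cutoff_step:
            prompt_embeds = callback_kwargs[self.tensor_inputs[0]]
            # Keep only the conditional embeddings and turn off CFG for the remaining steps.
            callback_kwargs[self.tensor_inputs[0]] = prompt_embeds[-1:]
            pipeline._guidance_scale = 0.0
        return callback_kwargs
```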


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.callbacks.SDCFGCutoffCallback</name><anchor>diffusers.callbacks.SDCFGCutoffCallback</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/callbacks.py#L69</source><parameters>[{"name": "cutoff_step_ratio", "val": " = 1.0"}, {"name": "cutoff_step_index", "val": " = None"}]</parameters></docstring>

Callback function for Stable Diffusion Pipelines. After a certain number of steps (set by `cutoff_step_ratio` or
`cutoff_step_index`), this callback will disable the CFG.

Note: This callback mutates the pipeline by changing the `_guidance_scale` attribute to 0.0 after the cutoff step.
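A usage sketch, assuming a Stable Diffusion 1.5 checkpoint and a CUDA device; cutting guidance off after 40% of the steps is purely an example value:

```py
import torch
from diffusers import StableDiffusionPipeline
from diffusers.callbacks import SDCFGCutoffCallback

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Disable classifier-free guidance after 40% of the denoising steps.
callback = SDCFGCutoffCallback(cutoff_step_ratio=0.4)
image = pipeline(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=50,
    callback_on_step_end=callback,
).images[0]
```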


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.callbacks.SDXLCFGCutoffCallback</name><anchor>diffusers.callbacks.SDXLCFGCutoffCallback</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/callbacks.py#L98</source><parameters>[{"name": "cutoff_step_ratio", "val": " = 1.0"}, {"name": "cutoff_step_index", "val": " = None"}]</parameters></docstring>

Callback function for the base Stable Diffusion XL Pipelines. After a certain number of steps (set by
`cutoff_step_ratio` or `cutoff_step_index`), this callback will disable the CFG.

Note: This callback mutates the pipeline by changing the `_guidance_scale` attribute to 0.0 after the cutoff step.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.callbacks.SDXLControlnetCFGCutoffCallback</name><anchor>diffusers.callbacks.SDXLControlnetCFGCutoffCallback</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/callbacks.py#L140</source><parameters>[{"name": "cutoff_step_ratio", "val": " = 1.0"}, {"name": "cutoff_step_index", "val": " = None"}]</parameters></docstring>

Callback function for the Controlnet Stable Diffusion XL Pipelines. After a certain number of steps (set by
`cutoff_step_ratio` or `cutoff_step_index`), this callback will disable the CFG.

Note: This callback mutates the pipeline by changing the `_guidance_scale` attribute to 0.0 after the cutoff step.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.callbacks.IPAdapterScaleCutoffCallback</name><anchor>diffusers.callbacks.IPAdapterScaleCutoffCallback</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/callbacks.py#L188</source><parameters>[{"name": "cutoff_step_ratio", "val": " = 1.0"}, {"name": "cutoff_step_index", "val": " = None"}]</parameters></docstring>

Callback function for any pipeline that inherits `IPAdapterMixin`. After a certain number of steps (set by
`cutoff_step_ratio` or `cutoff_step_index`), this callback will set the IP Adapter scale to `0.0`.

Note: This callback mutates the IP Adapter attention processors by setting the scale to 0.0 after the cutoff step.


</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.callbacks.SD3CFGCutoffCallback</name><anchor>diffusers.callbacks.SD3CFGCutoffCallback</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/callbacks.py#L212</source><parameters>[{"name": "cutoff_step_ratio", "val": " = 1.0"}, {"name": "cutoff_step_index", "val": " = None"}]</parameters></docstring>

Callback function for Stable Diffusion 3 Pipelines. After a certain number of steps (set by `cutoff_step_ratio` or
`cutoff_step_index`), this callback will disable the CFG.

Note: This callback mutates the pipeline by changing the `_guidance_scale` attribute to 0.0 after the cutoff step.


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/overview.md" />

### Kandinsky 2.1
https://huggingface.co/docs/diffusers/main/api/pipelines/kandinsky.md

# Kandinsky 2.1

Kandinsky 2.1 is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), and [Denis Dimitrov](https://github.com/denndimitrov).

The description from its GitHub page is:

*Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.*

The original codebase can be found at [ai-forever/Kandinsky-2](https://github.com/ai-forever/Kandinsky-2).

> [!TIP]
> Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## KandinskyPriorPipeline[[diffusers.KandinskyPriorPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyPriorPipeline</name><anchor>diffusers.KandinskyPriorPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py#L137</source><parameters>[{"name": "prior", "val": ": PriorTransformer"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "scheduler", "val": ": UnCLIPScheduler"}, {"name": "image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating the image prior for Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyPriorPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py#L406</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pt'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **output_type** (`str`, *optional*, defaults to `"pt"`) --
  The output format of the generated image. Choose between: `"np"` (`np.array`) or `"pt"`
  (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`KandinskyPriorPipelineOutput` or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyPriorPipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline
>>> import torch

>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior")
>>> pipe_prior.to("cuda")

>>> prompt = "red cat, 4k photo"
>>> out = pipe_prior(prompt)
>>> image_emb = out.image_embeds
>>> negative_image_emb = out.negative_image_embeds

>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1")
>>> pipe.to("cuda")

>>> image = pipe(
...     prompt,
...     image_embeds=image_emb,
...     negative_image_embeds=negative_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=100,
... ).images

>>> image[0].save("cat.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>interpolate</name><anchor>diffusers.KandinskyPriorPipeline.interpolate</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py#L181</source><parameters>[{"name": "images_and_prompts", "val": ": typing.List[typing.Union[str, PIL.Image.Image, torch.Tensor]]"}, {"name": "weights", "val": ": typing.List[float]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prior_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "device", "val": " = None"}]</parameters><paramsdesc>- **images_and_prompts** (`List[Union[str, PIL.Image.Image, torch.Tensor]]`) --
  List of prompts and images to guide the image generation.
- **weights** (`List[float]`) --
  List of weights for each condition in `images_and_prompts`.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **negative_prior_prompt** (`str`, *optional*) --
  The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if
  `guidance_scale` is less than `1`).
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if
  `guidance_scale` is less than `1`).
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.</paramsdesc><paramgroups>0</paramgroups><rettype>`KandinskyPriorPipelineOutput` or `tuple`</rettype></docstring>

Function invoked when using the prior pipeline for interpolation.



<ExampleCodeBlock anchor="diffusers.KandinskyPriorPipeline.interpolate.example">

Examples:
```py
>>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline
>>> from diffusers.utils import load_image
>>> import PIL

>>> import torch
>>> from torchvision import transforms

>>> pipe_prior = KandinskyPriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> img1 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )

>>> img2 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/starry_night.jpeg"
... )

>>> images_texts = ["a cat", img1, img2]
>>> weights = [0.3, 0.3, 0.4]
>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights)

>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
>>> pipe.to("cuda")

>>> image = pipe(
...     "",
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=150,
... ).images[0]

>>> image.save("starry_cat.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyPipeline[[diffusers.KandinskyPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyPipeline</name><anchor>diffusers.KandinskyPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py#L81</source><parameters>[{"name": "text_encoder", "val": ": MultilingualCLIP"}, {"name": "tokenizer", "val": ": XLMRobertaTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler]"}, {"name": "movq", "val": ": VQModel"}]</parameters><paramsdesc>- **text_encoder** (`MultilingualCLIP`) --
  Frozen text-encoder.
- **tokenizer** (`XLMRobertaTokenizer`) --
  Tokenizer of class `XLMRobertaTokenizer`.
- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py#L236</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "negative_image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, used to condition the image generation.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, used to condition the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyPipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline
>>> import torch

>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior")
>>> pipe_prior.to("cuda")

>>> prompt = "red cat, 4k photo"
>>> out = pipe_prior(prompt)
>>> image_emb = out.image_embeds
>>> negative_image_emb = out.negative_image_embeds

>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1")
>>> pipe.to("cuda")

>>> image = pipe(
...     prompt,
...     image_embeds=image_emb,
...     negative_image_embeds=negative_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=100,
... ).images

>>> image[0].save("cat.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyCombinedPipeline[[diffusers.KandinskyCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyCombinedPipeline</name><anchor>diffusers.KandinskyCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L113</source><parameters>[{"name": "text_encoder", "val": ": MultilingualCLIP"}, {"name": "tokenizer", "val": ": XLMRobertaTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler]"}, {"name": "movq", "val": ": VQModel"}, {"name": "prior_prior", "val": ": PriorTransformer"}, {"name": "prior_image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "prior_image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **text_encoder** (`MultilingualCLIP`) --
  Frozen text-encoder.
- **tokenizer** (`XLMRobertaTokenizer`) --
  Tokenizer of class `XLMRobertaTokenizer`.
- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.
- **prior_prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **prior_tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **prior_scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for text-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L215</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps for the prior. More denoising steps usually lead to a higher quality image
  at the expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyCombinedPipeline.__call__.example">

Examples:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"

image = pipe(prompt=prompt, num_inference_steps=25).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.KandinskyCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L196</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models (`unet`, `text_encoder`, `vae`, and `safety checker` state dicts) to CPU using 🤗
Accelerate, significantly reducing memory usage. Models are moved to a `torch.device('meta')` and loaded on a
GPU only when their specific submodule's `forward` method is called. Offloading happens on a submodule basis.
Memory savings are higher than using `enable_model_cpu_offload`, but performance is lower.
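A usage sketch mirroring the text-to-image example above, trading throughput for a smaller memory footprint:

```py
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
# Sequential offloading saves more memory than enable_model_cpu_offload(),
# at the cost of noticeably slower inference.
pipe.enable_sequential_cpu_offload()

image = pipe("A lion in galaxies, spirals, nebulae, stars", num_inference_steps=25).images[0]
```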


</div></div>

## KandinskyImg2ImgPipeline[[diffusers.KandinskyImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyImg2ImgPipeline</name><anchor>diffusers.KandinskyImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L93</source><parameters>[{"name": "text_encoder", "val": ": MultilingualCLIP"}, {"name": "movq", "val": ": VQModel"}, {"name": "tokenizer", "val": ": XLMRobertaTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDIMScheduler"}]</parameters><paramsdesc>- **text_encoder** (`MultilingualCLIP`) --
  Frozen text-encoder.
- **tokenizer** (`XLMRobertaTokenizer`) --
  Tokenizer of class `XLMRobertaTokenizer`.
- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ image encoder and decoder</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L297</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "image_embeds", "val": ": Tensor"}, {"name": "negative_image_embeds", "val": ": Tensor"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, used to condition the image generation.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, used to condition the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyImg2ImgPipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline
>>> from diffusers.utils import load_image
>>> import torch

>>> pipe_prior = KandinskyPriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> prompt = "A red cartoon frog, 4k"
>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False)

>>> pipe = KandinskyImg2ImgPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> init_image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/frog.png"
... )

>>> image = pipe(
...     prompt,
...     image=init_image,
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=100,
...     strength=0.2,
... ).images

>>> image[0].save("red_frog.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyImg2ImgCombinedPipeline[[diffusers.KandinskyImg2ImgCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyImg2ImgCombinedPipeline</name><anchor>diffusers.KandinskyImg2ImgCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L331</source><parameters>[{"name": "text_encoder", "val": ": MultilingualCLIP"}, {"name": "tokenizer", "val": ": XLMRobertaTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler]"}, {"name": "movq", "val": ": VQModel"}, {"name": "prior_prior", "val": ": PriorTransformer"}, {"name": "prior_image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "prior_image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **text_encoder** (`MultilingualCLIP`) --
  Frozen text-encoder.
- **tokenizer** (`XLMRobertaTokenizer`) --
  Tokenizer of class `XLMRobertaTokenizer`.
- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.
- **prior_prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **prior_tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **prior_scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for image-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyImg2ImgCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L434</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process. Can also accept image latents as `image`; if latents are passed directly, they are not encoded
  again.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps for the prior. More denoising steps usually lead to a higher quality image
  at the expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`. A usage sketch follows the example below.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyImg2ImgCombinedPipeline.__call__.example">

Examples:
```py
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from io import BytesIO
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25
).images[0]
```

</ExampleCodeBlock>
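The `callback` and `callback_steps` arguments accepted by `__call__` can be used to inspect the denoising process while it runs. A minimal sketch, reusing `pipe`, `prompt`, and `original_image` from the example above (the `log_progress` helper is hypothetical):

```py
def log_progress(step: int, timestep: int, latents):
    # hypothetical helper: report which denoising step just finished
    print(f"step {step} (timestep {timestep}), latents shape: {tuple(latents.shape)}")


image = pipe(
    prompt=prompt,
    image=original_image,
    num_inference_steps=25,
    callback=log_progress,
    callback_steps=5,  # invoke the callback every 5 steps
).images[0]
```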







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.KandinskyImg2ImgCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L414</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
`torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.
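
A minimal sketch of how this might replace `enable_model_cpu_offload()` when memory is the main constraint, using the same checkpoint as the example above:

```py
import torch
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
# offload on a submodule basis: each submodule is moved to the GPU only for its own forward pass
pipe.enable_sequential_cpu_offload()
```

Expect noticeably slower inference in exchange for the lower peak memory.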


</div></div>

## KandinskyInpaintPipeline[[diffusers.KandinskyInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyInpaintPipeline</name><anchor>diffusers.KandinskyInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py#L245</source><parameters>[{"name": "text_encoder", "val": ": MultilingualCLIP"}, {"name": "movq", "val": ": VQModel"}, {"name": "tokenizer", "val": ": XLMRobertaTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDIMScheduler"}]</parameters><paramsdesc>- **text_encoder** (`MultilingualCLIP`) --
  Frozen text-encoder.
- **tokenizer** (`XLMRobertaTokenizer`) --
  Tokenizer of class `XLMRobertaTokenizer`.
- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ image encoder and decoder</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image inpainting using Kandinsky 2.1

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py#L401</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image]"}, {"name": "mask_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, numpy.ndarray]"}, {"name": "image_embeds", "val": ": Tensor"}, {"name": "negative_image_embeds", "val": ": Tensor"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image` or `np.ndarray`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **mask_image** (`PIL.Image.Image`, `torch.Tensor` or `np.ndarray`) --
  `Image`, or a tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. You can pass a PyTorch tensor as the mask only if the
  image you passed is a PyTorch tensor, and it should contain one color channel (L) instead of 3, so the
  expected shape would be either `(B, 1, H, W)`, `(B, H, W)`, `(1, H, W)` or `(H, W)`. If the image is a
  PIL image or a NumPy array, the mask should also be either a PIL image or a NumPy array. If it is a PIL
  image, it will be converted to a single channel (luminance) before use. If it is a NumPy array, the
  expected shape is `(H, W)`.
- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, used to condition the image generation.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, used to condition the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyInpaintPipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline
>>> from diffusers.utils import load_image
>>> import torch
>>> import numpy as np

>>> pipe_prior = KandinskyPriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> prompt = "a hat"
>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False)

>>> pipe = KandinskyInpaintPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> init_image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )

>>> mask = np.zeros((768, 768), dtype=np.float32)
>>> mask[:250, 250:-250] = 1

>>> out = pipe(
...     prompt,
...     image=init_image,
...     mask_image=mask,
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... )

>>> image = out.images[0]
>>> image.save("cat_with_hat.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyInpaintCombinedPipeline[[diffusers.KandinskyInpaintCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyInpaintCombinedPipeline</name><anchor>diffusers.KandinskyInpaintCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L572</source><parameters>[{"name": "text_encoder", "val": ": MultilingualCLIP"}, {"name": "tokenizer", "val": ": XLMRobertaTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler]"}, {"name": "movq", "val": ": VQModel"}, {"name": "prior_prior", "val": ": PriorTransformer"}, {"name": "prior_image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "prior_image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **text_encoder** (`MultilingualCLIP`) --
  Frozen text-encoder.
- **tokenizer** (`XLMRobertaTokenizer`) --
  Tokenizer of class `XLMRobertaTokenizer`.
- **scheduler** (Union[`DDIMScheduler`, `DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.
- **prior_prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **prior_tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **prior_scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for text-guided image inpainting using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyInpaintCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L675</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "mask_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process. Can also accept image latents as `image`; if latents are passed directly, they will not be
  encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, or `np.ndarray`) --
  Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while
  black pixels will be preserved. If `mask_image` is a PIL image, it will be converted to a single
  channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3,
  so the expected shape would be `(B, H, W, 1)`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyInpaintCombinedPipeline.__call__.example">

Examples:
```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

original_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)

mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25
).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.KandinskyInpaintCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py#L655</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
`torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/kandinsky.md" />

### MusicLDM
https://huggingface.co/docs/diffusers/main/api/pipelines/musicldm.md

# MusicLDM

MusicLDM was proposed in [MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies](https://huggingface.co/papers/2308.01546) by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
MusicLDM takes a text prompt as input and predicts the corresponding music sample.

Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview) and [AudioLDM](https://huggingface.co/docs/diffusers/api/pipelines/audioldm),
MusicLDM is a text-to-music _latent diffusion model (LDM)_ that learns continuous audio representations from [CLAP](https://huggingface.co/docs/transformers/main/model_doc/clap)
latents.

MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style.

The abstract of the paper is the following:

*Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music.*

This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi).

## Tips

When constructing a prompt, keep in mind:

* Descriptive prompt inputs work best; use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific where possible (e.g. "melodic techno with a fast beat and synths" works better than "techno").
* Using a *negative prompt* can significantly improve the quality of the generated audio. Try using a negative prompt of "low quality, average quality".

During inference:

* The _quality_ of the generated audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* Multiple waveforms can be generated in one go by setting `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring is performed between the generated waveforms and the prompt text, and the audio outputs are ranked from best to worst accordingly; see the sketch after this list.
* The _length_ of the generated audio sample can be controlled by varying the `audio_length_in_s` argument.
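
A minimal sketch that combines these tips; the checkpoint matches the pipeline example further down, and the exact argument values are illustrative:

```py
import torch
import scipy
from diffusers import MusicLDMPipeline

pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16).to("cuda")

audio = pipe(
    prompt="melodic techno with a fast beat and synths, high quality, clear",
    negative_prompt="low quality, average quality",
    num_inference_steps=200,      # more steps -> higher quality audio, slower inference
    num_waveforms_per_prompt=4,   # generate several candidates; the best-scoring one comes first
    audio_length_in_s=10.0,       # length of the generated clip in seconds
).audios[0]

# the vocoder outputs audio at a 16 kHz sampling rate
scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
```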

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## MusicLDMPipeline[[diffusers.MusicLDMPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.MusicLDMPipeline</name><anchor>diffusers.MusicLDMPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/musicldm/pipeline_musicldm.py#L79</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": typing.Union[transformers.models.clap.modeling_clap.ClapTextModelWithProjection, transformers.models.clap.modeling_clap.ClapModel]"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.roberta.tokenization_roberta.RobertaTokenizer, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast]"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clap.feature_extraction_clap.ClapFeatureExtractor]"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "vocoder", "val": ": SpeechT5HifiGan"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.MusicLDMPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/musicldm/pipeline_musicldm.py#L433</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "audio_length_in_s", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": int = 200"}, {"name": "guidance_scale", "val": ": float = 2.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_waveforms_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide audio generation. If not defined, you need to pass `prompt_embeds`.
- **audio_length_in_s** (`float`, *optional*, defaults to 10.24) --
  The length of the generated audio sample in seconds.
- **num_inference_steps** (`int`, *optional*, defaults to 200) --
  The number of denoising steps. More denoising steps usually lead to a higher quality audio at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 2.0) --
  A higher guidance scale value encourages the model to generate audio that is closely linked to the text
  `prompt` at the expense of lower sound quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in audio generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_waveforms_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of waveforms to generate per prompt. If `num_waveforms_per_prompt > 1` and the text encoding
  model is a joint text-audio model ([ClapModel](https://huggingface.co/docs/transformers/main/en/model_doc/clap#transformers.ClapModel)) with a
  `ClapProcessor` tokenizer, automatic scoring is performed between the generated outputs and the input
  text. This scoring ranks the generated waveforms based on their cosine similarity to the text input in
  the joint text-audio embedding space.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for audio
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated audio. Choose between `"np"` to return a NumPy `np.ndarray` or
  `"pt"` to return a PyTorch `torch.Tensor` object. Set to `"latent"` to return the latent diffusion
  model (LDM) output.</paramsdesc><paramgroups>0</paramgroups><rettype>[AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated audio.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.MusicLDMPipeline.__call__.example">

Examples:
```py
>>> from diffusers import MusicLDMPipeline
>>> import torch
>>> import scipy

>>> repo_id = "ucsd-reach/musicldm"
>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]

>>> # save the audio sample as a .wav file
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.MusicLDMPipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/musicldm/pipeline_musicldm.py#L397</source><parameters>[{"name": "gpu_id", "val": " = 0"}]</parameters></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its
`forward` method is called, and the model remains on the accelerator until the next model runs. Memory savings are
lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution
of the `unet`.
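
As a sketch of how this applies to the example above, `enable_model_cpu_offload()` can take the place of moving the whole pipeline to the GPU (this assumes `accelerate` is installed):

```py
import torch
from diffusers import MusicLDMPipeline

pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16)
# instead of pipe.to("cuda"): each whole model is moved to the GPU only when its forward method runs
pipe.enable_model_cpu_offload()

audio = pipe("Techno music with a strong, upbeat tempo and high melodic riffs", audio_length_in_s=5.0).audios[0]
```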


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/musicldm.md" />

### DDIM
https://huggingface.co/docs/diffusers/main/api/pipelines/ddim.md

# DDIM

[Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.

The abstract from the paper is:

*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*

The original codebase can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim).

## DDIMPipeline[[diffusers.DDIMPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DDIMPipeline</name><anchor>diffusers.DDIMPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddim/pipeline_ddim.py#L34</source><parameters>[{"name": "unet", "val": ": UNet2DModel"}, {"name": "scheduler", "val": ": DDIMScheduler"}]</parameters><paramsdesc>- **unet** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  A `UNet2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
  [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler), or [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.DDIMPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddim/pipeline_ddim.py#L59</source><parameters>[{"name": "batch_size", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "use_clipped_model_output", "val": ": typing.Optional[bool] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **batch_size** (`int`, *optional*, defaults to 1) --
  The number of images to generate.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers. A value of `0`
  corresponds to DDIM and `1` corresponds to DDPM.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **use_clipped_model_output** (`bool`, *optional*, defaults to `None`) --
  If `True` or `False`, see documentation for [DDIMScheduler.step()](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler.step). If `None`, nothing is passed
  downstream to the scheduler (use `None` for schedulers which don't support this argument).
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.DDIMPipeline.__call__.example">

Example:

```py
>>> from diffusers import DDIMPipeline
>>> import PIL.Image
>>> import numpy as np

>>> # load model and scheduler
>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

>>> # run pipeline in inference (sample random noise and denoise)
>>> image = pipe(eta=0.0, num_inference_steps=50, output_type="np").images

>>> # process image to PIL (the array has shape (batch, height, width, channels) with values in [0, 1])
>>> image_processed = (image * 255).round().astype(np.uint8)
>>> image_pil = PIL.Image.fromarray(image_processed[0])

>>> # save image
>>> image_pil.save("test.png")
```

</ExampleCodeBlock>






</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.
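
The `return_dict` flag on a pipeline call controls whether you receive this dataclass or a plain tuple. A minimal sketch, reusing the `pipe` from the DDIM example above:

```py
output = pipe(num_inference_steps=50)  # ImagePipelineOutput
first_image = output.images[0]         # list of PIL images by default

(images,) = pipe(num_inference_steps=50, return_dict=False)  # plain tuple with the images as its only element
```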




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/ddim.md" />

### Text-to-video
https://huggingface.co/docs/diffusers/main/api/pipelines/text_to_video.md

# Text-to-video

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[ModelScope Text-to-Video Technical Report](https://huggingface.co/papers/2308.06571) is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang.

The abstract from the paper is:

*This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary.*

You can find additional information about Text-to-Video on the [project page](https://modelscope.cn/models/damo/text-to-video-synthesis/summary), [original codebase](https://github.com/modelscope/modelscope/), and try it out in a [demo](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis). Official checkpoints can be found at [damo-vilab](https://huggingface.co/damo-vilab) and [cerspense](https://huggingface.co/cerspense).

## Usage example

### `text-to-video-ms-1.7b`

Let's start by generating a short video with the default length of 16 frames (2s at 8 fps):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to("cuda")

prompt = "Spiderman is surfing"
video_frames = pipe(prompt).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Diffusers supports different optimization techniques to improve the latency
and memory footprint of a pipeline. Since videos are often more memory-heavy than images,
we can enable CPU offloading and VAE slicing to keep the memory footprint at bay.

Let's generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.enable_model_cpu_offload()

# memory optimization
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=64).frames[0]
video_path = export_to_video(video_frames)
video_path
```

It takes just **7 GB of GPU memory** to generate the 64 video frames using PyTorch 2.0, "fp16" precision and the techniques mentioned above.

We can also use a different scheduler easily, using the same method we'd use for Stable Diffusion:

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Here are some sample outputs:

<table>
    <tr>
        <td><center>
        An astronaut riding a horse.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astr.gif"
            alt="An astronaut riding a horse."
            style="width: 300px;" />
        </center></td>
        <td ><center>
        Darth vader surfing in waves.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vader.gif"
            alt="Darth vader surfing in waves."
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

### `cerspense/zeroscope_v2_576w` & `cerspense/zeroscope_v2_XL`

Zeroscope models are watermark-free and have been trained on specific sizes such as `576x320` and `1024x576`.
One should first generate a video using the lower resolution checkpoint [`cerspense/zeroscope_v2_576w`](https://huggingface.co/cerspense/zeroscope_v2_576w) with [TextToVideoSDPipeline](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.TextToVideoSDPipeline),
which can then be upscaled using [VideoToVideoSDPipeline](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.VideoToVideoSDPipeline) and [`cerspense/zeroscope_v2_XL`](https://huggingface.co/cerspense/zeroscope_v2_XL).


```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
from PIL import Image

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# memory optimization
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=24).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Now the video can be upscaled:

```py
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# memory optimization
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]

video_frames = pipe(prompt, video=video, strength=0.6).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Here are some sample outputs:

<table>
    <tr>
        <td ><center>
        Darth vader surfing in waves.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/darthvader_cerpense.gif"
            alt="Darth vader surfing in waves."
            style="width: 576px;" />
        </center></td>
    </tr>
</table>

## Tips

Video generation is memory-intensive, and one way to reduce your memory usage is to call `enable_forward_chunking` on the pipeline's UNet so the entire feed-forward layer isn't run at once. Breaking it up into chunks in a loop is more memory-efficient.

Check out the [Text or image-to-video](../../using-diffusers/text-img2vid) guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage.

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## TextToVideoSDPipeline[[diffusers.TextToVideoSDPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TextToVideoSDPipeline</name><anchor>diffusers.TextToVideoSDPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py#L70</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet3DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.TextToVideoSDPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py#L449</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_frames", "val": ": int = 16"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 9.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality videos at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 9.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated video. Choose between `torch.Tensor` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) instead
  of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference with the following arguments:
  `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.TextToVideoSDPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import TextToVideoSDPipeline
>>> from diffusers.utils import export_to_video

>>> pipe = TextToVideoSDPipeline.from_pretrained(
...     "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "Spiderman is surfing"
>>> video_frames = pipe(prompt).frames[0]
>>> video_path = export_to_video(video_frames)
>>> video_path
```

</ExampleCodeBlock>
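As a rough sketch of the legacy `callback`/`callback_steps` mechanism documented above (the function name is illustrative, and `pipe`/`prompt` are assumed to be set up as in the example):

```py
import torch

def log_progress(step: int, timestep: int, latents: torch.Tensor) -> None:
    # called every `callback_steps` denoising steps with the current latents
    print(f"step {step} (timestep {timestep}): latents shape {tuple(latents.shape)}")

# reuse `pipe` and `prompt` from the example above
video_frames = pipe(prompt, callback=log_progress, callback_steps=5).frames[0]
```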







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.TextToVideoSDPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py#L159</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
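A minimal sketch of calling `encode_prompt` directly (the prompt and settings are illustrative; the method is assumed here to return the positive and negative embedding tensors):

```py
import torch
from diffusers import TextToVideoSDPipeline

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")

# assumed to return (prompt_embeds, negative_prompt_embeds); the embeddings can then
# be passed to __call__ via the `prompt_embeds`/`negative_prompt_embeds` arguments
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="Spiderman is surfing",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)
```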




</div></div>

## VideoToVideoSDPipeline[[diffusers.VideoToVideoSDPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.VideoToVideoSDPipeline</name><anchor>diffusers.VideoToVideoSDPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py#L105</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet3DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.VideoToVideoSDPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py#L514</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "video", "val": ": typing.Union[typing.List[numpy.ndarray], torch.Tensor] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 15.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **video** (`List[np.ndarray]` or `torch.Tensor`) --
  Video frames or a tensor representing a video batch to be used as the starting point for the process.
  Can also accept video latents as `video`; if passing latents directly, they are not encoded again.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates the extent to transform the reference `video`. Must be between 0 and 1. `video` is used as a
  starting point, adding more noise to it the larger the `strength`. The number of denoising steps
  depends on the amount of noise initially added. When `strength` is 1, added noise is maximum and the
  denoising process runs for the full number of iterations specified in `num_inference_steps`. A value of
  1 essentially ignores `video`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 15.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in video generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated video. Choose between `torch.Tensor` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) instead
  of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.VideoToVideoSDPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
>>> from diffusers.utils import export_to_video
>>> from PIL import Image

>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe.to("cuda")

>>> prompt = "spiderman running in the desert"
>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames[0]
>>> # save the low-res video
>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4")

>>> # let's offload the text-to-video model
>>> pipe.to("cpu")

>>> # and load the video-to-video (upscaling) model
>>> pipe = DiffusionPipeline.from_pretrained(
...     "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15"
... )
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode
>>> pipe.vae.enable_slicing()

>>> # now let's upscale it
>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]

>>> # and denoise it
>>> video_frames = pipe(prompt, video=video, strength=0.6).frames[0]
>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4")
>>> video_path
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.VideoToVideoSDPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py#L194</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## TextToVideoSDPipelineOutput[[diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput</name><anchor>diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_output.py#L14</source><parameters>[{"name": "frames", "val": ": typing.Union[torch.Tensor, numpy.ndarray, typing.List[typing.List[PIL.Image.Image]]]"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or `List[List[PIL.Image.Image]]`) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for text-to-video pipelines.


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/text_to_video.md" />

### Latent Diffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/latent_diffusion.md

# Latent Diffusion

Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.

The abstract from the paper is:

*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*

The original codebase can be found at [CompVis/latent-diffusion](https://github.com/CompVis/latent-diffusion).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## LDMTextToImagePipeline[[diffusers.LDMTextToImagePipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LDMTextToImagePipeline</name><anchor>diffusers.LDMTextToImagePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py#L40</source><parameters>[{"name": "vqvae", "val": ": typing.Union[diffusers.models.autoencoders.vq_model.VQModel, diffusers.models.autoencoders.autoencoder_kl.AutoencoderKL]"}, {"name": "bert", "val": ": PreTrainedModel"}, {"name": "tokenizer", "val": ": PreTrainedTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d.UNet2DModel, diffusers.models.unets.unet_2d_condition.UNet2DConditionModel]"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler]"}]</parameters><paramsdesc>- **vqvae** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  Vector-quantized (VQ) model to encode and decode images to and from latent representations.
- **bert** (`LDMBertModel`) --
  Text-encoder model based on `BERT`.
- **tokenizer** ([BertTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/bert#transformers.BertTokenizer)) --
  A `BertTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using latent diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LDMTextToImagePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py#L75</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 1.0"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 1.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LDMTextToImagePipeline.__call__.example">

Example:

```py
>>> from diffusers import DiffusionPipeline

>>> # load model and scheduler
>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")

>>> # run pipeline in inference (sample random noise and denoise)
>>> prompt = "A painting of a squirrel eating a burger"
>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images

>>> # save images
>>> for idx, image in enumerate(images):
...     image.save(f"squirrel-{idx}.png")
```

</ExampleCodeBlock>






</div></div>

## LDMSuperResolutionPipeline[[diffusers.LDMSuperResolutionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LDMSuperResolutionPipeline</name><anchor>diffusers.LDMSuperResolutionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py#L40</source><parameters>[{"name": "vqvae", "val": ": VQModel"}, {"name": "unet", "val": ": UNet2DModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}]</parameters><paramsdesc>- **vqvae** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  Vector-quantized (VQ) model to encode and decode images to and from latent representations.
- **unet** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  A `UNet2DModel` to denoise the encoded image.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), [EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler),
  [EulerAncestralDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler_ancestral#diffusers.EulerAncestralDiscreteScheduler), [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

A pipeline for image super-resolution using latent diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LDMSuperResolutionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py#L74</source><parameters>[{"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image] = None"}, {"name": "batch_size", "val": ": typing.Optional[int] = 1"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 100"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  `Image` or tensor representing an image batch to be used as the starting point for the process.
- **batch_size** (`int`, *optional*, defaults to 1) --
  Number of images to generate.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LDMSuperResolutionPipeline.__call__.example">

Example:

```py
>>> import requests
>>> from PIL import Image
>>> from io import BytesIO
>>> from diffusers import LDMSuperResolutionPipeline
>>> import torch

>>> # load model and scheduler
>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages")
>>> pipeline = pipeline.to("cuda")

>>> # let's download an image
>>> url = (
...     "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png"
... )
>>> response = requests.get(url)
>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
>>> low_res_img = low_res_img.resize((128, 128))

>>> # run pipeline in inference (sample random noise and denoise)
>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0]
>>> # save image
>>> upscaled_image.save("ldm_generated_image.png")
```

</ExampleCodeBlock>






</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/latent_diffusion.md" />

### MultiDiffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/panorama.md

# MultiDiffusion

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation](https://huggingface.co/papers/2302.08113) is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel.

The abstract from the paper is:

*Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.*

You can find additional information about MultiDiffusion on the [project page](https://multidiffusion.github.io/), [original codebase](https://github.com/omerbt/MultiDiffusion), and try it out in a [demo](https://huggingface.co/spaces/weizmannscience/MultiDiffusion).

## Tips

While calling [StableDiffusionPanoramaPipeline](/docs/diffusers/main/en/api/pipelines/panorama#diffusers.StableDiffusionPanoramaPipeline), it's possible to set the `view_batch_size` parameter to a value > 1.
On high-performance GPUs, this can speed up the generation process at the cost of higher VRAM usage.

To generate panorama-like images, make sure you pass the `width` parameter accordingly. We recommend a width value of 2048, which is the default.

When working with panoramas, circular padding is applied to avoid stitching artifacts and ensure a seamless transition from the rightmost part of the image to the leftmost part. By enabling circular padding (set `circular_padding=True`), additional crops are applied after the rightmost point of the image, allowing the model to "see" the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper "panorama" that can be viewed with 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is also applied to ensure that the decoded latents match in RGB space.

For example, without circular padding, there is a stitching artifact (default):
![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/indoor_%20no_circular_padding.png)

But with circular padding, the right and the left parts are matching (`circular_padding=True`):
![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/indoor_%20circular_padding.png)
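Putting these tips together, a minimal sketch (the checkpoint mirrors the pipeline example below; the `width`, `view_batch_size`, and `circular_padding` values are illustrative):

```py
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of the dolomites",
    width=2048,             # default; wide aspect ratio for a panorama
    view_batch_size=2,      # denoise more views per batch on capable GPUs
    circular_padding=True,  # seamless wrap-around between left and right edges
).images[0]
image.save("panorama.png")
```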

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## StableDiffusionPanoramaPipeline[[diffusers.StableDiffusionPanoramaPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionPanoramaPipeline</name><anchor>diffusers.StableDiffusionPanoramaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py#L158</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDIMScheduler"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionPanoramaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py#L801</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = 512"}, {"name": "width", "val": ": typing.Optional[int] = 2048"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "view_batch_size", "val": ": int = 1"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "circular_padding", "val": ": bool = False"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ": typing.Any"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 2048) --
  The width in pixels of the generated image. The width is kept high because the pipeline is supposed to
  generate panorama-like images.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  The timesteps at which to generate the images. If not specified, then the default timestep spacing
  strategy of the scheduler is used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **view_batch_size** (`int`, *optional*, defaults to 1) --
  The batch size used to denoise split views. On high-performance GPUs, a higher view batch size can
  speed up generation at the cost of higher VRAM usage.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  A rescaling factor for the guidance embeddings. A value of 0.0 means no rescaling is applied.
- **circular_padding** (`bool`, *optional*, defaults to `False`) --
  If set to `True`, circular padding is applied to ensure there are no stitching artifacts. Circular
  padding allows the model to seamlessly generate a transition from the rightmost part of the image to
  the leftmost part, maintaining consistency in a 360-degree sense.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPanoramaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

>>> model_ckpt = "stabilityai/stable-diffusion-2-base"
>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained(
...     model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
... )

>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of the dolomites"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>
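As a sketch of the `callback_on_step_end` mechanism documented above (the function name is illustrative, and the callback is assumed to return the possibly-modified `callback_kwargs`):

```py
def on_step_end(pipe, step, timestep, callback_kwargs):
    # only tensors listed in `callback_on_step_end_tensor_inputs` are available here
    latents = callback_kwargs["latents"]
    print(f"step {step}: latents std {latents.std().item():.4f}")
    return callback_kwargs

# reuse `pipe` and `prompt` from the example above
image = pipe(
    prompt,
    callback_on_step_end=on_step_end,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```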







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>decode_latents_with_padding</name><anchor>diffusers.StableDiffusionPanoramaPipeline.decode_latents_with_padding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py#L563</source><parameters>[{"name": "latents", "val": ": Tensor"}, {"name": "padding", "val": ": int = 8"}]</parameters><paramsdesc>- **latents** (torch.Tensor) -- The input latents to decode.
- **padding** (int, optional) -- The number of latents to add on each side for padding. Defaults to 8.</paramsdesc><paramgroups>0</paramgroups><rettype>torch.Tensor</rettype><retdesc>The decoded image with padding removed.</retdesc></docstring>

Decode the given latents with padding for circular inference.







Notes:
- The padding is added to remove boundary artifacts and improve the output quality.
- This would slightly increase the memory usage.
- The padding pixels are then removed from the decoded image.
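A minimal sketch of calling this helper directly (the `latents` tensor below is illustrative; in practice it would come from a prior denoising run with the `pipe` loaded above):

```py
import torch

# illustrative panorama latents of shape (batch, channels, height // 8, width // 8)
latents = torch.randn(1, 4, 64, 256, dtype=torch.float16, device="cuda")

# decodes with `padding` extra latent columns on each side, then crops them away
image = pipe.decode_latents_with_padding(latents, padding=8)
```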



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionPanoramaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py#L283</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionPanoramaPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py#L701</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
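A small sketch of calling this helper directly on the `pipe` loaded above (values are illustrative):

```py
import torch

# one guidance scale per prompt in the batch, embedded into 256-dimensional vectors
w = torch.tensor([7.5, 7.5])
emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
print(emb.shape)  # torch.Size([2, 256])
```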








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_views</name><anchor>diffusers.StableDiffusionPanoramaPipeline.get_views</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py#L731</source><parameters>[{"name": "panorama_height", "val": ": int"}, {"name": "panorama_width", "val": ": int"}, {"name": "window_size", "val": ": int = 64"}, {"name": "stride", "val": ": int = 8"}, {"name": "circular_padding", "val": ": bool = False"}]</parameters><paramsdesc>- **panorama_height** (int) -- The height of the panorama.
- **panorama_width** (int) -- The width of the panorama.
- **window_size** (int, optional) -- The size of the window. Defaults to 64.
- **stride** (int, optional) -- The stride value. Defaults to 8.
- **circular_padding** (bool, optional) -- Whether to apply circular padding. Defaults to False.</paramsdesc><paramgroups>0</paramgroups><rettype>List[Tuple[int, int, int, int]]</rettype><retdesc>A list of tuples representing the views. Each tuple contains four integers
representing the start and end coordinates of the window in the panorama.</retdesc></docstring>

Generates a list of views based on the given parameters. Here, we define the mappings F_i (see Eq. 7 in the
MultiDiffusion paper, https://huggingface.co/papers/2302.08113). If the panorama's height/width is smaller than
`window_size`, the number of blocks along that dimension is 1.
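A small sketch of inspecting the views for a panorama on the `pipe` loaded above (dimensions are illustrative and assumed to be given in pixel space):

```py
# assumed to take the pixel-space height/width and return latent-space windows
views = pipe.get_views(panorama_height=512, panorama_width=2048, window_size=64, stride=8)
print(len(views))  # number of overlapping windows that are denoised and fused
print(views[0])    # (h_start, h_end, w_start, w_end) of the first window
```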








</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/panorama.md" />

### Hunyuan Video
https://huggingface.co/docs/diffusers/main/api/pipelines/hunyuan_video.md


<div style="float: right;">
  <div class="flex flex-wrap space-x-1">
    <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference" target="_blank" rel="noopener">
      <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
    </a>
  </div>
</div>

# HunyuanVideo

[HunyuanVideo](https://huggingface.co/papers/2412.03603) is a 13B parameter diffusion transformer model designed to be competitive with closed-source video foundation models and enable wider community access. This model uses a "dual-stream to single-stream" architecture to separately process the video and text tokens first, before concatenating and feeding them to the transformer to fuse the multimodal information. A pretrained multimodal large language model (MLLM) is used as the encoder because it has better image-text alignment, better image detail description and reasoning, and it can be used as a zero-shot learner if system instructions are added to user prompts. Finally, HunyuanVideo uses a 3D causal variational autoencoder to more efficiently process video data at the original resolution and frame rate.

You can find all the original HunyuanVideo checkpoints under the [Tencent](https://huggingface.co/tencent) organization.

> [!TIP]
> Click on the HunyuanVideo models in the right sidebar for more examples of video generation tasks.
>
> The examples below use a checkpoint from [hunyuanvideo-community](https://huggingface.co/hunyuanvideo-community) because the weights are stored in a layout compatible with Diffusers.

The example below demonstrates how to generate a video optimized for memory or inference speed.

<hfoptions id="usage">
<hfoption id="memory">

Refer to the [Reduce memory usage](../../optimization/memory) guide for more details about the various memory saving techniques.

The quantized HunyuanVideo model below requires ~14GB of VRAM.

```py
import torch
from diffusers import AutoModel, HunyuanVideoPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.utils import export_to_video

# quantize weights to int4 with bitsandbytes
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
      "load_in_4bit": True,
      "bnb_4bit_quant_type": "nf4",
      "bnb_4bit_compute_dtype": torch.bfloat16
      },
    components_to_quantize="transformer"
)

pipeline = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)

# model-offloading and tiling
pipeline.enable_model_cpu_offload()
pipeline.vae.enable_tiling()

prompt = "A fluffy teddy bear sits on a bed of soft pillows surrounded by children's toys."
video = pipeline(prompt=prompt, num_frames=61, num_inference_steps=30).frames[0]
export_to_video(video, "output.mp4", fps=15)
```

</hfoption>
<hfoption id="inference speed">

[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster.

```py
import torch
from diffusers import AutoModel, HunyuanVideoPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.utils import export_to_video

# quantize weights to int4 with bitsandbytes
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
      "load_in_4bit": True,
      "bnb_4bit_quant_type": "nf4",
      "bnb_4bit_compute_dtype": torch.bfloat16
      },
    components_to_quantize="transformer"
)

pipeline = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)

# model-offloading and tiling
pipeline.enable_model_cpu_offload()
pipeline.vae.enable_tiling()

# torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)

prompt = "A fluffy teddy bear sits on a bed of soft pillows surrounded by children's toys."
video = pipeline(prompt=prompt, num_frames=61, num_inference_steps=30).frames[0]
export_to_video(video, "output.mp4", fps=15)
```

</hfoption>
</hfoptions>

## Notes

- HunyuanVideo supports LoRAs with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.HunyuanVideoLoraLoaderMixin.load_lora_weights).

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers import AutoModel, HunyuanVideoPipeline
  from diffusers.quantizers import PipelineQuantizationConfig
  from diffusers.utils import export_to_video

  # quantize weights to int4 with bitsandbytes
  pipeline_quant_config = PipelineQuantizationConfig(
      quant_backend="bitsandbytes_4bit",
      quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16
        },
      components_to_quantize="transformer"
  )

  pipeline = HunyuanVideoPipeline.from_pretrained(
      "hunyuanvideo-community/HunyuanVideo",
      quantization_config=pipeline_quant_config,
      torch_dtype=torch.bfloat16,
  )

  # load LoRA weights
  pipeline.load_lora_weights("https://huggingface.co/lucataco/hunyuan-steamboat-willie-10", adapter_name="steamboat-willie")
  pipeline.set_adapters("steamboat-willie", 0.9)

  # model-offloading and tiling
  pipeline.enable_model_cpu_offload()
  pipeline.vae.enable_tiling()

  # use "In the style of SWR" to trigger the LoRA
  prompt = """
  In the style of SWR. A black and white animated scene featuring a fluffy teddy bear sits on a bed of soft pillows surrounded by children's toys.
  """
  video = pipeline(prompt=prompt, num_frames=61, num_inference_steps=30).frames[0]
  export_to_video(video, "output.mp4", fps=15)
  ```

  </details>

- Refer to the table below for recommended inference values.

  | parameter | recommended value |
  |---|---|
  | text encoder dtype | `torch.float16` |
  | transformer dtype | `torch.bfloat16` |
  | vae dtype | `torch.float16` |
  | `num_frames (k)` | 4 * `k` + 1 |
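
  One way to apply these dtypes is to load the transformer separately in `torch.bfloat16` and pass `torch_dtype=torch.float16` for the remaining components. A minimal sketch, based on the example in the `HunyuanVideoPipeline` reference below:

  ```py
  import torch
  from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel

  model_id = "hunyuanvideo-community/HunyuanVideo"

  # load the transformer in bfloat16 while the text encoders and VAE stay in float16
  transformer = HunyuanVideoTransformer3DModel.from_pretrained(
      model_id, subfolder="transformer", torch_dtype=torch.bfloat16
  )
  pipeline = HunyuanVideoPipeline.from_pretrained(
      model_id, transformer=transformer, torch_dtype=torch.float16
  )
  ```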

- Try lower `shift` values (`2.0` to `5.0`) for lower resolution videos and higher `shift` values (`7.0` to `12.0`) for higher resolution videos.
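
  A minimal sketch of adjusting `shift`, assuming `pipeline` is an already-loaded `HunyuanVideoPipeline` (its `FlowMatchEulerDiscreteScheduler` exposes `shift` in its config):

  ```py
  from diffusers import FlowMatchEulerDiscreteScheduler

  # recreate the scheduler with a higher shift value for a higher resolution video
  pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
      pipeline.scheduler.config, shift=7.0
  )
  ```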

## HunyuanVideoPipeline[[diffusers.HunyuanVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanVideoPipeline</name><anchor>diffusers.HunyuanVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py#L144</source><parameters>[{"name": "text_encoder", "val": ": LlamaModel"}, {"name": "tokenizer", "val": ": LlamaTokenizerFast"}, {"name": "transformer", "val": ": HunyuanVideoTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLHunyuanVideo"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "text_encoder_2", "val": ": CLIPTextModel"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}]</parameters><paramsdesc>- **text_encoder** (`LlamaModel`) --
  [Llava Llama3-8B](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers).
- **tokenizer** (`LlamaTokenizer`) --
  Tokenizer from [Llava Llama3-8B](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers).
- **transformer** ([HunyuanVideoTransformer3DModel](/docs/diffusers/main/en/api/models/hunyuan_video_transformer_3d#diffusers.HunyuanVideoTransformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLHunyuanVideo](/docs/diffusers/main/en/api/models/autoencoder_kl_hunyuan_video#diffusers.AutoencoderKLHunyuanVideo)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder_2** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **tokenizer_2** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using HunyuanVideo.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.HunyuanVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py#L491</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 720"}, {"name": "width", "val": ": int = 1280"}, {"name": "num_frames", "val": ": int = 129"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "guidance_scale", "val": ": float = 6.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "prompt_template", "val": ": typing.Dict[str, typing.Any] = {'template': '<|start_header_id|>system<|end_header_id|>\\n\\nDescribe the video by detailing the following aspects: 1. The main content and theme of the video.2. The color, shape, size, texture, quantity, text, and spatial relationships of the objects.3. Actions, events, behaviors temporal relationships, physical movement changes of the objects.4. background environment, light, style and atmosphere.5. camera angles, movements, and transitions used in the video:<|eot_id|><|start_header_id|>user<|end_header_id|>\\n\\n{}<|eot_id|>', 'crop_start': 95}"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **height** (`int`, defaults to `720`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `1280`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `129`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **true_cfg_scale** (`float`, *optional*, defaults to 1.0) --
  True classifier-free guidance (guidance scale) is enabled when `true_cfg_scale` > 1 and
  `negative_prompt` is provided.
- **guidance_scale** (`float`, defaults to `6.0`) --
  Embedded guidance scale is enabled by setting `guidance_scale` > 1. Higher `guidance_scale` encourages
  a model to generate images more aligned with `prompt` at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale` > 1. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `HunyuanVideoPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~HunyuanVideoPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `HunyuanVideoPipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated videos.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.HunyuanVideoPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
>>> from diffusers.utils import export_to_video

>>> model_id = "hunyuanvideo-community/HunyuanVideo"
>>> transformer = HunyuanVideoTransformer3DModel.from_pretrained(
...     model_id, subfolder="transformer", torch_dtype=torch.bfloat16
... )
>>> pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)
>>> pipe.vae.enable_tiling()
>>> pipe.to("cuda")

>>> output = pipe(
...     prompt="A cat walks on the grass, realistic",
...     height=320,
...     width=512,
...     num_frames=61,
...     num_inference_steps=30,
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=15)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.HunyuanVideoPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py#L431</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.HunyuanVideoPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py#L458</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.HunyuanVideoPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py#L418</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.HunyuanVideoPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py#L444</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div></div>

## HunyuanVideoPipelineOutput[[diffusers.pipelines.hunyuan_video.pipeline_output.HunyuanVideoPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.hunyuan_video.pipeline_output.HunyuanVideoPipelineOutput</name><anchor>diffusers.pipelines.hunyuan_video.pipeline_output.HunyuanVideoPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_output.py#L12</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for HunyuanVideo pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/hunyuan_video.md" />

### ConsisID
https://huggingface.co/docs/diffusers/main/api/pipelines/consisid.md


# ConsisID

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[Identity-Preserving Text-to-Video Generation by Frequency Decomposition](https://huggingface.co/papers/2411.17440) from Peking University, University of Rochester, and others, by Shenghai Yuan, Jinfa Huang, Xianyi He, Yunyang Ge, Yujun Shi, Liuhan Chen, Jiebo Luo, Li Yuan.

The abstract from the paper is:

*Identity-preserving text-to-video (IPT2V) generation aims to create high-fidelity videos with consistent human identity. It is an important task in video generation but remains an open problem for generative models. This paper pushes the technical frontier of IPT2V in two directions that have not been resolved in the literature: (1) A tuning-free pipeline without tedious case-by-case finetuning, and (2) A frequency-aware heuristic identity-preserving Diffusion Transformer (DiT)-based control scheme. To achieve these goals, we propose **ConsisID**, a tuning-free DiT-based controllable IPT2V model to keep human-**id**entity **consis**tent in the generated video. Inspired by prior findings in frequency analysis of vision/diffusion transformers, it employs identity-control signals in the frequency domain, where facial features can be decomposed into low-frequency global features (e.g., profile, proportions) and high-frequency intrinsic features (e.g., identity markers that remain unaffected by pose changes). First, from a low-frequency perspective, we introduce a global facial extractor, which encodes the reference image and facial key points into a latent space, generating features enriched with low-frequency information. These features are then integrated into the shallow layers of the network to alleviate training challenges associated with DiT. Second, from a high-frequency perspective, we design a local facial extractor to capture high-frequency details and inject them into the transformer blocks, enhancing the model's ability to preserve fine-grained features. To leverage the frequency information for identity preservation, we propose a hierarchical training strategy, transforming a vanilla pre-trained video generation model into an IPT2V model. Extensive experiments demonstrate that our frequency-aware heuristic scheme provides an optimal control solution for DiT-based models. Thanks to this scheme, our **ConsisID** achieves excellent results in generating high-quality, identity-preserving videos, making strides towards more effective IPT2V. The model weight of ConsID is publicly available at https://github.com/PKU-YuanGroup/ConsisID.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

This pipeline was contributed by [SHYuanBest](https://github.com/SHYuanBest). The original codebase can be found [here](https://github.com/PKU-YuanGroup/ConsisID). The original weights can be found under [hf.co/BestWishYsh](https://huggingface.co/BestWishYsh).

There are two official ConsisID checkpoints for identity-preserving text-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`BestWishYsh/ConsisID-preview`](https://huggingface.co/BestWishYsh/ConsisID-preview) | torch.bfloat16 |
| [`BestWishYsh/ConsisID-1.5`](https://huggingface.co/BestWishYsh/ConsisID-preview) | torch.bfloat16 |
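
As a minimal loading sketch (the full generation example is in the `ConsisIDPipeline` reference below), the preview checkpoint would be loaded with its recommended dtype like this:

```py
import torch
from diffusers import ConsisIDPipeline

# load the ConsisID checkpoint in its recommended inference dtype
pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)
```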

### Memory optimization

ConsisID requires about 44 GB of GPU memory to decode 49 frames (6 seconds of video at 8 FPS) with an output resolution of 720x480 (W x H), which makes it impossible to run on consumer GPUs or the free-tier T4 Colab. The following memory optimizations can be used to reduce the memory footprint. To reproduce these numbers, refer to [this](https://gist.github.com/SHYuanBest/bc4207c36f454f9e969adbb50eaf8258) script.

| Feature (applied cumulatively on top of the previous row) | Max Memory Allocated | Max Memory Reserved |
| :----------------------------- | :------------------- | :------------------ |
| -                              | 37 GB                | 44 GB               |
| enable_model_cpu_offload       | 22 GB                | 25 GB               |
| enable_sequential_cpu_offload  | 16 GB                | 22 GB               |
| vae.enable_slicing             | 16 GB                | 22 GB               |
| vae.enable_tiling              | 5 GB                 | 7 GB                |
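
Continuing the loading sketch above, the rows in the table roughly correspond to the following calls (`enable_model_cpu_offload` and `enable_sequential_cpu_offload` are alternative offloading strategies, so only one is used at a time):

```py
# offload submodules to the CPU when they are not in use
pipe.enable_model_cpu_offload()

# decode latents slice-by-slice and tile-by-tile to reduce peak VRAM
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
```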

## ConsisIDPipeline[[diffusers.ConsisIDPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ConsisIDPipeline</name><anchor>diffusers.ConsisIDPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consisid/pipeline_consisid.py#L250</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLCogVideoX"}, {"name": "transformer", "val": ": ConsisIDTransformer3DModel"}, {"name": "scheduler", "val": ": CogVideoXDPMScheduler"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. ConsisID uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([ConsisIDTransformer3DModel](/docs/diffusers/main/en/api/models/consisid_transformer3d#diffusers.ConsisIDTransformer3DModel)) --
  A text conditioned `ConsisIDTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-video generation using ConsisID.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.ConsisIDPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consisid/pipeline_consisid.py#L661</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 480"}, {"name": "width", "val": ": int = 720"}, {"name": "num_frames", "val": ": int = 49"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 6.0"}, {"name": "use_dynamic_cfg", "val": ": bool = False"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "id_vit_hidden", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "id_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "kps_cond", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to condition the generation on. Must be an image, a list of images or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_height * self.vae_scale_factor_spatial) --
  The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_width * self.vae_scale_factor_spatial) --
  The width in pixels of the generated image. This is set to 720 by default for the best results.
- **num_frames** (`int`, defaults to `49`) --
  Number of frames to generate. Must be divisible by self.vae_scale_factor_temporal. Generated video will
  contain 1 extra frame because ConsisID is conditioned with (num_seconds * fps + 1) frames where
  num_seconds is 6 and fps is 8. However, since videos can be saved at any fps, the only condition that
  needs to be satisfied is that of divisibility mentioned above.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 6) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **use_dynamic_cfg** (`bool`, *optional*, defaults to `False`) --
  If True, dynamically adjusts the guidance scale during inference. This allows the model to use a
  progressive guidance scale, improving the balance between text-guided generation and image quality over
  the course of the inference steps. Typically, early inference steps use a higher guidance scale for
  more faithful image generation, while later steps reduce it for more diverse and natural results.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `ConsisIDPipelineOutput` instead
  of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `226`) --
  Maximum sequence length in encoded prompt. Must be consistent with
  `self.transformer.config.max_text_seq_length` otherwise may lead to poor results.
- **id_vit_hidden** (`Optional[torch.Tensor]`, *optional*) --
  The tensor representing the hidden features extracted from the face model, which are used to condition
  the local facial extractor. This is crucial for the model to obtain high-frequency information of the
  face. If not provided, the local facial extractor will not run normally.
- **id_cond** (`Optional[torch.Tensor]`, *optional*) --
  The tensor representing the hidden features extracted from the clip model, which are used to condition
  the local facial extractor. This is crucial for the model to edit facial features. If not provided, the
  local facial extractor will not run normally.
- **kps_cond** (`Optional[torch.Tensor]`, *optional*) --
  A tensor that determines whether the global facial extractor uses keypoint information for conditioning.
  If provided, this tensor controls whether facial keypoints such as eyes, nose, and mouth landmarks are
  used during the generation process. This helps ensure the model retains more facial low-frequency
  information.</paramsdesc><paramgroups>0</paramgroups><rettype>[ConsisIDPipelineOutput](/docs/diffusers/main/en/api/pipelines/consisid#diffusers.pipelines.consisid.pipeline_output.ConsisIDPipelineOutput) or `tuple`</rettype><retdesc>[ConsisIDPipelineOutput](/docs/diffusers/main/en/api/pipelines/consisid#diffusers.pipelines.consisid.pipeline_output.ConsisIDPipelineOutput) if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.ConsisIDPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import ConsisIDPipeline
>>> from diffusers.pipelines.consisid.consisid_utils import prepare_face_models, process_face_embeddings_infer
>>> from diffusers.utils import export_to_video
>>> from huggingface_hub import snapshot_download

>>> snapshot_download(repo_id="BestWishYsh/ConsisID-preview", local_dir="BestWishYsh/ConsisID-preview")
>>> (
...     face_helper_1,
...     face_helper_2,
...     face_clip_model,
...     face_main_model,
...     eva_transform_mean,
...     eva_transform_std,
... ) = prepare_face_models("BestWishYsh/ConsisID-preview", device="cuda", dtype=torch.bfloat16)
>>> pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> # ConsisID works well with long and well-described prompts. Make sure the face in the image is clearly visible (e.g., preferably half-body or full-body).
>>> prompt = "The video captures a boy walking along a city street, filmed in black and white on a classic 35mm camera. His expression is thoughtful, his brow slightly furrowed as if he's lost in contemplation. The film grain adds a textured, timeless quality to the image, evoking a sense of nostalgia. Around him, the cityscape is filled with vintage buildings, cobblestone sidewalks, and softly blurred figures passing by, their outlines faint and indistinct. Streetlights cast a gentle glow, while shadows play across the boy's path, adding depth to the scene. The lighting highlights the boy's subtle smile, hinting at a fleeting moment of curiosity. The overall cinematic atmosphere, complete with classic film still aesthetics and dramatic contrasts, gives the scene an evocative and introspective feel."
>>> image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_input.png?download=true"

>>> id_cond, id_vit_hidden, image, face_kps = process_face_embeddings_infer(
...     face_helper_1,
...     face_clip_model,
...     face_helper_2,
...     eva_transform_mean,
...     eva_transform_std,
...     face_main_model,
...     "cuda",
...     torch.bfloat16,
...     image,
...     is_align_face=True,
... )

>>> video = pipe(
...     image=image,
...     prompt=prompt,
...     num_inference_steps=50,
...     guidance_scale=6.0,
...     use_dynamic_cfg=False,
...     id_vit_hidden=id_vit_hidden,
...     id_cond=id_cond,
...     kps_cond=face_kps,
...     generator=torch.Generator("cuda").manual_seed(42),
... )
>>> export_to_video(video.frames[0], "output.mp4", fps=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.ConsisIDPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consisid/pipeline_consisid.py#L355</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## ConsisIDPipelineOutput[[diffusers.pipelines.consisid.pipeline_output.ConsisIDPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.consisid.pipeline_output.ConsisIDPipelineOutput</name><anchor>diffusers.pipelines.consisid.pipeline_output.ConsisIDPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consisid/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for ConsisID pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/consisid.md" />

### ControlNet-XS
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnetxs.md

# ControlNet-XS

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

ControlNet-XS was introduced in [ControlNet-XS](https://vislearn.github.io/ControlNet-XS/) by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the [original ControlNet](https://huggingface.co/papers/2302.05543) can be made much smaller and still produce good results.

Like the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

ControlNet-XS generates images with comparable quality to a regular ControlNet, but it is 20-25% faster ([see benchmark](https://github.com/UmerHA/controlnet-xs-benchmark/blob/main/Speed%20Benchmark.ipynb) with StableDiffusion-XL) and uses ~45% less memory.

Here's the overview from the [project page](https://vislearn.github.io/ControlNet-XS/):

*With increasing computing capabilities, current model architectures appear to follow the trend of simply upscaling all components without validating the necessity for doing so. In this project we investigate the size and architectural design of ControlNet [Zhang et al., 2023] for controlling the image generation process with stable diffusion-based models. We show that a new architecture with as little as 1% of the parameters of the base model achieves state-of-the art results, considerably better than ControlNet in terms of FID score. Hence we call it ControlNet-XS. We provide the code for controlling StableDiffusion-XL [Podell et al., 2023] (Model B, 48M Parameters) and StableDiffusion 2.1 [Rombach et al. 2022] (Model B, 14M Parameters), all under openrail license.*

This model was contributed by [UmerHA](https://twitter.com/UmerHAdil). ❤️

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## StableDiffusionControlNetXSPipeline[[diffusers.StableDiffusionControlNetXSPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionControlNetXSPipeline</name><anchor>diffusers.StableDiffusionControlNetXSPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs.py#L100</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.controlnets.controlnet_xs.UNetControlNetXSModel]"}, {"name": "controlnet", "val": ": ControlNetXSAdapter"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetControlNetXSModel to denoise the encoded image latents.
- **controlnet** (`ControlNetXSAdapter`) --
  A `ControlNetXSAdapter` to be used in combination with `unet` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion with ControlNet-XS guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [loaders.FromSingleFileMixin.from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionControlNetXSPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs.py#L643</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_guidance_start", "val": ": float = 0.0"}, {"name": "control_guidance_end", "val": ": float = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image defaults to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetXSPipeline.__call__.example">

Examples:
```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionControlNetXSPipeline, ControlNetXSAdapter
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )

>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5

>>> controlnet = ControlNetXSAdapter.from_pretrained(
...     "UmerHA/Testing-ConrolNetXS-SD2.1-canny", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionControlNetXSPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)
>>> # generate image
>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionControlNetXSPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs.py#L232</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
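
In practice, `encode_prompt` is usually called internally by `__call__`, but it can also be used to precompute embeddings that are then passed back in through `prompt_embeds` / `negative_prompt_embeds`. A minimal sketch (not an official example; it assumes the `pipe` and `canny_image` objects from the example above and a CUDA device):

```py
# Precompute the text embeddings once and reuse them across calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, bad quality, sketches",
)

# Pass the embeddings instead of the raw strings.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=canny_image,
).images[0]
```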




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.
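
The fields correspond directly to what the pipelines return. An illustrative sketch (assuming a pipeline `pipe` and a `prompt` are already defined):

```py
# Default: a StableDiffusionPipelineOutput dataclass.
output = pipe(prompt)
image = output.images[0]                  # List[PIL.Image.Image] (or np.ndarray)
flagged = output.nsfw_content_detected    # List[bool], or None if no safety checker ran

# With return_dict=False: a plain tuple with the same contents.
images, nsfw_flags = pipe(prompt, return_dict=False)
```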




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnetxs.md" />

### ControlNet with Stable Diffusion XL
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_sdxl.md

# ControlNet with Stable Diffusion XL

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 [Diffusers](https://huggingface.co/diffusers) Hub organization, and browse [community-trained](https://huggingface.co/models?other=stable-diffusion-xl&other=controlnet) checkpoints on the Hub.

> [!WARNING]
> 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose) and leave us feedback on how we can improve!

If you don't see a checkpoint you're interested in, you can train your own SDXL ControlNet with our [training script](../../../../../examples/controlnet/README_sdxl).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
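
For instance, a different scheduler can be swapped in by rebuilding it from the current scheduler's config; the sketch below uses `UniPCMultistepScheduler` purely as an example choice and assumes a pipeline `pipe` is already loaded:

```py
from diffusers import UniPCMultistepScheduler

# Any compatible scheduler can be constructed from the existing scheduler's config.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```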

## StableDiffusionXLControlNetPipeline[[diffusers.StableDiffusionXLControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetPipeline</name><anchor>diffusers.StableDiffusionXLControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py#L184</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **text_encoder_2** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Second frozen text-encoder
  ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **tokenizer_2** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark](https://github.com/ShieldMnt/invisible-watermark/) library to
  watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no
  watermarker is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
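
A rough sketch of how these loaders compose with the pipeline (model IDs below are examples, and the LoRA repository name is a placeholder):

```py
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# LoRA weights (replace with a real SDXL LoRA repository or local path)
pipe.load_lora_weights("path/or/repo-with-sdxl-lora")

# IP-Adapter weights for image prompting
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
```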





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py#L1010</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": 
"**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
  and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, pooled text embeddings are generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
  weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should usually be
  the same as `target_size`. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned containing the output images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetPipeline.__call__.example">

Examples:
```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )

>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5  # recommended for good generalization
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)

>>> # generate image
>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py#L303</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
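
As with the other SDXL pipelines, `encode_prompt` returns both the per-token and pooled embeddings, which can be precomputed and passed back to `__call__`. A minimal sketch (assuming the `pipe` and `canny_image` objects from the example above and a CUDA device):

```py
# Precompute all four SDXL text embeddings once.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle",
    negative_prompt="low quality, bad quality, sketches",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# Reuse the embeddings; pooled embeddings must accompany the per-token ones.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=canny_image,
).images[0]
```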




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLControlNetPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py#L949</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
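
This embedding is only consumed when the UNet was trained with a `time_cond_proj_dim` (for example, guidance-distilled models); it is a sinusoidal encoding of the scaled guidance weight. A rough standalone sketch of the computation (an approximation of the pipeline method, not a verbatim copy):

```py
import torch

def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32):
    # w: 1-D tensor of guidance weights, one per sample in the batch.
    w = w * 1000.0
    half_dim = embedding_dim // 2
    # Geometric progression of frequencies, as in standard sinusoidal timestep embeddings.
    freqs = torch.exp(
        -torch.log(torch.tensor(10000.0)) * torch.arange(half_dim, dtype=dtype) / (half_dim - 1)
    )
    args = w.to(dtype)[:, None] * freqs[None, :]
    emb = torch.cat([torch.sin(args), torch.cos(args)], dim=1)
    if embedding_dim % 2 == 1:  # zero-pad when embedding_dim is odd
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb  # shape: (len(w), embedding_dim)
```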








</div></div>

## StableDiffusionXLControlNetImg2ImgPipeline[[diffusers.StableDiffusionXLControlNetImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetImg2ImgPipeline</name><anchor>diffusers.StableDiffusionXLControlNetImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py#L167</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets
  as a list, the outputs from each ControlNet are added together to create one combined additional
  conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `"False"`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the
  config of `stabilityai/stable-diffusion-xl-refiner-1-0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `"True"`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
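
A rough end-to-end sketch of the image-to-image flow (model IDs and parameter values are illustrative): the starting image goes in as `image`, the ControlNet condition as `control_image`, and `strength` controls how far the result may drift from the starting image.

```py
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetImg2ImgPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

init_image = load_image(
    "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)

# Derive a Canny edge map from the starting image to use as the ControlNet condition.
edges = cv2.Canny(np.array(init_image), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=2))

image = pipe(
    "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    image=init_image,
    control_image=control_image,
    strength=0.8,                      # how far the result may move away from `image`
    controlnet_conditioning_scale=0.8,
).images[0]
```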





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py#L1090</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 0.8"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], 
diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The initial image to be used as the starting point for the image generation process. Can also accept
  image latents as `image`; if latents are passed directly, they will not be encoded again.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition. ControlNet uses this input condition to generate guidance for the
  `unet`. If the type is specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image`
  can also be accepted as an image. The dimensions of the output image default to `image`'s dimensions. If
  height and/or width are passed, `image` is resized accordingly. If multiple ControlNets are specified in
  `init`, images must be passed as a list such that each element of the list can be correctly batched for
  input to a single ControlNet.
- **height** (`int`, *optional*, defaults to the size of control_image) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to the size of control_image) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 0.8) --
  The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original unet. If multiple ControlNets are specified in init, you can set the
  corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  In this mode, the ControlNet encoder tries its best to recognize the content of the input image even if
  you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the controlnet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the controlnet stops applying.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` in most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`. A minimal callback sketch is
  shown after the example block below.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) if `return_dict` is True, otherwise a `tuple`
containing the output images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetImg2ImgPipeline.__call__.example">

Examples:
```py
>>> # pip install accelerate transformers safetensors diffusers

>>> import torch
>>> import numpy as np
>>> from PIL import Image

>>> from transformers import DPTImageProcessor, DPTForDepthEstimation
>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL
>>> from diffusers.utils import load_image


>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
>>> feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-depth-sdxl-1.0-small",
...     variant="fp16",
...     use_safetensors=True,
...     torch_dtype=torch.float16,
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     controlnet=controlnet,
...     vae=vae,
...     variant="fp16",
...     use_safetensors=True,
...     torch_dtype=torch.float16,
... )
>>> pipe.enable_model_cpu_offload()


>>> def get_depth_map(image):
...     image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda")
...     with torch.no_grad(), torch.autocast("cuda"):
...         depth_map = depth_estimator(image).predicted_depth

...     depth_map = torch.nn.functional.interpolate(
...         depth_map.unsqueeze(1),
...         size=(1024, 1024),
...         mode="bicubic",
...         align_corners=False,
...     )
...     depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
...     depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
...     depth_map = (depth_map - depth_min) / (depth_max - depth_min)
...     image = torch.cat([depth_map] * 3, dim=1)
...     image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
...     image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
...     return image


>>> prompt = "A robot, 4k photo"
>>> image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... ).resize((1024, 1024))
>>> controlnet_conditioning_scale = 0.5  # recommended for good generalization
>>> depth_image = get_depth_map(image)

>>> images = pipe(
...     prompt,
...     image=image,
...     control_image=depth_image,
...     strength=0.99,
...     num_inference_steps=50,
...     controlnet_conditioning_scale=controlnet_conditioning_scale,
... ).images
>>> images[0].save("robot_cat.png")
```

</ExampleCodeBlock>
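The `callback_on_step_end` and `callback_on_step_end_tensor_inputs` arguments documented above can be used to observe or modify tensors while the denoising loop runs. The snippet below is a minimal sketch rather than part of the official example; it assumes the `pipe`, `prompt`, `image`, and `depth_image` objects created above and simply logs the latents at each step.

```py
# Minimal sketch (assumes `pipe`, `prompt`, `image`, and `depth_image` from the example above).
def log_latents(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in `callback_on_step_end_tensor_inputs`.
    latents = callback_kwargs["latents"]
    print(f"step {step}: latents mean {latents.mean().item():.4f}")
    return callback_kwargs  # the callback must return the (possibly modified) kwargs


images = pipe(
    prompt,
    image=image,
    control_image=depth_image,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images
```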







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py#L297</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
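The embeddings returned by `encode_prompt` can be passed back into the pipeline call via `prompt_embeds`, `negative_prompt_embeds`, `pooled_prompt_embeds`, and `negative_pooled_prompt_embeds`, which is handy when the same prompt is reused across many generations. The following is a hedged sketch that assumes the `pipe`, `image`, and `depth_image` objects from the example above.

```py
# Hedged sketch: encode once, reuse the embeddings across calls.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="A robot, 4k photo",
    negative_prompt="low quality, blurry",
    do_classifier_free_guidance=True,
)

images = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=image,
    control_image=depth_image,
).images
```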




</div></div>

## StableDiffusionXLControlNetInpaintPipeline[[diffusers.StableDiffusionXLControlNetInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetInpaintPipeline</name><anchor>diffusers.StableDiffusionXLControlNetInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py#L174</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image inpainting using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py#L1178</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.9999"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": 
typing.Tuple[int, int] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
  be masked out with `mask_image` and repainted according to `prompt`.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of the margin in the crop to be applied to the image and masking. If `None`, no crop is applied
  to image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all of the masked area, and then expand that area
  based on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area
  before resizing to the original image size for inpainting. This is useful when the masked area is small
  while the image is large and contains information irrelevant to inpainting, such as background. A usage
  sketch is shown after the example block below.
- **strength** (`float`, *optional*, defaults to 0.9999) --
  Conceptually, indicates how much to transform the masked portion of the reference `image`. Must be
  between 0 and 1. `image` will be used as a starting point, adding more noise to it the larger the
  `strength`. The number of denoising steps depends on the amount of noise initially added. When
  `strength` is 1, added noise will be maximum and the denoising process will run for the full number of
  iterations specified in `num_inference_steps`. A value of 1, therefore, essentially ignores the masked
  portion of the reference `image`. Note that in the case of `denoising_start` being declared as an
  integer, the value of `strength` will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetInpaintPipeline.__call__.example">

Examples:
```py
>>> # !pip install transformers accelerate opencv-python
>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler
>>> from diffusers.utils import load_image
>>> from PIL import Image
>>> import cv2
>>> import numpy as np
>>> import torch

>>> init_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
... )
>>> init_image = init_image.resize((1024, 1024))

>>> generator = torch.Generator(device="cpu").manual_seed(1)

>>> mask_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
... )
>>> mask_image = mask_image.resize((1024, 1024))


>>> def make_canny_condition(image):
...     image = np.array(image)
...     image = cv2.Canny(image, 100, 200)
...     image = image[:, :, None]
...     image = np.concatenate([image, image, image], axis=2)
...     image = Image.fromarray(image)
...     return image


>>> control_image = make_canny_condition(init_image)

>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> image = pipe(
...     "a handsome man with ray-ban sunglasses",
...     num_inference_steps=20,
...     generator=generator,
...     eta=1.0,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
... ).images[0]
```

</ExampleCodeBlock>
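When only a small region of a large image is masked, the `padding_mask_crop` argument documented above can improve detail: the pipeline inpaints a crop around the mask and pastes the result back. The call below is a hedged sketch that reuses the `pipe`, `init_image`, `mask_image`, `control_image`, and `generator` objects from the example above; the 32-pixel margin is an illustrative value.

```py
# Hedged sketch: inpaint only a crop around the mask, with a 32-pixel margin (illustrative value).
image = pipe(
    "a handsome man with ray-ban sunglasses",
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
    padding_mask_crop=32,
    num_inference_steps=20,
    generator=generator,
).images[0]
```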







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py#L295</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_sdxl.md" />

### Image-to-Video Generation with PIA (Personalized Image Animator)
https://huggingface.co/docs/diffusers/main/api/pipelines/pia.md

# Image-to-Video Generation with PIA (Personalized Image Animator)

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

## Overview

[PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models](https://huggingface.co/papers/2312.13964) by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen

Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance.

[Project page](https://pi-animator.github.io/)

## Available Pipelines

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [PIAPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pia/pipeline_pia.py) | *Image-to-Video Generation with PIA* | |

## Available checkpoints

Motion Adapter checkpoints for PIA can be found under the [OpenMMLab org](https://huggingface.co/openmmlab/PIA-condition-adapter). These checkpoints are meant to work with any model based on Stable Diffusion 1.5.

## Usage example

PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the ResNet and attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9-channel input convolution layer.

The following example demonstrates how to use PIA to generate a video from a single image.

```python
import torch
from diffusers import (
    EulerDiscreteScheduler,
    MotionAdapter,
    PIAPipeline,
)
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16)

pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a field"
negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality"

generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-animation.gif")
```

Here are some sample outputs:

<table>
    <tr>
        <td><center>
        cat in a field.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pia-default-output.gif"
            alt="cat in a field"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>


> [!TIP]
> If you plan on using a scheduler that can clip samples, make sure to disable it by setting `clip_sample=False` in the scheduler as this can also have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to `linear`.
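As a hedged illustration of the tip above, the override below swaps in `DDIMScheduler` (one of the schedulers that exposes `clip_sample`) while forcing a `linear` beta schedule; the exact scheduler choice here is an assumption, not a requirement.

```python
from diffusers import DDIMScheduler

# Sketch: disable sample clipping and use a linear beta schedule when configuring the scheduler.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, clip_sample=False, beta_schedule="linear"
)
```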

## Using FreeInit

[FreeInit: Bridging Initialization Gap in Video Diffusion Models](https://huggingface.co/papers/2312.07537) by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu.

FreeInit is an effective method that improves temporal consistency and the overall quality of videos generated with video diffusion models without any additional training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter, and various other video generation models seamlessly at inference time, and works by iteratively refining the latent initialization noise. More details can be found in the paper.

The following example demonstrates the usage of FreeInit.

```python
import torch
from diffusers import (
    DDIMScheduler,
    MotionAdapter,
    PIAPipeline,
)
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter)

# enable FreeInit
# Refer to the enable_free_init documentation for a full list of configurable parameters
pipe.enable_free_init(method="butterworth", use_fast_sampling=True)

# Memory saving options
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a field"
negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality"

generator = torch.Generator("cpu").manual_seed(0)

output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-freeinit-animation.gif")
```

<table>
    <tr>
        <td><center>
        cat in a field.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pia-freeinit-output-cat.gif"
            alt="cat in a field"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>


> [!WARNING]
> FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the `num_iters` parameter that is set when enabling it. Setting the `use_fast_sampling` parameter to `True` can improve the overall performance (at the cost of lower quality compared to when `use_fast_sampling=False` but still better results than vanilla video generation models).
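The knobs mentioned in the warning can be tuned directly when enabling FreeInit; the values below are illustrative rather than recommendations.

```python
# Sketch: trade quality for speed by lowering num_iters and enabling fast sampling (illustrative values).
pipe.enable_free_init(num_iters=3, use_fast_sampling=True)
# ...run the pipeline as usual, then disable the extra passes when they are no longer needed:
pipe.disable_free_init()
```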

## PIAPipeline[[diffusers.PIAPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PIAPipeline</name><anchor>diffusers.PIAPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pia/pipeline_pia.py#L134</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}, {"name": "motion_adapter", "val": ": typing.Optional[diffusers.models.unets.unet_motion_model.MotionAdapter] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.PIAPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pia/pipeline_pia.py#L672</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_frames", "val": ": typing.Optional[int] = 16"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "motion_scale", "val": ": int = 0"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to be used for video generation.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **motion_scale** (`int`, *optional*, defaults to 0) --
  Parameter that controls the amount and type of motion that is added to the image. Increasing the value
  increases the amount of motion, while specific ranges of values control the type of motion that is
  added. Must be between 0 and 8. Set between 0-2 to only increase the amount of motion. Set between 3-5
  to create looping motion. Set between 6-8 to perform motion with image style transfer. A usage sketch is
  shown after the example below.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[PIAPipelineOutput](/docs/diffusers/main/en/api/pipelines/pia#diffusers.pipelines.pia.PIAPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [PIAPipelineOutput](/docs/diffusers/main/en/api/pipelines/pia#diffusers.pipelines.pia.PIAPipelineOutput) is returned, otherwise a
`tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.PIAPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import EulerDiscreteScheduler, MotionAdapter, PIAPipeline
>>> from diffusers.utils import export_to_gif, load_image

>>> adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
>>> pipe = PIAPipeline.from_pretrained(
...     "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
... )

>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
>>> image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
... )
>>> image = image.resize((512, 512))
>>> prompt = "cat in a hat"
>>> negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality, deformed, distorted"
>>> generator = torch.Generator("cpu").manual_seed(0)
>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
>>> frames = output.frames[0]
>>> export_to_gif(frames, "pia-animation.gif")
```

</ExampleCodeBlock>
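
To use the `motion_scale` argument described above, here is a minimal sketch reusing `pipe`, `image`, `prompt`, `negative_prompt`, and `generator` from the example (a value in the documented 3-5 range should bias the output toward looping motion):

```py
>>> output = pipe(
...     image=image,
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     generator=generator,
...     motion_scale=4,  # 3-5 is the documented looping-motion range
... )
>>> export_to_gif(output.frames[0], "pia-looping.gif")
```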







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.PIAPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pia/pipeline_pia.py#L213</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
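
A minimal sketch of calling this method directly, assuming the `pipe` object from the `__call__` example above and that the method returns a `(prompt_embeds, negative_prompt_embeds)` tuple as in other Diffusers pipelines:

```py
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt="cat in a hat",
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="worst quality, low quality",
... )
>>> output = pipe(
...     image=image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     generator=generator,
... )
```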




</div></div>

    - enable_freeu
    - disable_freeu
    - enable_free_init
    - disable_free_init
    - enable_vae_slicing
    - disable_vae_slicing
    - enable_vae_tiling
    - disable_vae_tiling

## PIAPipelineOutput[[diffusers.pipelines.pia.PIAPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.pia.PIAPipelineOutput</name><anchor>diffusers.pipelines.pia.PIAPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pia/pipeline_pia.py#L120</source><parameters>[{"name": "frames", "val": ": typing.Union[torch.Tensor, numpy.ndarray, typing.List[typing.List[PIL.Image.Image]]]"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  Nested list of length `batch_size` with denoised PIL image sequences of length `num_frames`, NumPy array of
  shape `(batch_size, num_frames, channels, height, width)`, or Torch tensor of shape `(batch_size, num_frames,
  channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for PIAPipeline.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/pia.md" />

### Stable Cascade
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_cascade.md

# Stable Cascade

This model is built upon the [Würstchen](https://openreview.net/forum?id=gU58d5QeGv) architecture, and its main
difference from other models like Stable Diffusion is that it works in a much smaller latent space. Why is this
important? The smaller the latent space, the **faster** you can run inference and the **cheaper** the training becomes.
How small is the latent space? Stable Diffusion uses a compression factor of 8, resulting in a 1024x1024 image being
encoded to 128x128. Stable Cascade achieves a compression factor of 42, meaning that it is possible to encode a
1024x1024 image to 24x24 while maintaining crisp reconstructions. The text-conditional model is then trained in this
highly compressed latent space. Previous versions of this architecture achieved a 16x cost reduction over Stable
Diffusion 1.5.

This kind of model is therefore well suited for use cases where efficiency is important. Furthermore, all known extensions
such as finetuning, LoRA, ControlNet, IP-Adapter, LCM, etc. are also possible with this method.

The original codebase can be found at [Stability-AI/StableCascade](https://github.com/Stability-AI/StableCascade).

## Model Overview
Stable Cascade consists of three models: Stage A, Stage B and Stage C, representing a cascade to generate images,
hence the name "Stable Cascade".

Stages A and B are used to compress images, similar to the role of the VAE in Stable Diffusion.
However, with this setup, a much higher compression of images can be achieved. While the Stable Diffusion models use a
spatial compression factor of 8, encoding a 1024 x 1024 image to 128 x 128, Stable Cascade achieves
a compression factor of 42. This encodes a 1024 x 1024 image to 24 x 24 while still being able to accurately decode the
image, which brings the great benefit of cheaper training and inference. Stage C, in turn, is responsible
for generating the small 24 x 24 latents given a text prompt.

The Stage C model operates on the small 24 x 24 latents and denoises the latents conditioned on text prompts. The model is also the largest component in the Cascade pipeline and is meant to be used with the `StableCascadePriorPipeline`.

The Stage B and Stage A models are used with the `StableCascadeDecoderPipeline` and are responsible for generating the final image given the small 24 x 24 latents.

> [!WARNING]
> There are some restrictions on data types that can be used with the Stable Cascade models. The official checkpoints for the  `StableCascadePriorPipeline` do not support the `torch.float16` data type. Please use `torch.bfloat16` instead.
>
> In order to use the `torch.bfloat16` data type with the `StableCascadeDecoderPipeline` you need to have PyTorch 2.2.0 or higher installed. This also means that using the `StableCascadeCombinedPipeline` with `torch.bfloat16` requires PyTorch 2.2.0 or higher, since it calls the `StableCascadeDecoderPipeline` internally.
>
> If it is not possible to install PyTorch 2.2.0 or higher in your environment, the `StableCascadeDecoderPipeline` can be used on its own with the `torch.float16` data type. You can download the full precision or `bf16` variant weights for the pipeline and cast the weights to `torch.float16`.
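
For example, a minimal sketch of that last fallback (assuming PyTorch < 2.2.0): load the `bf16` variant of the decoder and let `from_pretrained` cast the weights to `torch.float16`. The usage example below follows the same pattern for the decoder.

```python
import torch
from diffusers import StableCascadeDecoderPipeline

# Assumption: PyTorch < 2.2.0, so bfloat16 cannot be used with the decoder.
# Load the bf16 variant weights and cast them to float16 instead.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.float16
)
```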

## Usage example

```python
import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", variant="bf16", torch_dtype=torch.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.float16)

prior.enable_model_cpu_offload()
prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

decoder.enable_model_cpu_offload()
decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings.to(torch.float16),
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
).images[0]
decoder_output.save("cascade.png")
```

## Using the Lite Versions of the Stage B and Stage C models

```python
import torch
from diffusers import (
    StableCascadeDecoderPipeline,
    StableCascadePriorPipeline,
    StableCascadeUNet,
)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior_unet = StableCascadeUNet.from_pretrained("stabilityai/stable-cascade-prior", subfolder="prior_lite")
decoder_unet = StableCascadeUNet.from_pretrained("stabilityai/stable-cascade", subfolder="decoder_lite")

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", prior=prior_unet)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", decoder=decoder_unet)

prior.enable_model_cpu_offload()
prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

decoder.enable_model_cpu_offload()
decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
).images[0]
decoder_output.save("cascade.png")
```

## Loading original checkpoints with `from_single_file`

Loading the original format checkpoints is supported via `from_single_file` method in the StableCascadeUNet.

```python
import torch
from diffusers import (
    StableCascadeDecoderPipeline,
    StableCascadePriorPipeline,
    StableCascadeUNet,
)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior_unet = StableCascadeUNet.from_single_file(
    "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_bf16.safetensors",
    torch_dtype=torch.bfloat16
)
decoder_unet = StableCascadeUNet.from_single_file(
    "https://huggingface.co/stabilityai/stable-cascade/blob/main/stage_b_bf16.safetensors",
    torch_dtype=torch.bfloat16
)

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", prior=prior_unet, torch_dtype=torch.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", decoder=decoder_unet, torch_dtype=torch.bfloat16)

prior.enable_model_cpu_offload()
prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

decoder.enable_model_cpu_offload()
decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
).images[0]
decoder_output.save("cascade-single-file.png")
```

## Uses

### Direct Use

The model is intended for research purposes for now. Possible research areas and tasks include

- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.

Excluded uses are described below.

### Out-of-Scope Use

The model was not trained to be factual or true representations of people or events,
and therefore using the model to generate such content is out-of-scope for the abilities of this model.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).

## Limitations and Bias

### Limitations
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.


## StableCascadeCombinedPipeline[[diffusers.StableCascadeCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableCascadeCombinedPipeline</name><anchor>diffusers.StableCascadeCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_combined.py#L45</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "decoder", "val": ": StableCascadeUNet"}, {"name": "scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "vqgan", "val": ": PaellaVQModel"}, {"name": "prior_prior", "val": ": StableCascadeUNet"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "prior_feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "prior_image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}]</parameters><paramsdesc>- **tokenizer** (`CLIPTokenizer`) --
  The decoder tokenizer to be used for text inputs.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  The decoder text encoder to be used for text inputs.
- **decoder** (`StableCascadeUNet`) --
  The decoder model to be used for decoder image generation pipeline.
- **scheduler** (`DDPMWuerstchenScheduler`) --
  The scheduler to be used for decoder image generation pipeline.
- **vqgan** (`PaellaVQModel`) --
  The VQGAN model to be used for decoder image generation pipeline.
- **prior_prior** (`StableCascadeUNet`) --
  The prior model to be used for prior pipeline.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  The prior text encoder to be used for text inputs.
- **prior_tokenizer** (`CLIPTokenizer`) --
  The prior tokenizer to be used for text inputs.
- **prior_scheduler** (`DDPMWuerstchenScheduler`) --
  The scheduler to be used for prior pipeline.
- **prior_feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  Model that extracts features from generated images to be used as inputs for the `image_encoder`.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for text-to-image generation using Stable Cascade.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableCascadeCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_combined.py#L156</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "images", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_num_inference_steps", "val": ": int = 60"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "num_inference_steps", "val": ": int = 12"}, {"name": "decoder_guidance_scale", "val": ": float = 0.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_pooled", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_pooled", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "prior_callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "prior_callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation for the prior and decoder.
- **images** (`torch.Tensor`, `PIL.Image.Image`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, *optional*) --
  The images to guide the image generation for the prior.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, text embeddings will be generated from `prompt` input argument.
- **prompt_embeds_pooled** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
  prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **negative_prompt_embeds_pooled** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
  prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `prior_guidance_scale` is defined as `w` of
  equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by
  setting `prior_guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are
  closely linked to the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`Union[int, Dict[float, int]]`, *optional*, defaults to 60) --
  The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. For more specific timestep spacing, you can pass customized
  `prior_timesteps`
- **num_inference_steps** (`int`, *optional*, defaults to 12) --
  The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at
  the expense of slower inference. For more specific timestep spacing, you can pass customized
  `timesteps`
- **decoder_guidance_scale** (`float`, *optional*, defaults to 0.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **prior_callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep:
  int, callback_kwargs: Dict)`.
- **prior_callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the
  list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in
  the `._callback_tensor_inputs` attribute of your pipeline class.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><retdesc>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple` [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) if `return_dict` is True,
otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableCascadeCombinedPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableCascadeCombinedPipeline

>>> pipe = StableCascadeCombinedPipeline.from_pretrained(
...     "stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.bfloat16
... )
>>> pipe.enable_model_cpu_offload()
>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> images = pipe(prompt=prompt)
```

</ExampleCodeBlock>





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.StableCascadeCombinedPipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_combined.py#L128</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
`enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.StableCascadeCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_combined.py#L138</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models (`unet`, `text_encoder`, `vae`, and `safety checker` state dicts) to CPU using 🤗
Accelerate, significantly reducing memory usage. Models are moved to a `torch.device('meta')` and loaded on a
GPU only when their specific submodule's `forward` method is called. Offloading happens on a submodule basis.
Memory savings are higher than using `enable_model_cpu_offload`, but performance is lower.
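
As a minimal sketch (assuming the combined pipeline checkpoint used elsewhere on this page):

```py
>>> import torch
>>> from diffusers import StableCascadeCombinedPipeline

>>> pipe = StableCascadeCombinedPipeline.from_pretrained(
...     "stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.bfloat16
... )
>>> pipe.enable_sequential_cpu_offload()  # maximum memory savings, slower inference
>>> image = pipe(prompt="an image of a shiba inu, donning a spacesuit and helmet").images[0]
```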


</div></div>

## StableCascadePriorPipeline[[diffusers.StableCascadePriorPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableCascadePriorPipeline</name><anchor>diffusers.StableCascadePriorPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py#L80</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior", "val": ": StableCascadeUNet"}, {"name": "scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "resolution_multiple", "val": ": float = 42.67"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}]</parameters><paramsdesc>- **prior** (`StableCascadeUNet`) --
  The Stable Cascade prior to approximate the image embedding from the text and/or image embedding.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder
  ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  Model that extracts features from generated images to be used as inputs for the `image_encoder`.
- **image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **scheduler** (`DDPMWuerstchenScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **resolution_multiple** (`float`, *optional*, defaults to 42.67) --
  Default resolution for multiple images generated.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating image prior for Stable Cascade.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableCascadePriorPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py#L373</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "images", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]] = None"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_pooled", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_pooled", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pt'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_embeds_pooled** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_prompt_embeds_pooled** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds_pooled will be generated from `negative_prompt`
  input argument.
- **image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings. Can be used to easily tweak image inputs, *e.g.* prompt weighting. If
  not provided, image embeddings will be generated from `image` input argument if existing.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><retdesc>`StableCascadePriorPipelineOutput` or `tuple` `StableCascadePriorPipelineOutput` if `return_dict` is
True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated image
embeddings.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableCascadePriorPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableCascadePriorPipeline

>>> prior_pipe = StableCascadePriorPipeline.from_pretrained(
...     "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
... ).to("cuda")

>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt)
```

</ExampleCodeBlock>





</div></div>

## StableCascadePriorPipelineOutput[[diffusers.pipelines.stable_cascade.pipeline_stable_cascade_prior.StableCascadePriorPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_cascade.pipeline_stable_cascade_prior.StableCascadePriorPipelineOutput</name><anchor>diffusers.pipelines.stable_cascade.pipeline_stable_cascade_prior.StableCascadePriorPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py#L60</source><parameters>[{"name": "image_embeddings", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "prompt_embeds", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "prompt_embeds_pooled", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "negative_prompt_embeds", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "negative_prompt_embeds_pooled", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}]</parameters><paramsdesc>- **image_embeddings** (`torch.Tensor` or `np.ndarray`) --
  Prior image embeddings for text prompt
- **prompt_embeds** (`torch.Tensor`) --
  Text embeddings for the prompt.
- **negative_prompt_embeds** (`torch.Tensor`) --
  Text embeddings for the negative prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for StableCascadePriorPipeline.




</div>

## StableCascadeDecoderPipeline[[diffusers.StableCascadeDecoderPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableCascadeDecoderPipeline</name><anchor>diffusers.StableCascadeDecoderPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py#L58</source><parameters>[{"name": "decoder", "val": ": StableCascadeUNet"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "vqgan", "val": ": PaellaVQModel"}, {"name": "latent_dim_scale", "val": ": float = 10.67"}]</parameters><paramsdesc>- **tokenizer** (`CLIPTokenizer`) --
  The CLIP tokenizer.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  The CLIP text encoder.
- **decoder** (`StableCascadeUNet`) --
  The Stable Cascade decoder unet.
- **vqgan** (`PaellaVQModel`) --
  The VQGAN model.
- **scheduler** (`DDPMWuerstchenScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **latent_dim_scale** (`float`, *optional*, defaults to 10.67) --
  Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are
  height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and
  width=int(24*10.67)=256 in order to match the training conditions.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating images from the Stable Cascade model.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableCascadeDecoderPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_cascade/pipeline_stable_cascade.py#L302</source><parameters>[{"name": "image_embeddings", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 10"}, {"name": "guidance_scale", "val": ": float = 0.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_pooled", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_pooled", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **image_embedding** (`torch.Tensor` or `List[torch.Tensor]`) --
  Image Embeddings either extracted from an image or generated by a Prior Model.
- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **num_inference_steps** (`int`, *optional*, defaults to 10) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 0.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_embeds_pooled** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_prompt_embeds_pooled** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds_pooled will be generated from `negative_prompt`
  input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><retdesc>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple` [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) if `return_dict` is True,
otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableCascadeDecoderPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

>>> prior_pipe = StableCascadePriorPipeline.from_pretrained(
...     "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
... ).to("cuda")
>>> gen_pipe = StableCascadeDecoderPipeline.from_pretrained(
...     "stabilityai/stable-cascade", torch_dtype=torch.float16
... ).to("cuda")

>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt)
>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt)
```

</ExampleCodeBlock>





</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_cascade.md" />

### Easyanimate
https://huggingface.co/docs/diffusers/main/api/pipelines/easyanimate.md


# EasyAnimate
[EasyAnimate](https://github.com/aigc-apps/EasyAnimate) by Alibaba PAI.

The description from its GitHub page:
*EasyAnimate is a pipeline based on the transformer architecture, designed for generating AI images and videos, and for training baseline models and Lora models for Diffusion Transformer. We support direct prediction from pre-trained EasyAnimate models, allowing for the generation of videos with various resolutions, approximately 6 seconds in length, at 8fps (EasyAnimateV5.1, 1 to 49 frames). Additionally, users can train their own baseline and Lora models for specific style transformations.*

This pipeline was contributed by [bubbliiiing](https://github.com/bubbliiiing). The original codebase can be found [here](https://huggingface.co/alibaba-pai). The original weights can be found under [hf.co/alibaba-pai](https://huggingface.co/alibaba-pai).

There are two official EasyAnimate checkpoints for text-to-video and video-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`alibaba-pai/EasyAnimateV5.1-12b-zh`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh) | torch.float16 |
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-InP`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP) | torch.float16 |

There is one official EasyAnimate checkpoint available for image-to-video and video-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-InP`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP) | torch.float16 |

There are two official EasyAnimate checkpoints available for control-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-Control`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-Control) | torch.float16 |
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-Control-Camera`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-Control-Camera) | torch.float16 |

For the EasyAnimateV5.1 series:
- Text-to-video (T2V) and Image-to-video (I2V) work at multiple resolutions. The width and height can vary from 256 to 1024.
- Both T2V and I2V models support generation with 1-49 frames and work best within this range. Exporting videos at 8 FPS is recommended (see the sketch below).
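
A minimal sketch under those recommendations (checkpoint and dtype taken from the tables above; the other arguments are illustrative):

```py
import torch
from diffusers import EasyAnimatePipeline
from diffusers.utils import export_to_video

pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", torch_dtype=torch.float16
).to("cuda")

video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    num_frames=49,  # 1-49 frames are supported
    height=512,     # height and width can vary from 256 to 1024
    width=512,
).frames[0]
export_to_video(video, "cat.mp4", fps=8)  # export at the recommended 8 FPS
```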

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [EasyAnimatePipeline](/docs/diffusers/main/en/api/pipelines/easyanimate#diffusers.EasyAnimatePipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, EasyAnimateTransformer3DModel, EasyAnimatePipeline
from diffusers.utils import export_to_video

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = EasyAnimateTransformer3DModel.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh",
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "A cat walks on the grass, realistic style."
negative_prompt = "bad detailed"
video = pipeline(prompt=prompt, negative_prompt=negative_prompt, num_frames=49, num_inference_steps=30).frames[0]
export_to_video(video, "cat.mp4", fps=8)
```

## EasyAnimatePipeline[[diffusers.EasyAnimatePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.EasyAnimatePipeline</name><anchor>diffusers.EasyAnimatePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/easyanimate/pipeline_easyanimate.py#L186</source><parameters>[{"name": "vae", "val": ": AutoencoderKLMagvit"}, {"name": "text_encoder", "val": ": typing.Union[transformers.models.qwen2_vl.modeling_qwen2_vl.Qwen2VLForConditionalGeneration, transformers.models.bert.modeling_bert.BertModel]"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.qwen2.tokenization_qwen2.Qwen2Tokenizer, transformers.models.bert.tokenization_bert.BertTokenizer]"}, {"name": "transformer", "val": ": EasyAnimateTransformer3DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}]</parameters><paramsdesc>- **vae** ([AutoencoderKLMagvit](/docs/diffusers/main/en/api/models/autoencoderkl_magvit#diffusers.AutoencoderKLMagvit)) --
  Variational Auto-Encoder (VAE) Model to encode and decode video to and from latent representations.
- **text_encoder** (Optional[`~transformers.Qwen2VLForConditionalGeneration`, `~transformers.BertModel`]) --
  EasyAnimate uses [qwen2 vl](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) in V5.1.
- **tokenizer** (Optional[`~transformers.Qwen2Tokenizer`, `~transformers.BertTokenizer`]) --
  A `Qwen2Tokenizer` or `BertTokenizer` to tokenize text.
- **transformer** ([EasyAnimateTransformer3DModel](/docs/diffusers/main/en/api/models/easyanimate_transformer3d#diffusers.EasyAnimateTransformer3DModel)) --
  The EasyAnimate model designed by EasyAnimate Team.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with EasyAnimate to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using EasyAnimate.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

EasyAnimate uses one text encoder [qwen2 vl](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) in V5.1.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.EasyAnimatePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/easyanimate/pipeline_easyanimate.py#L524</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = 49"}, {"name": "height", "val": ": typing.Optional[int] = 512"}, {"name": "width", "val": ": typing.Optional[int] = 512"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "guidance_rescale", "val": ": float = 0.0"}]</parameters><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

Generates images or video using the EasyAnimate pipeline based on the provided prompts.


<ExampleCodeBlock anchor="diffusers.EasyAnimatePipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import EasyAnimatePipeline
>>> from diffusers.utils import export_to_video

>>> # Models: "alibaba-pai/EasyAnimateV5.1-12b-zh"
>>> pipe = EasyAnimatePipeline.from_pretrained(
...     "alibaba-pai/EasyAnimateV5.1-7b-zh-diffusers", torch_dtype=torch.float16
... ).to("cuda")
>>> prompt = (
...     "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
...     "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
...     "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
...     "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
...     "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
...     "atmosphere of this unique musical performance."
... )
>>> sample_size = (512, 512)
>>> video = pipe(
...     prompt=prompt,
...     guidance_scale=6,
...     negative_prompt="bad detailed",
...     height=sample_size[0],
...     width=sample_size[1],
...     num_inference_steps=50,
... ).frames[0]
>>> export_to_video(video, "output.mp4", fps=8)
```

</ExampleCodeBlock>

- **prompt** (`str` or `List[str]`, *optional*) --
  Text prompts to guide the image or video generation. If not provided, pass `prompt_embeds` instead.
- **num_frames** (`int`, *optional*) --
  Length of the generated video (in frames).
- **height** (`int`, *optional*) --
  Height of the generated image in pixels.
- **width** (`int`, *optional*) --
  Width of the generated image in pixels.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  Number of denoising steps during generation. More steps generally yield higher quality images but slow
  down inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Encourages the model to align outputs with the prompt. A higher value may decrease image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  Prompts indicating what to exclude from generation. If not specified, pass `negative_prompt_embeds` instead.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of images to generate for each prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Only applies to DDIM scheduling; corresponds to the eta parameter from the DDIM paper.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A generator to make generation deterministic and reproducible.
- **latents** (`torch.Tensor`, *optional*) --
  Predefined latent tensors to condition generation.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings for the prompts. Overrides the `prompt` string input for more flexibility.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated embeddings for the negative prompts. Overrides the `negative_prompt` string input if defined.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the primary prompt embeddings.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  Format of the generated output, either a PIL image or a NumPy array.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  If `True`, returns a structured output. Otherwise returns a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step (see the sketch after this list).
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  Tensor names to be included in the callback function calls.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Adjusts noise levels based on the guidance scale.
- **original_size** (`Tuple[int, int]`, *optional*, defaults to `(1024, 1024)`) --
  Original dimensions of the output.
- **target_size** (`Tuple[int, int]`, *optional*) --
  Desired output dimensions for calculations.
- **crops_coords_top_left** (`Tuple[int, int]`, *optional*, defaults to `(0, 0)`) --
  Coordinates for cropping.
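
The step-end callback above follows the standard Diffusers callback convention: it receives the pipeline, the step index, the timestep, and a dict containing the tensors named in `callback_on_step_end_tensor_inputs`, and it must return that dict. A minimal sketch (the checkpoint is the one from the example above; the logging logic is illustrative only):

```python
import torch
from diffusers import EasyAnimatePipeline

pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-7b-zh-diffusers", torch_dtype=torch.float16
).to("cuda")


def log_latents(pipeline, step, timestep, callback_kwargs):
    # "latents" is available because it is listed in callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latent std = {latents.std().item():.4f}")
    return callback_kwargs  # the returned dict is fed back into the denoising loop


video = pipe(
    prompt="A panda strumming a tiny guitar in a bamboo forest",
    num_inference_steps=50,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).frames[0]
```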






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.EasyAnimatePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/easyanimate/pipeline_easyanimate.py#L241</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **dtype** (`torch.dtype`) --
  torch dtype
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **max_sequence_length** (`int`, *optional*) -- maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
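
As a rough sketch of how these outputs can be reused (the four-value return order below is an assumption based on the parameter list above and should be checked against the linked source), the embeddings and attention masks can be computed once and fed back into the pipeline call through `prompt_embeds`, `negative_prompt_embeds`, and the matching attention masks:

```python
import torch
from diffusers import EasyAnimatePipeline

pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-7b-zh-diffusers", torch_dtype=torch.float16
).to("cuda")

# Return order assumed: positive embeddings, negative embeddings, then their attention masks.
(
    prompt_embeds,
    negative_prompt_embeds,
    prompt_attention_mask,
    negative_prompt_attention_mask,
) = pipe.encode_prompt(
    prompt="A panda strumming a tiny guitar in a bamboo forest",
    negative_prompt="bad detailed",
    do_classifier_free_guidance=True,
    device=pipe.device,
)

# Reuse the cached embeddings for a generation without re-encoding the text.
video = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_inference_steps=50,
).frames[0]
```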




</div></div>

## EasyAnimatePipelineOutput[[diffusers.pipelines.easyanimate.pipeline_output.EasyAnimatePipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.easyanimate.pipeline_output.EasyAnimatePipelineOutput</name><anchor>diffusers.pipelines.easyanimate.pipeline_output.EasyAnimatePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/easyanimate/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for EasyAnimate pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/easyanimate.md" />

### Kandinsky 2.2
https://huggingface.co/docs/diffusers/main/api/pipelines/kandinsky_v22.md

# Kandinsky 2.2

Kandinsky 2.2 is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), and [Denis Dimitrov](https://github.com/denndimitrov).

The description from its GitHub page is:

*Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model's capability to generate more aesthetic pictures and better understand text, thus enhancing the model's overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation.*

The original codebase can be found at [ai-forever/Kandinsky-2](https://github.com/ai-forever/Kandinsky-2).

> [!TIP]
> Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.

> [!TIP]
> Make sure to check out the schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
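
For instance, the text-to-image and image-to-image decoder pipelines documented below share the same `unet`, `scheduler`, and `movq`, so already-loaded components can be reused instead of downloaded twice. A minimal sketch of this pattern (component reuse via the `components` property, not specific to Kandinsky):

```py
import torch
from diffusers import KandinskyV22Pipeline, KandinskyV22Img2ImgPipeline

# Load the text-to-image decoder once...
pipe_t2i = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

# ...and build the image-to-image decoder from the same in-memory components.
pipe_i2i = KandinskyV22Img2ImgPipeline(**pipe_t2i.components)
```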

## KandinskyV22PriorPipeline[[diffusers.KandinskyV22PriorPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22PriorPipeline</name><anchor>diffusers.KandinskyV22PriorPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py#L89</source><parameters>[{"name": "prior", "val": ": PriorTransformer"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "scheduler", "val": ": UnCLIPScheduler"}, {"name": "image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **image_processor** (`CLIPImageProcessor`) --
  An image processor to preprocess images for the CLIP image encoder.

Pipeline for generating image prior for Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22PriorPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py#L375</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pt'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **output_type** (`str`, *optional*, defaults to `"pt"`) --
  The output format of the generated image. Choose between: `"np"` (`np.array`) or `"pt"`
  (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`KandinskyPriorPipelineOutput` or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22PriorPipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline
>>> import torch

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior")
>>> pipe_prior.to("cuda")
>>> prompt = "red cat, 4k photo"
>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple()

>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder")
>>> pipe.to("cuda")
>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=negative_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images
>>> image[0].save("cat.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>interpolate</name><anchor>diffusers.KandinskyV22PriorPipeline.interpolate</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py#L136</source><parameters>[{"name": "images_and_prompts", "val": ": typing.List[typing.Union[str, PIL.Image.Image, torch.Tensor]]"}, {"name": "weights", "val": ": typing.List[float]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prior_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "device", "val": " = None"}]</parameters><paramsdesc>- **images_and_prompts** (`List[Union[str, PIL.Image.Image, torch.Tensor]]`) --
  list of prompts and images to guide the image generation.
- **weights** (`List[float]`) --
  list of weights for each condition in `images_and_prompts`
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **negative_prior_prompt** (`str`, *optional*) --
  The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if
  `guidance_scale` is less than `1`).
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if
  `guidance_scale` is less than `1`).
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.</paramsdesc><paramgroups>0</paramgroups><rettype>`KandinskyPriorPipelineOutput` or `tuple`</rettype></docstring>

Function invoked when using the prior pipeline for interpolation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22PriorPipeline.interpolate.example">

Examples:
```py
>>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
>>> from diffusers.utils import load_image
>>> import PIL
>>> import torch
>>> from torchvision import transforms

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")
>>> img1 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )
>>> img2 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/starry_night.jpeg"
... )
>>> images_texts = ["a cat", img1, img2]
>>> weights = [0.3, 0.3, 0.4]
>>> out = pipe_prior.interpolate(images_texts, weights)
>>> pipe = KandinskyV22Pipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> image = pipe(
...     image_embeds=out.image_embeds,
...     negative_image_embeds=out.negative_image_embeds,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images[0]
>>> image.save("starry_cat.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyV22Pipeline[[diffusers.KandinskyV22Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22Pipeline</name><anchor>diffusers.KandinskyV22Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py#L72</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters><paramsdesc>- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py#L130</source><parameters>[{"name": "image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "negative_image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22Pipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline
>>> import torch

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior")
>>> pipe_prior.to("cuda")
>>> prompt = "red cat, 4k photo"
>>> out = pipe_prior(prompt)
>>> image_emb = out.image_embeds
>>> zero_image_emb = out.negative_image_embeds
>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder")
>>> pipe.to("cuda")
>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images
>>> image[0].save("cat.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyV22CombinedPipeline[[diffusers.KandinskyV22CombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22CombinedPipeline</name><anchor>diffusers.KandinskyV22CombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L107</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}, {"name": "prior_prior", "val": ": PriorTransformer"}, {"name": "prior_image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "prior_image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.
- **prior_prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **prior_tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **prior_scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **prior_image_processor** (`CLIPImageProcessor`) --
  An image processor to preprocess images for the CLIP image encoder.

Combined Pipeline for text-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22CombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L202</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "prior_callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "prior_callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **prior_callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference of the prior pipeline.
  The function is called with the following arguments: `prior_callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`.
- **prior_callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the
  list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in
  the `._callback_tensor_inputs` attribute of your prior pipeline class.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference of the decoder pipeline.
  The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline,
  step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors
  as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22CombinedPipeline.__call__.example">

Examples:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"

image = pipe(prompt=prompt, num_inference_steps=25).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.KandinskyV22CombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L182</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
text_encoder, vae, and safety checker have their state dicts saved to CPU and are then moved to
`torch.device('meta')`, loaded onto the GPU only when their specific submodule's `forward` method is called.
Note that offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.
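
A minimal sketch of enabling this offloading (reusing the `AutoPipelineForText2Image` loading path shown in the example above):

```py
import torch
from diffusers import AutoPipelineForText2Image

# Resolves to KandinskyV22CombinedPipeline for this checkpoint.
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
# Move every submodule to the CPU; each is loaded onto the GPU only for its forward pass.
pipe.enable_sequential_cpu_offload()

image = pipe(prompt="red cat, 4k photo", num_inference_steps=25).images[0]
```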


</div></div>

## KandinskyV22ControlnetPipeline[[diffusers.KandinskyV22ControlnetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22ControlnetPipeline</name><anchor>diffusers.KandinskyV22ControlnetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py#L115</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters><paramsdesc>- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22ControlnetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py#L160</source><parameters>[{"name": "image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "negative_image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "hint", "val": ": Tensor"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **hint** (`torch.Tensor`) --
  The controlnet condition.
- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



Examples:
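
The library docstring does not ship an example here. The following is a hedged sketch: the `kandinsky-community/kandinsky-2-2-controlnet-depth` checkpoint and the placeholder depth map stand in for a real ControlNet checkpoint and a depth estimator, and `hint` is assumed to be a `(batch, 3, height, width)` tensor scaled to `[0, 1]`:

```py
import numpy as np
import torch
from diffusers import KandinskyV22ControlnetPipeline, KandinskyV22PriorPipeline

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

# Text prompt -> image embeddings via the prior.
image_emb, zero_image_emb = pipe_prior("a robot cat, 4k photo").to_tuple()

# Placeholder depth map; in practice this would come from a depth estimator.
depth = np.zeros((768, 768, 3), dtype=np.float32)
hint = torch.from_numpy(depth).permute(2, 0, 1).unsqueeze(0).to("cuda", torch.float16)

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("robot_cat_depth.png")
```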






</div></div>

## KandinskyV22PriorEmb2EmbPipeline[[diffusers.KandinskyV22PriorEmb2EmbPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22PriorEmb2EmbPipeline</name><anchor>diffusers.KandinskyV22PriorEmb2EmbPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py#L107</source><parameters>[{"name": "prior", "val": ": PriorTransformer"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "scheduler", "val": ": UnCLIPScheduler"}, {"name": "image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating image prior for Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22PriorEmb2EmbPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py#L401</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor], PIL.Image.Image, typing.List[PIL.Image.Image]]"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pt'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `emb`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added.
- **emb** (`torch.Tensor`) --
  The image embedding.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **output_type** (`str`, *optional*, defaults to `"pt"`) --
  The output format of the generated image. Choose between: `"np"` (`np.array`) or `"pt"`
  (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`KandinskyPriorPipelineOutput` or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22PriorEmb2EmbPipeline.__call__.example">

Examples:
```py
>>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline
>>> from diffusers.utils import load_image
>>> import torch

>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> prompt = "red cat, 4k photo"
>>> img = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )
>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple()

>>> pipe = KandinskyV22Pipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=negative_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=100,
... ).images

>>> image[0].save("cat.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>interpolate</name><anchor>diffusers.KandinskyV22PriorEmb2EmbPipeline.interpolate</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py#L160</source><parameters>[{"name": "images_and_prompts", "val": ": typing.List[typing.Union[str, PIL.Image.Image, torch.Tensor]]"}, {"name": "weights", "val": ": typing.List[float]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prior_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "device", "val": " = None"}]</parameters><paramsdesc>- **images_and_prompts** (`List[Union[str, PIL.Image.Image, torch.Tensor]]`) --
  list of prompts and images to guide the image generation.
- **weights** (`List[float]`) --
  list of weights for each condition in `images_and_prompts`
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **negative_prior_prompt** (`str`, *optional*) --
  The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if
  `guidance_scale` is less than `1`).
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if
  `guidance_scale` is less than `1`).
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.</paramsdesc><paramgroups>0</paramgroups><rettype>`KandinskyPriorPipelineOutput` or `tuple`</rettype></docstring>

Function invoked when using the prior pipeline for interpolation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22PriorEmb2EmbPipeline.interpolate.example">

Examples:
```py
>>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline
>>> from diffusers.utils import load_image
>>> import PIL

>>> import torch
>>> from torchvision import transforms

>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> img1 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )

>>> img2 = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/starry_night.jpeg"
... )

>>> images_texts = ["a cat", img1, img2]
>>> weights = [0.3, 0.3, 0.4]
>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights)

>>> pipe = KandinskyV22Pipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=150,
... ).images[0]

>>> image.save("starry_cat.png")
```

</ExampleCodeBlock>







</div></div>

## KandinskyV22Img2ImgPipeline[[diffusers.KandinskyV22Img2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22Img2ImgPipeline</name><anchor>diffusers.KandinskyV22Img2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py#L78</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters><paramsdesc>- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22Img2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py#L183</source><parameters>[{"name": "image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "negative_image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process. Can also accept image latents as `image`; if passing latents directly, they will not be encoded
  again.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



Examples:
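
A minimal sketch of a typical call is shown below: a prior pipeline produces the image embeddings that condition the decoder, while `image` and `strength` control how much of the input layout is preserved. The checkpoint names and the example image URL are illustrative assumptions rather than fixed requirements.

```py
>>> import torch
>>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Img2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> prompt = "A red cartoon frog, 4k"
>>> image_emb, zero_image_emb = pipe_prior(prompt, guidance_scale=1.0, return_dict=False)

>>> pipe = KandinskyV22Img2ImgPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> init_image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/frog.png"
... )

>>> # A low strength keeps most of the original image; raise it to deviate further.
>>> image = pipe(
...     image=init_image,
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=100,
...     strength=0.2,
... ).images[0]

>>> image.save("red_frog.png")
```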






</div></div>

## KandinskyV22Img2ImgCombinedPipeline[[diffusers.KandinskyV22Img2ImgCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22Img2ImgCombinedPipeline</name><anchor>diffusers.KandinskyV22Img2ImgCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L335</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}, {"name": "prior_prior", "val": ": PriorTransformer"}, {"name": "prior_image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "prior_image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.
- **prior_prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **prior_tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **prior_scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **prior_image_processor** (`CLIPImageProcessor`) --
  An image processor to be used to preprocess images from CLIP.</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for image-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22Img2ImgCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L440</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "prior_callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "prior_callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process. Can also accept image latents as `image`; if passing latents directly, they will not be encoded
  again.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22Img2ImgCombinedPipeline.__call__.example">

Examples:
```py
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from io import BytesIO
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25
).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.KandinskyV22Img2ImgCombinedPipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L410</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
`enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.KandinskyV22Img2ImgCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L420</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
`torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.


</div></div>

## KandinskyV22ControlnetImg2ImgPipeline[[diffusers.KandinskyV22ControlnetImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22ControlnetImg2ImgPipeline</name><anchor>diffusers.KandinskyV22ControlnetImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py#L107</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters><paramsdesc>- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22ControlnetImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py#L200</source><parameters>[{"name": "image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "negative_image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "hint", "val": ": Tensor"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process. Can also accept image latents as `image`; if passing latents directly, they will not be encoded
  again.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- **hint** (`torch.Tensor`) --
  The controlnet condition.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



Examples:
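
A minimal sketch of a hint-conditioned call is shown below. It assumes a depth ControlNet decoder checkpoint and a `transformers` depth-estimation pipeline to build the `hint` tensor; any `(batch, 3, height, width)` condition tensor prepared the same way should work. The checkpoint names and image URL are illustrative.

```py
>>> import torch
>>> import numpy as np
>>> from transformers import pipeline
>>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> img = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... ).resize((768, 768))

>>> # Build the controlnet hint from a monocular depth estimate (shape: 1 x 3 x H x W).
>>> depth_estimator = pipeline("depth-estimation")
>>> depth = np.array(depth_estimator(img)["depth"])[:, :, None]
>>> depth = np.concatenate([depth, depth, depth], axis=2)
>>> hint = torch.from_numpy(depth).float().permute(2, 0, 1) / 255.0
>>> hint = hint.unsqueeze(0).half().to("cuda")

>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... ).to("cuda")
>>> pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
... ).to("cuda")

>>> prompt = "A robot, 4k photo"
>>> negative_prompt = "low quality, bad quality"

>>> img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, guidance_scale=4.25)
>>> negative_emb = pipe_prior(prompt=negative_prompt, image=img, strength=1.0, guidance_scale=1.0)

>>> image = pipe(
...     image=img,
...     image_embeds=img_emb.image_embeds,
...     negative_image_embeds=negative_emb.image_embeds,
...     hint=hint,
...     strength=0.5,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images[0]

>>> image.save("robot_cat.png")
```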






</div></div>

## KandinskyV22InpaintPipeline[[diffusers.KandinskyV22InpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22InpaintPipeline</name><anchor>diffusers.KandinskyV22InpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py#L243</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters><paramsdesc>- **scheduler** ([DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image inpainting using Kandinsky 2.2

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22InpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py#L302</source><parameters>[{"name": "image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image]"}, {"name": "mask_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, numpy.ndarray]"}, {"name": "negative_image_embeds", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
- **image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
  be masked out with `mask_image` and repainted according to `prompt`.
- **mask_image** (`np.array`) --
  Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while
  black pixels will be preserved. If `mask_image` is a PIL image, it will be converted to a single
  channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3,
  so the expected shape would be `(B, H, W, 1)`.
- **negative_image_embeds** (`torch.Tensor` or `List[torch.Tensor]`) --
  The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



Examples:
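
A minimal sketch of an inpainting call is shown below: the mask follows the convention described above (white/1 pixels are repainted, black/0 pixels are preserved) and the image embeddings come from a prior pipeline. The checkpoint names and the image URL are illustrative.

```py
>>> import torch
>>> import numpy as np
>>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22InpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> prompt = "a hat"
>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False)

>>> pipe = KandinskyV22InpaintPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> init_image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )

>>> mask = np.zeros((768, 768), dtype=np.float32)
>>> mask[:250, 250:-250] = 1  # repaint the area above the cat's head

>>> image = pipe(
...     image=init_image,
...     mask_image=mask,
...     image_embeds=image_emb,
...     negative_image_embeds=zero_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=50,
... ).images[0]

>>> image.save("cat_with_hat.png")
```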






</div></div>

## KandinskyV22InpaintCombinedPipeline[[diffusers.KandinskyV22InpaintCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KandinskyV22InpaintCombinedPipeline</name><anchor>diffusers.KandinskyV22InpaintCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L584</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}, {"name": "prior_prior", "val": ": PriorTransformer"}, {"name": "prior_image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_scheduler", "val": ": UnCLIPScheduler"}, {"name": "prior_image_processor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **scheduler** (Union[`DDIMScheduler`,`DDPMScheduler`]) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **movq** ([VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel)) --
  MoVQ Decoder to generate the image from the latents.
- **prior_prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_image_encoder** (`CLIPVisionModelWithProjection`) --
  Frozen image-encoder.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **prior_tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **prior_scheduler** (`UnCLIPScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **prior_image_processor** (`CLIPImageProcessor`) --
  An image processor to be used to preprocess images from CLIP.</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for inpainting generation using Kandinsky

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KandinskyV22InpaintCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L679</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "mask_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "prior_callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "prior_callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process. Can also accept image latents as `image`; if passing latents directly, they will not be encoded
  again.
- **mask_image** (`np.array`) --
  Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while
  black pixels will be preserved. If `mask_image` is a PIL image, it will be converted to a single
  channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3,
  so the expected shape would be `(B, H, W, 1)`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **prior_callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep:
  int, callback_kwargs: Dict)`.
- **prior_callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the
  list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in
  the `._callback_tensor_inputs` attribute of your pipeline class.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KandinskyV22InpaintCombinedPipeline.__call__.example">

Examples:
```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

original_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)

mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25
).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.KandinskyV22InpaintCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py#L659</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
`torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/kandinsky_v22.md" />

### Perturbed-Attention Guidance
https://huggingface.co/docs/diffusers/main/api/pipelines/pag.md

# Perturbed-Attention Guidance

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[Perturbed-Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) is a new diffusion sampling guidance that improves sample quality across both unconditional and conditional settings, achieving this without requiring further training or the integration of external modules.

PAG was introduced in [Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance](https://huggingface.co/papers/2403.17377) by Donghoon Ahn, Hyoungwon Cho, Jaewon Min, Wooseok Jang, Jungwoo Kim, SeonHwa Kim, Hyun Hee Park, Kyong Hwan Jin and Seungryong Kim.

The abstract from the paper is:

*Recent studies have demonstrated that diffusion models are capable of generating high-quality samples, but their quality heavily depends on sampling guidance techniques, such as classifier guidance (CG) and classifier-free guidance (CFG). These techniques are often not applicable in unconditional generation or in various downstream tasks such as image restoration. In this paper, we propose a novel sampling guidance, called Perturbed-Attention Guidance (PAG), which improves diffusion sample quality across both unconditional and conditional settings, achieving this without requiring additional training or the integration of external modules. PAG is designed to progressively enhance the structure of samples throughout the denoising process. It involves generating intermediate samples with degraded structure by substituting selected self-attention maps in diffusion U-Net with an identity matrix, by considering the self-attention mechanisms' ability to capture structural information, and guiding the denoising process away from these degraded samples. In both ADM and Stable Diffusion, PAG surprisingly improves sample quality in conditional and even unconditional scenarios. Moreover, PAG significantly improves the baseline performance in various downstream tasks where existing guidances such as CG or CFG cannot be fully utilized, including ControlNet with empty prompts and image restoration such as inpainting and deblurring.*

PAG can be used by specifying `pag_applied_layers` as a parameter when instantiating a PAG pipeline (see the example below). It can be a single string or a list of strings. Each string can be a unique layer identifier or a regular expression to identify one or more layers.

- Full identifier as a normal string: `down_blocks.2.attentions.0.transformer_blocks.0.attn1.processor`
- Full identifier as a RegEx: `down_blocks.2.(attentions|motion_modules).0.transformer_blocks.0.attn1.processor`
- Partial identifier as a RegEx: `down_blocks.2`, or `attn1`
- List of identifiers (can be a combination of strings and RegEx): `["blocks.1", "blocks.(14|20)", r"down_blocks\.(2|3)"]`

> [!WARNING]
> Since RegEx is supported as a way of matching layer identifiers, it is crucial to use it correctly; otherwise, there might be unexpected behaviour. The recommended way to use PAG is by specifying layers as `blocks.{layer_index}` and `blocks.({layer_index_1|layer_index_2|...})`. Using it in any other way, while doable, may bypass our basic validation checks and give you unexpected results.
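
As a concrete illustration, the sketch below enables PAG through `AutoPipelineForText2Image` by passing `enable_pag=True` together with `pag_applied_layers`; `pag_scale` is then set at call time. The Stable Diffusion XL checkpoint, the prompt, and the `"mid"` layer choice are illustrative assumptions.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],  # partial identifier; "blocks.{layer_index}" style also works
    torch_dtype=torch.float16,
).to("cuda")

image = pipeline(
    prompt="an insect robot preparing a delicious meal, anime style",
    num_inference_steps=25,
    guidance_scale=7.0,
    pag_scale=3.0,
    generator=torch.Generator(device="cuda").manual_seed(0),
).images[0]
image.save("pag_example.png")
```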

## AnimateDiffPAGPipeline[[diffusers.AnimateDiffPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffPAGPipeline</name><anchor>diffusers.AnimateDiffPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_animatediff.py#L89</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid_block.*attn1'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetMotionModel to denoise the encoded video latents.
- **motion_adapter** (`MotionAdapter`) --
  A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using
[AnimateDiff](https://huggingface.co/docs/diffusers/en/api/pipelines/animatediff) and [Perturbed Attention
Guidance](https://huggingface.co/docs/diffusers/en/using-diffusers/pag).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_animatediff.py#L575</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = 16"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "decode_chunk_size", "val": ": int = 16"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with a length equal to the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [TextToVideoSDPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video#diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput) instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AnimateDiffPAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AnimateDiffPAGPipeline, MotionAdapter, DDIMScheduler
>>> from diffusers.utils import export_to_gif

>>> model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
>>> motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-2"
>>> motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id)
>>> scheduler = DDIMScheduler.from_pretrained(
...     model_id, subfolder="scheduler", beta_schedule="linear", steps_offset=1, clip_sample=False
... )
>>> pipe = AnimateDiffPAGPipeline.from_pretrained(
...     model_id,
...     motion_adapter=motion_adapter,
...     scheduler=scheduler,
...     pag_applied_layers=["mid"],
...     torch_dtype=torch.float16,
... ).to("cuda")

>>> video = pipe(
...     prompt="car, futuristic cityscape with neon lights, street, no human",
...     negative_prompt="low quality, bad quality",
...     num_inference_steps=25,
...     guidance_scale=6.0,
...     pag_scale=3.0,
...     generator=torch.Generator().manual_seed(42),
... ).frames[0]

>>> export_to_gif(video, "animatediff_pag.gif")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_animatediff.py#L165</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
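
`encode_prompt` is normally called internally by `__call__`, but it can be used to precompute embeddings and reuse them across several generations. A minimal sketch, assuming the `pipe` instance created in the example above:

```py
# Sketch: precompute the prompt embeddings once and reuse them (assumes `pipe` from the example above).
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="car, futuristic cityscape with neon lights, street, no human",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, bad quality",
)

video = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    num_inference_steps=25,
    guidance_scale=6.0,
).frames[0]
```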




</div></div>

## HunyuanDiTPAGPipeline[[diffusers.HunyuanDiTPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanDiTPAGPipeline</name><anchor>diffusers.HunyuanDiTPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_hunyuandit.py#L152</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": BertModel"}, {"name": "tokenizer", "val": ": BertTokenizer"}, {"name": "transformer", "val": ": HunyuanDiT2DModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": typing.Optional[diffusers.pipelines.stable_diffusion.safety_checker.StableDiffusionSafetyChecker] = None"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "text_encoder_2", "val": ": typing.Optional[transformers.models.t5.modeling_t5.T5EncoderModel] = None"}, {"name": "tokenizer_2", "val": ": typing.Optional[transformers.models.mt5.tokenization_mt5.MT5Tokenizer] = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'blocks.1'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. We use
  `sdxl-vae-fp16-fix`.
- **text_encoder** (Optional[`~transformers.BertModel`, `~transformers.CLIPTextModel`]) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  HunyuanDiT uses a fine-tuned bilingual CLIP model.
- **tokenizer** (Optional[`~transformers.BertTokenizer`, `~transformers.CLIPTokenizer`]) --
  A `BertTokenizer` or `CLIPTokenizer` to tokenize text.
- **transformer** ([HunyuanDiT2DModel](/docs/diffusers/main/en/api/models/hunyuan_transformer2d#diffusers.HunyuanDiT2DModel)) --
  The HunyuanDiT model designed by Tencent Hunyuan.
- **text_encoder_2** (`T5EncoderModel`) --
  The mT5 embedder. Specifically, it is 't5-v1_1-xxl'.
- **tokenizer_2** (`MT5Tokenizer`) --
  The tokenizer for the mT5 embedder.
- **scheduler** ([DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)) --
  A scheduler to be used in combination with HunyuanDiT to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for English/Chinese-to-image generation using HunyuanDiT and [Perturbed Attention
Guidance](https://huggingface.co/docs/diffusers/en/using-diffusers/pag).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

HunyuanDiT uses two text encoders: [mT5](https://huggingface.co/google/mt5-base) and a bilingual CLIP fine-tuned by the HunyuanDiT team.
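
Besides the `AutoPipelineForText2Image` route shown in the example below, the pipeline can be instantiated directly. A minimal sketch, assuming `from_pretrained` forwards `pag_applied_layers` to the constructor as the AutoPipeline example does (`"blocks.1"` is the default):

```py
import torch
from diffusers import HunyuanDiTPAGPipeline

# Sketch: load the PAG variant directly; pag_applied_layers selects the transformer
# blocks that perturbed attention guidance is applied to ("blocks.1" is the default).
pipe = HunyuanDiTPAGPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers",
    torch_dtype=torch.float16,
    pag_applied_layers=["blocks.1"],
).to("cuda")

image = pipe("一个宇航员在骑马", guidance_scale=4.0, pag_scale=3.0).images[0]
```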





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.HunyuanDiTPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_hunyuandit.py#L579</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = (1024, 1024)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`) --
  The height in pixels of the generated image.
- **width** (`int`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **prompt_embeds_2** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **negative_prompt_embeds_2** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **prompt_attention_mask_2** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds_2` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **negative_prompt_attention_mask_2** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds_2` is passed directly.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback_on_step_end** (`Callable[[int, int, Dict], None]`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A callback function or a list of callback functions to be called at the end of each denoising step.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  A list of tensor inputs that should be passed to the callback function. If not defined, all tensor
  inputs will be passed.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Rescale factor for the guided noise prediction, based on the findings of [Common Diffusion Noise
  Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). See Section 3.4 of the paper.
- **original_size** (`Tuple[int, int]`, *optional*, defaults to `(1024, 1024)`) --
  The original size of the image. Used to calculate the time ids.
- **target_size** (`Tuple[int, int]`, *optional*) --
  The target size of the image. Used to calculate the time ids.
- **crops_coords_top_left** (`Tuple[int, int]`, *optional*, defaults to `(0, 0)`) --
  The top left coordinates of the crop. Used to calculate the time ids.
- **use_resolution_binning** (`bool`, *optional*, defaults to `True`) --
  Whether to use resolution binning or not. If `True`, the input resolution will be mapped to the closest
  standard resolution. Supported resolutions are 1024x1024, 1280x1280, 1024x768, 1152x864, 1280x960,
  768x1024, 864x1152, 960x1280, 1280x768, and 768x1280. It is recommended to set this to `True`.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation with HunyuanDiT.



<ExampleCodeBlock anchor="diffusers.HunyuanDiTPAGPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers",
...     torch_dtype=torch.float16,
...     enable_pag=True,
...     pag_applied_layers=[14],
... ).to("cuda")

>>> # prompt = "an astronaut riding a horse"
>>> prompt = "一个宇航员在骑马"
>>> image = pipe(prompt, guidance_scale=4, pag_scale=3).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.HunyuanDiTPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_hunyuandit.py#L258</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "device", "val": ": device = None"}, {"name": "dtype", "val": ": dtype = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": typing.Optional[int] = None"}, {"name": "text_encoder_index", "val": ": int = 0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **dtype** (`torch.dtype`) --
  torch dtype
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **max_sequence_length** (`int`, *optional*) -- maximum sequence length to use for the prompt.
- **text_encoder_index** (`int`, *optional*) --
  Index of the text encoder to use. `0` for clip and `1` for T5.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
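
Because HunyuanDiT has two text encoders, `encode_prompt` is called once per encoder, selected via `text_encoder_index` (`0` for the bilingual CLIP, `1` for mT5). A minimal sketch of precomputing both embedding sets, assuming a pipeline loaded as in the examples above and the standard four-tensor return order; the resulting tensors can then be passed to `__call__` as `prompt_embeds`, `prompt_embeds_2`, and the corresponding attention masks:

```py
# Sketch: encode the prompt with both text encoders and reuse the embeddings.
prompt = "一个宇航员在骑马"

(
    prompt_embeds,
    negative_prompt_embeds,
    prompt_attention_mask,
    negative_prompt_attention_mask,
) = pipe.encode_prompt(prompt, device=pipe.device, text_encoder_index=0)

(
    prompt_embeds_2,
    negative_prompt_embeds_2,
    prompt_attention_mask_2,
    negative_prompt_attention_mask_2,
) = pipe.encode_prompt(prompt, device=pipe.device, text_encoder_index=1)
```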




</div></div>

## KolorsPAGPipeline[[diffusers.KolorsPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KolorsPAGPipeline</name><anchor>diffusers.KolorsPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_kolors.py#L129</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": ChatGLMModel"}, {"name": "tokenizer", "val": ": ChatGLMTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = False"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`ChatGLMModel`) --
  Frozen text-encoder. Kolors uses [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b).
- **tokenizer** (`ChatGLMTokenizer`) --
  Tokenizer of class
  [ChatGLMTokenizer](https://huggingface.co/THUDM/chatglm3-6b/blob/main/tokenization_chatglm.py).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `False`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `Kwai-Kolors/Kolors-diffusers`.
- **pag_applied_layers** (`str` or `List[str]`, *optional*, defaults to `"mid"`) --
  Set the attention layers to which perturbed attention guidance is applied. Can be a string or a list of
  strings with "down", "mid", "up", a whole transformer block, or specific transformer block attention
  layers, e.g. `["mid"]`, `["down", "mid"]`, `["down", "mid", "up.block_1"]`, or
  `["down", "mid", "up.block_1.attentions_0", "up.block_1.attentions_1"]`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Kolors.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
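
A minimal sketch of loading the pipeline directly and choosing the layers that perturbed attention guidance is applied to, assuming `from_pretrained` forwards `pag_applied_layers` to the constructor; the `AutoPipelineForText2Image` route with `enable_pag=True` shown in the example below works equally well:

```py
import torch
from diffusers import KolorsPAGPipeline

# Sketch: "mid" is the default; lists such as ["down", "mid"] or specific attention
# layers like "down.block_2.attentions_1" are also accepted (see pag_applied_layers above).
pipe = KolorsPAGPipeline.from_pretrained(
    "Kwai-Kolors/Kolors-diffusers",
    variant="fp16",
    torch_dtype=torch.float16,
    pag_applied_layers=["mid"],
).to("cuda")

image = pipe("A photo of a ladybug, macro, zoom, high quality", pag_scale=1.5).images[0]
```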





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.KolorsPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_kolors.py#L676</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [Kwai-Kolors/Kolors-diffusers](https://huggingface.co/Kwai-Kolors/Kolors-diffusers) and checkpoints
  that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [Kwai-Kolors/Kolors-diffusers](https://huggingface.co/Kwai-Kolors/Kolors-diffusers) and checkpoints
  that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.kolors.KolorsPipelineOutput` instead of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should usually be
  the same as `target_size`. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.kolors.KolorsPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.kolors.KolorsPipelineOutput` if
`return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.KolorsPAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "Kwai-Kolors/Kolors-diffusers",
...     variant="fp16",
...     torch_dtype=torch.float16,
...     enable_pag=True,
...     pag_applied_layers=["down.block_2.attentions_1", "up.block_0.attentions_1"],
... )
>>> pipe = pipe.to("cuda")

>>> prompt = (
...     "A photo of a ladybug, macro, zoom, high quality, film, holding a wooden sign with the text 'KOLORS'"
... )
>>> image = pipe(prompt, guidance_scale=5.5, pag_scale=1.5).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.KolorsPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_kolors.py#L217</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.KolorsPAGPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_kolors.py#L619</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Guidance scale values used to generate the embedding vectors that subsequently enrich the timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
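
This embedding is only used when the UNet is conditioned on the guidance scale (i.e. when `unet.config.time_cond_proj_dim` is set, as in LCM-style distilled models). Below is a minimal re-implementation sketch of the sinusoidal construction from the reference above, for illustration only and not the library's exact code:

```py
import math

import torch


def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32) -> torch.Tensor:
    """Sketch: sinusoidal embedding of the guidance weight, following the VDM reference above."""
    w = w * 1000.0  # scale the guidance weight before embedding
    half_dim = embedding_dim // 2
    emb = math.log(10000.0) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
    emb = w.to(dtype)[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:  # zero-pad when the target dimension is odd
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb  # shape: (len(w), embedding_dim)


# Example: embed a single guidance scale of 7.5.
emb = guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=512)
```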








</div></div>

## StableDiffusionPAGInpaintPipeline[[diffusers.StableDiffusionPAGInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionPAGInpaintPipeline</name><anchor>diffusers.StableDiffusionPAGInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_inpaint.py#L184</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image inpainting using Stable Diffusion and [Perturbed Attention Guidance](https://huggingface.co/docs/diffusers/en/using-diffusers/pag).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionPAGInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_inpaint.py#L910</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": Tensor = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.9999"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPAGInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForInpainting
>>> from diffusers.utils import load_image

>>> pipe = AutoPipelineForInpainting.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, enable_pag=True
... )
>>> pipe = pipe.to("cuda")
>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> init_image = load_image(img_url).convert("RGB")
>>> mask_image = load_image(mask_url).convert("RGB")
>>> prompt = "A majestic tiger sitting on a bench"
>>> generator = torch.Generator(device="cuda").manual_seed(0)
>>> image = pipe(
...     prompt=prompt,
...     image=init_image,
...     mask_image=mask_image,
...     strength=0.8,
...     num_inference_steps=50,
...     guidance_scale=7.5,
...     generator=generator,
...     pag_scale=3.0,
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionPAGInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_inpaint.py#L334</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionPAGInpaintPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_inpaint.py#L849</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## StableDiffusionPAGPipeline[[diffusers.StableDiffusionPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionPAGPipeline</name><anchor>diffusers.StableDiffusionPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd.py#L157</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
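
A minimal sketch of how these loaders can be combined with the PAG pipeline. The LoRA repository below is hypothetical and only stands in for any SD 1.5-compatible LoRA; the IP-Adapter repository and weight name follow the usual `h94/IP-Adapter` layout.

```py
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, enable_pag=True
... ).to("cuda")

>>> # hypothetical LoRA repository; substitute any SD 1.5-compatible LoRA
>>> pipe.load_lora_weights("some-user/some-sd15-lora")
>>> # standard IP-Adapter weights for SD 1.5
>>> pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
```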





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd.py#L745</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, enable_pag=True
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, pag_scale=0.3).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd.py#L304</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
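
A hedged sketch of pre-computing embeddings with `encode_prompt` and passing them back to `__call__`; it assumes a `pipe` created with `enable_pag=True` as in the example above and reuses the pipeline's own device.

```py
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt="a photo of an astronaut riding a horse on mars",
...     device=pipe.device,
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="low quality, blurry",
... )
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pag_scale=3.0,
... ).images[0]
```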




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionPAGPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd.py#L684</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
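
As a small usage sketch: the returned tensor has one row per guidance scale value. The note that the pipeline only feeds this embedding to UNets configured with a `time_cond_proj_dim` (e.g. guidance-distilled models) is an assumption about when `__call__` uses it, not something stated in this docstring.

```py
>>> import torch

>>> w = torch.tensor([7.5, 5.0])  # one guidance scale per item in the batch
>>> emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
>>> emb.shape
torch.Size([2, 256])
```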








</div></div>

## StableDiffusionPAGImg2ImgPipeline[[diffusers.StableDiffusionPAGImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionPAGImg2ImgPipeline</name><anchor>diffusers.StableDiffusionPAGImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_img2img.py#L152</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image-to-image generation using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
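
For completeness, a hedged sketch of the `from_single_file()` loader listed above; the checkpoint path is hypothetical and should point at a local SD 1.5-style `.ckpt` or `.safetensors` file.

```py
>>> import torch
>>> from diffusers import StableDiffusionPAGImg2ImgPipeline

>>> # hypothetical local checkpoint path
>>> pipe = StableDiffusionPAGImg2ImgPipeline.from_single_file(
...     "./v1-5-pruned-emaonly.safetensors", torch_dtype=torch.float16
... ).to("cuda")
```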





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionPAGImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_img2img.py#L782</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, NumPy array or tensor representing an image batch to be used as the starting point. For both
  NumPy array and PyTorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a NumPy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly it is not encoded again.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPAGImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForImage2Image
>>> from diffusers.utils import load_image

>>> pipe = AutoPipelineForImage2Image.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     enable_pag=True,
... )
>>> pipe = pipe.to("cuda")
>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"

>>> init_image = load_image(url).convert("RGB")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, image=init_image, pag_scale=0.3).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionPAGImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_img2img.py#L299</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionPAGImg2ImgPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_img2img.py#L725</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## StableDiffusionControlNetPAGPipeline[[diffusers.StableDiffusionControlNetPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionControlNetPAGPipeline</name><anchor>diffusers.StableDiffusionControlNetPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd.py#L165</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
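
A hedged construction sketch: the usual route to this class is `AutoPipelineForText2Image` with a `controlnet` and `enable_pag=True`. The canny ControlNet repository and the documentation image URL below are illustrative choices, and the canny preprocessing follows the common OpenCV recipe.

```py
>>> import cv2
>>> import numpy as np
>>> import torch
>>> from PIL import Image
>>> from diffusers import AutoPipelineForText2Image, ControlNetModel
>>> from diffusers.utils import load_image

>>> controlnet = ControlNetModel.from_pretrained(
...     "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
... )
>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     controlnet=controlnet,
...     torch_dtype=torch.float16,
...     enable_pag=True,
...     pag_applied_layers="mid",  # matches the class default shown above
... ).to("cuda")

>>> # illustrative input image; any RGB image works as a canny source
>>> original = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
... )
>>> canny = cv2.Canny(np.array(original), 100, 200)
>>> canny_image = Image.fromarray(np.stack([canny] * 3, axis=-1))

>>> image = pipe(
...     "a portrait painting, best quality", image=canny_image, pag_scale=3.0
... ).images[0]
```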





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionControlNetPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd.py#L272</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionControlNetPAGPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd.py#L807</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## StableDiffusionControlNetPAGInpaintPipeline[[diffusers.StableDiffusionControlNetPAGInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionControlNetPAGInpaintPipeline</name><anchor>diffusers.StableDiffusionControlNetPAGInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_inpaint.py#L131</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
  about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image inpainting using Stable Diffusion with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters

> [!TIP]
> This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting
> ([runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting)) as well as
> default text-to-image Stable Diffusion checkpoints
> ([runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)). Default text-to-image
> Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as
> [lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint).
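
A hedged end-to-end sketch tying the inpaint ControlNet from the tip above to PAG. It assumes `AutoPipelineForInpainting` resolves a `controlnet` plus `enable_pag=True` to this class, and `make_inpaint_condition` is a small illustrative helper (masked pixels are set to `-1.0`, the convention the inpaint ControlNet expects), not part of the pipeline API.

```py
>>> import numpy as np
>>> import torch
>>> from diffusers import AutoPipelineForInpainting, ControlNetModel
>>> from diffusers.utils import load_image


>>> def make_inpaint_condition(image, image_mask):
...     # illustrative helper: normalize to [0, 1], set masked pixels to -1.0, return (1, C, H, W)
...     image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
...     mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
...     image[mask > 0.5] = -1.0
...     image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
...     return torch.from_numpy(image)


>>> controlnet = ControlNetModel.from_pretrained(
...     "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
... )
>>> pipe = AutoPipelineForInpainting.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     controlnet=controlnet,
...     torch_dtype=torch.float16,
...     enable_pag=True,
... ).to("cuda")

>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> init_image = load_image(img_url).convert("RGB")
>>> mask_image = load_image(mask_url).convert("RGB")
>>> control_image = make_inpaint_condition(init_image, mask_image)

>>> image = pipe(
...     "A majestic tiger sitting on a bench",
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
...     num_inference_steps=50,
...     pag_scale=3.0,
... ).images[0]
```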





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionControlNetPAGInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_inpaint.py#L968</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 0.5"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, NumPy array or tensor representing an image batch to be used as the starting point. For both
  NumPy array and PyTorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a NumPy array or
  a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, NumPy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a NumPy array or PyTorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a PyTorch tensor would be `(B, 1, H, W)`, `(B,
  H, W)`, `(1, H, W)`, or `(H, W)`. For a NumPy array, it would be `(B, H, W, 1)`, `(B, H, W)`, `(H,
  W, 1)`, or `(H, W)`.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[List[torch.Tensor]]`, or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all masked areas, and then expand that area based
  on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area before
  resizing to the original image size for inpainting. This is useful when the masked area is small while
  the image is large and contains information irrelevant to inpainting, such as the background.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 0.5) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetPAGInpaintPipeline.__call__.example">

Examples:
```py
>>> # !pip install transformers accelerate
>>> import cv2
>>> from diffusers import AutoPipelineForInpainting, ControlNetModel, DDIMScheduler
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> from PIL import Image
>>> import torch

>>> init_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
... )
>>> init_image = init_image.resize((512, 512))

>>> generator = torch.Generator(device="cpu").manual_seed(1)

>>> mask_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
... )
>>> mask_image = mask_image.resize((512, 512))


>>> def make_canny_condition(image):
...     image = np.array(image)
...     image = cv2.Canny(image, 100, 200)
...     image = image[:, :, None]
...     image = np.concatenate([image, image, image], axis=2)
...     image = Image.fromarray(image)
...     return image


>>> control_image = make_canny_condition(init_image)

>>> controlnet = ControlNetModel.from_pretrained(
...     "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
... )
>>> pipe = AutoPipelineForInpainting.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, enable_pag=True
... )

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> image = pipe(
...     "a handsome man with ray-ban sunglasses",
...     num_inference_steps=20,
...     generator=generator,
...     eta=1.0,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
...     pag_scale=0.3,
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionControlNetPAGInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_inpaint.py#L247</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
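
`encode_prompt` can be used to precompute embeddings once and reuse them across calls. Below is a minimal, hedged sketch: it assumes the `pipe`, `init_image`, `mask_image`, and `control_image` objects from the example above, and the usual Stable Diffusion convention that `encode_prompt` returns a `(prompt_embeds, negative_prompt_embeds)` pair.

```py
# Hedged sketch: reuse the objects built in the example above. The return layout
# below follows the common Stable Diffusion convention for `encode_prompt` and is
# an assumption, not a guarantee.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a handsome man with ray-ban sunglasses",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# Pass the precomputed embeddings instead of the raw prompt strings.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
    num_inference_steps=20,
    pag_scale=0.3,
).images[0]
```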




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionControlNetPAGInpaintPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_inpaint.py#L915</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
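
The linked reference builds a sinusoidal (Fourier) embedding of the guidance weight `w`. Below is a rough, self-contained sketch of the idea; the 1000x scaling and the 10000 frequency base are illustrative choices, not necessarily the exact values used inside the pipeline.

```py
import torch

def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32):
    # Sinusoidal embedding of the guidance weight, in the spirit of the VDM
    # reference above; constants here are illustrative.
    assert w.ndim == 1
    w = w.to(dtype) * 1000.0
    half_dim = embedding_dim // 2
    freqs = torch.exp(
        -torch.log(torch.tensor(10000.0, dtype=dtype)) * torch.arange(half_dim, dtype=dtype) / half_dim
    )
    emb = w[:, None] * freqs[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:  # zero-pad if the requested dimension is odd
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb  # shape: (len(w), embedding_dim)

# One embedding vector per guidance scale in the batch.
emb = guidance_scale_embedding(torch.tensor([5.0, 7.5]), embedding_dim=512)
print(emb.shape)  # torch.Size([2, 512])
```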








</div></div>

## StableDiffusionXLPAGPipeline[[diffusers.StableDiffusionXLPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLPAGPipeline</name><anchor>diffusers.StableDiffusionXLPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L176</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
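
The loading methods listed above can be exercised directly on the PAG pipeline. A hedged sketch follows; the LoRA repository id is a hypothetical placeholder, not a recommendation.

```py
import torch
from diffusers import AutoPipelineForText2Image

# Enable PAG at load time; the AutoPipeline dispatches to StableDiffusionXLPAGPipeline.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    enable_pag=True,
).to("cuda")

# Inherited loader mixin in action; the repository id below is a hypothetical placeholder.
pipe.load_lora_weights("your-username/your-sdxl-lora", adapter_name="style")

image = pipe("a watercolor landscape", pag_scale=3.0).images[0]
```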





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L848</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891) `guidance_scale` is defined as `φ` in equation 16. of
  [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLPAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     torch_dtype=torch.float16,
...     enable_pag=True,
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, pag_scale=0.3).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L297</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
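
Below is a minimal, hedged sketch of precomputing SDXL embeddings and passing them back to the pipeline. It assumes the `pipe` object from the example above and the common SDXL convention that `encode_prompt` returns the prompt, negative prompt, pooled, and negative pooled embeddings in that order; that return layout is an assumption, not a guarantee.

```py
# Hedged sketch: reuse the `pipe` built in the example above. The four-tuple
# return order below is assumed from the common SDXL encode_prompt convention.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    "a photo of an astronaut riding a horse on mars",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)

# Feed the precomputed embeddings back instead of raw prompt strings.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    pag_scale=3.0,
).images[0]
```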




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLPAGPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L783</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## StableDiffusionXLPAGImg2ImgPipeline[[diffusers.StableDiffusionXLPAGImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLPAGImg2ImgPipeline</name><anchor>diffusers.StableDiffusionXLPAGImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py#L194</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the
  config of `stabilityai/stable-diffusion-xl-refiner-1.0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
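
Because this is the image-to-image variant, a typical entry point is `AutoPipelineForImage2Image` with `enable_pag=True`. A hedged usage sketch follows; the input image URL is illustrative only.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Enable PAG at load time; the AutoPipeline dispatches to the PAG image-to-image
# pipeline for SDXL checkpoints.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    enable_pag=True,
).to("cuda")

# Illustrative input image; any RGB image resized to the working resolution works.
init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
).resize((1024, 1024))

image = pipe(
    "a dreamy oil painting of the same scene",
    image=init_image,
    strength=0.3,
    pag_scale=3.0,
).images[0]
```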





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLPAGImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py#L999</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor` or `PIL.Image.Image` or `np.ndarray` or `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[np.ndarray]`) --
  The image(s) to modify with the pipeline.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. Note that when
  `denoising_start` is specified, the value of `strength` will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (e.g., with `denoising_end=0.8`, roughly the final 20% of
  timesteps still remain) and should be denoised by a successor pipeline with `denoising_start` set to the
  same value so that it only denoises that final portion of the schedule. The `denoising_end` parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891), where it is defined as `φ` in equation 16. Guidance
  rescale should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the
  same as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`. A minimal callback sketch is
  shown after the example below.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLPAGImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForImage2Image
>>> from diffusers.utils import load_image

>>> pipe = AutoPipelineForImage2Image.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0",
...     torch_dtype=torch.float16,
...     enable_pag=True,
... )
>>> pipe = pipe.to("cuda")
>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"

>>> init_image = load_image(url).convert("RGB")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, image=init_image, pag_scale=0.3).images[0]
```

</ExampleCodeBlock>
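
`callback_on_step_end` receives the pipeline, the step index, the scheduler timestep, and a dict with the
tensors requested via `callback_on_step_end_tensor_inputs`, and must return that dict (optionally with
modified tensors). A minimal sketch, reusing `pipe`, `prompt`, and `init_image` from the example above;
the function name and the printed statistic are only illustrative:

```py
def log_latents(pipe, step, timestep, callback_kwargs):
    # Inspect the running latents at the end of each denoising step.
    latents = callback_kwargs["latents"]
    print(f"step {step:03d}  t={int(timestep)}  latents std={latents.std().item():.4f}")
    # The callback must return the kwargs dict, optionally with modified tensors.
    return callback_kwargs


image = pipe(
    prompt,
    image=init_image,
    pag_scale=0.3,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```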







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLPAGImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py#L314</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
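
A minimal usage sketch, assuming the pipeline instance and `init_image` from the `__call__` example above
and the standard four-tensor return order of the SDXL `encode_prompt` methods (`prompt_embeds`,
`negative_prompt_embeds`, `pooled_prompt_embeds`, `negative_pooled_prompt_embeds`); the pre-computed
embeddings are then passed to `__call__` in place of the raw prompts:

```py
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of an astronaut riding a horse on mars",
    negative_prompt="low quality, blurry",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# SDXL pipelines require the pooled embeddings whenever `prompt_embeds` is passed.
image = pipe(
    image=init_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    pag_scale=0.3,
).images[0]
```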




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLPAGImg2ImgPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py#L930</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
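
This embedding is typically only used when the UNet exposes a guidance-embedding input
(`unet.config.time_cond_proj_dim` is set, e.g. for LCM-distilled UNets). A self-contained sketch of the
computation, mirroring the reference implementation linked above (sinusoidal features of the guidance
scale, scaled by 1000):

```py
import torch


def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32) -> torch.Tensor:
    # w: 1-D tensor of guidance scales, one per sample in the batch.
    assert w.ndim == 1
    w = w * 1000.0
    half_dim = embedding_dim // 2
    # Log-spaced frequencies, as in standard sinusoidal timestep embeddings.
    emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
    emb = w.to(dtype)[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:  # pad the odd dimension
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb  # shape (len(w), embedding_dim)


print(guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=256).shape)  # torch.Size([1, 256])
```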








</div></div>

## StableDiffusionXLPAGInpaintPipeline[[diffusers.StableDiffusionXLPAGInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLPAGInpaintPipeline</name><anchor>diffusers.StableDiffusionXLPAGInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_inpaint.py#L207</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the config
  of `stabilityai/stable-diffusion-xl-refiner-1-0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to `True` if the package is installed; otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image inpainting using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
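
For example, LoRA and IP-Adapter weights can be attached through these inherited mixins. A minimal sketch
(the LoRA repository id is a placeholder; the IP-Adapter checkpoint names follow the ones commonly used in
the IP-Adapter docs):

```py
import torch
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    enable_pag=True,
).to("cuda")

# LoRA weights via the inherited StableDiffusionXLLoraLoaderMixin (placeholder repo id).
pipe.load_lora_weights("your-username/your-sdxl-lora")

# IP-Adapter weights via the inherited IPAdapterMixin.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)
```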





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLPAGInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_inpaint.py#L1090</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": Tensor = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.9999"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": 
typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
  be masked out with `mask_image` and repainted according to `prompt`.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and mask. If `None`, no crop is applied to the
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all of the masked area, and then expand that
  region based on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded
  area before resizing to the original image size for inpainting. This is useful when the masked area is
  small while the image is large and contains information irrelevant to inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 0.9999) --
  Conceptually, indicates how much to transform the masked portion of the reference `image`. Must be
  between 0 and 1. `image` will be used as a starting point, adding more noise to it the larger the
  `strength`. The number of denoising steps depends on the amount of noise initially added. When
  `strength` is 1, added noise will be maximum and the denoising process will run for the full number of
  iterations specified in `num_inference_steps`. A value of 1, therefore, essentially ignores the masked
  portion of the reference `image`. Note that if `denoising_start` is specified, the value of `strength`
  will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise and should be denoised by a successor pipeline that has
  `denoising_start` set to the same value (for example, `denoising_end=0.8` leaves the final 20% of the
  schedule to a refiner with `denoising_start=0.8`). The `denoising_end` parameter should ideally be
  utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated
  in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
  A minimal base-plus-refiner sketch is shown after the example below.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with length equal to the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored for other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the
  same as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference, with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLPAGInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForInpainting
>>> from diffusers.utils import load_image

>>> pipe = AutoPipelineForInpainting.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     torch_dtype=torch.float16,
...     variant="fp16",
...     enable_pag=True,
... )
>>> pipe.to("cuda")

>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

>>> init_image = load_image(img_url).convert("RGB")
>>> mask_image = load_image(mask_url).convert("RGB")

>>> prompt = "A majestic tiger sitting on a bench"
>>> image = pipe(
...     prompt=prompt,
...     image=init_image,
...     mask_image=mask_image,
...     num_inference_steps=50,
...     strength=0.80,
...     pag_scale=0.3,
... ).images[0]
```

</ExampleCodeBlock>
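
As referenced in the `denoising_end` description above, a minimal base-plus-refiner ("Mixture of
Denoisers") sketch. It assumes the standard SDXL base and refiner checkpoints, that the refiner can also
be loaded through `AutoPipelineForInpainting`, and an illustrative split point of 0.8; PAG is only enabled
on the base pipeline here:

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

base = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    enable_pag=True,
).to("cuda")
refiner = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")
prompt = "A majestic tiger sitting on a bench"
split = 0.8  # illustrative hand-over point between base and refiner

# The base pipeline denoises the first 80% of the schedule and returns latents.
latents = base(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=40,
    denoising_end=split,
    output_type="latent",
    pag_scale=3.0,
).images
# The refiner picks up at the same point and finishes the remaining 20%.
image = refiner(
    prompt=prompt,
    image=latents,
    mask_image=mask_image,
    num_inference_steps=40,
    denoising_start=split,
).images[0]
```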







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLPAGInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_inpaint.py#L404</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLPAGInpaintPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_inpaint.py#L1021</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## StableDiffusionXLControlNetPAGPipeline[[diffusers.StableDiffusionXLControlNetPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetPAGPipeline</name><anchor>diffusers.StableDiffusionXLControlNetPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py#L188</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **text_encoder_2** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Second frozen text-encoder
  ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **tokenizer_2** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark](https://github.com/ShieldMnt/invisible-watermark/) library to
  watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no
  watermarker is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
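
A minimal construction sketch: passing a `controlnet` together with `enable_pag=True` to
`AutoPipelineForText2Image` should dispatch to this pipeline. The Canny ControlNet checkpoint below is the
commonly used `diffusers/controlnet-canny-sdxl-1.0`; the conditioning-image URL is only a placeholder for
a pre-computed Canny edge map:

```py
import torch
from diffusers import AutoPipelineForText2Image, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    enable_pag=True,
).to("cuda")

# Placeholder URL: replace with an actual Canny edge map of your conditioning image.
canny_image = load_image("https://example.com/canny-edge-map.png")

image = pipe(
    prompt="a photorealistic portrait, studio lighting",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
    pag_scale=3.0,
    num_inference_steps=30,
).images[0]
```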





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py#L1013</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": 
"pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
  and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, pooled text embeddings are generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
  weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`. A minimal callback sketch is
  shown after the example below.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned containing the output images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetPAGPipeline.__call__.example">

Examples:
```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import AutoPipelineForText2Image, ControlNetModel, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )

>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5  # recommended for good generalization
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     controlnet=controlnet,
...     vae=vae,
...     torch_dtype=torch.float16,
...     enable_pag=True,
... )
>>> pipe.enable_model_cpu_offload()

>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)

>>> # generate image
>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image, pag_scale=0.3
... ).images[0]
```

</ExampleCodeBlock>
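The `callback_on_step_end` argument described above can be used to inspect or tweak intermediate tensors while the denoising loop runs. The snippet below is a minimal, illustrative sketch rather than part of the official example; it assumes the `pipe`, `prompt`, and `canny_image` objects created in the example above, and that `"latents"` is listed in the pipeline's `_callback_tensor_inputs`.

```py
# Hedged sketch (not from the official docs): log the intermediate latents at
# every denoising step via `callback_on_step_end`.
def log_latents(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latents {tuple(latents.shape)}")
    # Any entries returned here overwrite the corresponding pipeline variables.
    return callback_kwargs


image = pipe(
    prompt,
    image=canny_image,
    controlnet_conditioning_scale=0.5,
    pag_scale=0.3,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```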







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py#L309</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
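When the same prompt is reused across several generations, the text encoders can be run once up front and the resulting embeddings passed back into the pipeline call. The snippet below is a rough sketch of that pattern and is not taken from the library's documentation; it assumes a loaded `StableDiffusionXLControlNetPAGPipeline` named `pipe` and a prepared conditioning image named `control_image`.

```py
# Hedged sketch: pre-compute prompt embeddings once with `encode_prompt`, then
# reuse them for multiple generations. `pipe` and `control_image` are assumed
# to exist already.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a jungle",
    negative_prompt="low quality, sketches",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=control_image,
).images[0]
```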




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLControlNetPAGPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py#L956</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
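The referenced VDM code builds a sinusoidal embedding of the guidance scale `w`, analogous to a timestep embedding. Below is a rough, self-contained re-implementation of that idea; the 1000x scaling and the log(10000) frequency base follow common timestep-embedding conventions and are assumptions, not guaranteed to match this pipeline exactly.

```py
import torch


def guidance_scale_embedding_sketch(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32):
    """Illustrative sinusoidal embedding of a 1-D guidance-scale tensor `w`.

    Returns a tensor of shape (len(w), embedding_dim). The scaling constants are
    assumptions chosen to mirror typical timestep embeddings.
    """
    assert w.ndim == 1
    w = w * 1000.0
    half_dim = embedding_dim // 2
    freqs = torch.exp(
        -torch.log(torch.tensor(10000.0)) * torch.arange(half_dim, dtype=dtype) / (half_dim - 1)
    )
    args = w.to(dtype)[:, None] * freqs[None, :]
    emb = torch.cat([torch.sin(args), torch.cos(args)], dim=1)
    if embedding_dim % 2 == 1:  # zero-pad odd dimensions
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb


# Example: embed a batch of two guidance scales.
emb = guidance_scale_embedding_sketch(torch.tensor([5.0, 7.5]), embedding_dim=512)
print(emb.shape)  # torch.Size([2, 512])
```

As the parameter description above notes, the resulting vectors are typically added to the timestep embeddings to condition the model on the guidance scale.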








</div></div>

## StableDiffusionXLControlNetPAGImg2ImgPipeline[[diffusers.StableDiffusionXLControlNetPAGImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetPAGImg2ImgPipeline</name><anchor>diffusers.StableDiffusionXLControlNetPAGImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl_img2img.py#L168</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'mid'"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets
  as a list, the outputs from each ControlNet are added together to create one combined additional
  conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the
  config of `stabilityai/stable-diffusion-xl-refiner-1-0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
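
As a quick illustration of these inherited loaders, the sketch below loads the pipeline and then attaches LoRA weights and an IP-Adapter. The LoRA and IP-Adapter paths are placeholders chosen for illustration only; substitute real repositories or local files (the IP-Adapter checkpoint is also expected to provide an image encoder if the pipeline does not already have one).

```py
# Hedged sketch of the loader mixins listed above; paths marked "placeholder"
# are hypothetical and must be replaced with real checkpoints.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPAGImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0-small", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPAGImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)

# Attach extra weights through the inherited loader mixins.
pipe.load_lora_weights("path/to/sdxl-lora", weight_name="lora.safetensors")  # placeholder
pipe.load_ip_adapter(
    "path/to/ip-adapter-repo", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)  # placeholder
pipe.set_ip_adapter_scale(0.6)
```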





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetPAGImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl_img2img.py#L1091</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 0.8"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], 
diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The initial image to be used as the starting point for the image generation process. Can also accept
  image latents as `image`; if latents are passed directly, they will not be encoded again.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition. ControlNet uses this input condition to generate guidance for the UNet.
  If the type is specified as `torch.Tensor`, it is passed to the ControlNet as is. `PIL.Image.Image` is
  also accepted as an image. The dimensions of the output image default to `image`'s dimensions. If height
  and/or width are passed, `image` is resized accordingly. If multiple ControlNets are specified in
  `init`, images must be passed as a list such that each element of the list can be correctly batched for
  input to a single ControlNet.
- **height** (`int`, *optional*, defaults to the size of control_image) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to the size of control_image) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added; for example, `strength=0.8` with `num_inference_steps=50` runs
  roughly the last 40 denoising steps. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original unet. If multiple ControlNets are specified in init, you can set the
  corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  In this mode, the ControlNet encoder tries its best to recognize the content of the input image even if
  you remove all prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the controlnet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the controlnet stops applying.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple` containing the output images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetPAGImg2ImgPipeline.__call__.example">

Examples:
```py
>>> # pip install accelerate transformers safetensors diffusers

>>> import torch
>>> import numpy as np
>>> from PIL import Image

>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation
>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetPAGImg2ImgPipeline, AutoencoderKL
>>> from diffusers.utils import load_image


>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")
>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-depth-sdxl-1.0-small",
...     variant="fp16",
...     use_safetensors=True,
...     torch_dtype=torch.float16,
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetPAGImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     controlnet=controlnet,
...     vae=vae,
...     variant="fp16",
...     use_safetensors=True,
...     torch_dtype=torch.float16,
...     enable_pag=True,
... )
>>> pipe.enable_model_cpu_offload()


>>> def get_depth_map(image):
...     image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda")
...     with torch.no_grad(), torch.autocast("cuda"):
...         depth_map = depth_estimator(image).predicted_depth

...     depth_map = torch.nn.functional.interpolate(
...         depth_map.unsqueeze(1),
...         size=(1024, 1024),
...         mode="bicubic",
...         align_corners=False,
...     )
...     depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
...     depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
...     depth_map = (depth_map - depth_min) / (depth_max - depth_min)
...     image = torch.cat([depth_map] * 3, dim=1)
...     image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
...     image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
...     return image


>>> prompt = "A robot, 4k photo"
>>> image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... ).resize((1024, 1024))
>>> controlnet_conditioning_scale = 0.5  # recommended for good generalization
>>> depth_image = get_depth_map(image)

>>> images = pipe(
...     prompt,
...     image=image,
...     control_image=depth_image,
...     strength=0.99,
...     num_inference_steps=50,
...     controlnet_conditioning_scale=controlnet_conditioning_scale,
... ).images
>>> images[0].save("robot_cat.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetPAGImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl_img2img.py#L301</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusion3PAGPipeline[[diffusers.StableDiffusion3PAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusion3PAGPipeline</name><anchor>diffusers.StableDiffusion3PAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_3.py#L136</source><parameters>[{"name": "transformer", "val": ": SD3Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "text_encoder_3", "val": ": T5EncoderModel"}, {"name": "tokenizer_3", "val": ": T5TokenizerFast"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'blocks.1'"}]</parameters><paramsdesc>- **transformer** ([SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant,
  with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size`
  as its dimension.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **text_encoder_3** (`T5EncoderModel`) --
  Frozen text-encoder. Stable Diffusion 3 uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_3** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

[PAG pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/pag) for text-to-image generation
using Stable Diffusion 3.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusion3PAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_3.py#L684</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead
  of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 256) -- Maximum sequence length to use with the `prompt`.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusion3PAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers",
...     torch_dtype=torch.float16,
...     enable_pag=True,
...     pag_applied_layers=["blocks.13"],
... )
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> image = pipe(prompt, guidance_scale=5.0, pag_scale=0.7).images[0]
>>> image.save("sd3_pag.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusion3PAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_3.py#L335</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
  used in all text-encoders
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>
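
A hedged sketch of calling `encode_prompt` on its own is shown below. It assumes, as in other Stable Diffusion 3 pipelines, that the method returns `(prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds)` and that `pipe` is the PAG pipeline loaded in the example above; check the linked source for the exact return order.

```py
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="A cat holding a sign that says hello world",
...     prompt_2=None,  # falls back to `prompt` for the second CLIP text encoder
...     prompt_3=None,  # falls back to `prompt` for the T5 text encoder
... )
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     guidance_scale=5.0,
...     pag_scale=0.7,
... ).images[0]
```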





</div></div>

## StableDiffusion3PAGImg2ImgPipeline[[diffusers.StableDiffusion3PAGImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusion3PAGImg2ImgPipeline</name><anchor>diffusers.StableDiffusion3PAGImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_3_img2img.py#L152</source><parameters>[{"name": "transformer", "val": ": SD3Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "text_encoder_3", "val": ": T5EncoderModel"}, {"name": "tokenizer_3", "val": ": T5TokenizerFast"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'blocks.1'"}]</parameters><paramsdesc>- **transformer** ([SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant,
  with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size`
  as its dimension.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **text_encoder_3** (`T5EncoderModel`) --
  Frozen text-encoder. Stable Diffusion 3 uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_3** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

[PAG pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/pag) for image-to-image generation
using Stable Diffusion 3.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusion3PAGImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_3_img2img.py#L735</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead
  of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 256) -- Maximum sequence length to use with the `prompt`.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusion3PAGImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusion3PAGImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = StableDiffusion3PAGImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers",
...     torch_dtype=torch.float16,
...     pag_applied_layers=["blocks.13"],
... )
>>> pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
>>> init_image = load_image(url).convert("RGB")
>>> image = pipe(prompt, image=init_image, guidance_scale=5.0, pag_scale=0.7).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusion3PAGImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_3_img2img.py#L351</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
  used in all text-encoders
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## PixArtSigmaPAGPipeline[[diffusers.PixArtSigmaPAGPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PixArtSigmaPAGPipeline</name><anchor>diffusers.PixArtSigmaPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_pixart_sigma.py#L144</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": PixArtTransformer2DModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'blocks.1'"}]</parameters></docstring>

[PAG pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/pag) for text-to-image generation
using PixArt-Sigma.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.PixArtSigmaPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_pixart_sigma.py#L574</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool` defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **max_sequence_length** (`int` defaults to 300) -- Maximum sequence length to use with the `prompt`.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.PixArtSigmaPAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoPipelineForText2Image

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
...     torch_dtype=torch.float16,
...     pag_applied_layers=["blocks.14"],
...     enable_pag=True,
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "A small cactus with a happy face in the Sahara desert"
>>> image = pipe(prompt, pag_scale=4.0, guidance_scale=1.0).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.PixArtSigmaPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_pixart_sigma.py#L190</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  PixArt-Sigma, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** -- (`torch.device`, *optional*):
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma, these should be the embeddings of the ""
  string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
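
A hedged sketch of using the pre-computed embeddings is shown below. It assumes, as in other PixArt pipelines, that the method returns `(prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask)` and that `pipe` is the PAG pipeline loaded in the example above; check the linked source for the exact return order.

```py
>>> (
...     prompt_embeds,
...     prompt_attention_mask,
...     negative_prompt_embeds,
...     negative_prompt_attention_mask,
... ) = pipe.encode_prompt("A small cactus with a happy face in the Sahara desert")
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     prompt_attention_mask=prompt_attention_mask,
...     negative_prompt_embeds=negative_prompt_embeds,
...     negative_prompt_attention_mask=negative_prompt_attention_mask,
...     pag_scale=4.0,
...     guidance_scale=1.0,
... ).images[0]
```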




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/pag.md" />

### Chroma
https://huggingface.co/docs/diffusers/main/api/pipelines/chroma.md

# Chroma

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

Chroma is a text-to-image generation model based on Flux.

Original model checkpoints for Chroma can be found [here](https://huggingface.co/lodestones/Chroma).

> [!TIP]
> Chroma can use all the same optimizations as Flux.

## Inference

The Diffusers version of Chroma is based on the [`unlocked-v37`](https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v37.safetensors) version of the original model, which is available in the [Chroma repository](https://huggingface.co/lodestones/Chroma).

```python
import torch
from diffusers import ChromaPipeline

pipe = ChromaPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = [
    "A high-fashion close-up portrait of a blonde woman in clear sunglasses. The image uses a bold teal and red color split for dramatic lighting. The background is a simple teal-green. The photo is sharp and well-composed, and is designed for viewing with anaglyph 3D glasses for optimal effect. It looks professionally done."
]
negative_prompt =  ["low quality, ugly, unfinished, out of focus, deformed, disfigure, blurry, smudged, restricted palette, flat colors"]

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    generator=torch.Generator("cpu").manual_seed(433),
    num_inference_steps=40,
    guidance_scale=3.0,
    num_images_per_prompt=1,
).images[0]
image.save("chroma.png")
```

## Loading from a single file

To use updated model checkpoints that are not in the Diffusers format, you can use the `ChromaTransformer2DModel` class to load the model from a single file in the original format. This is also useful when trying to load finetunes or quantized versions of the models that have been published by the community.

The following example demonstrates how to run Chroma from a single file.

```python
import torch
from diffusers import ChromaTransformer2DModel, ChromaPipeline

model_id = "lodestones/Chroma"
dtype = torch.bfloat16

transformer = ChromaTransformer2DModel.from_single_file("https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v37.safetensors", torch_dtype=dtype)

pipe = ChromaPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=dtype)
pipe.enable_model_cpu_offload()

prompt = [
    "A high-fashion close-up portrait of a blonde woman in clear sunglasses. The image uses a bold teal and red color split for dramatic lighting. The background is a simple teal-green. The photo is sharp and well-composed, and is designed for viewing with anaglyph 3D glasses for optimal effect. It looks professionally done."
]
negative_prompt =  ["low quality, ugly, unfinished, out of focus, deformed, disfigure, blurry, smudged, restricted palette, flat colors"]

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    generator=torch.Generator("cpu").manual_seed(433),
    num_inference_steps=40,
    guidance_scale=3.0,
).images[0]

image.save("chroma-single-file.png")
```

## ChromaPipeline[[diffusers.ChromaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ChromaPipeline</name><anchor>diffusers.ChromaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L151</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": ChromaTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([ChromaTransformer2DModel](/docs/diffusers/main/en/api/models/chroma_transformer#diffusers.ChromaTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representation
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Chroma pipeline for text-to-image generation.

Reference: https://huggingface.co/lodestones/Chroma/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.ChromaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L638</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 35"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  not greater than `1`).
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 35) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** -- (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with a length equal to the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** --
  (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with a length equal to the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt embeddings. Used to mask out padding tokens in the prompt sequence.
  Chroma requires a single padding token to remain unmasked. Please refer to
  https://huggingface.co/lodestones/Chroma#tldr-masking-t5-padding-tokens-enhanced-fidelity-and-increased-stability-during-training
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt embeddings. Used to mask out padding tokens in the negative
  prompt sequence. Chroma requires a single padding token to remain unmasked. Please refer to
  https://huggingface.co/lodestones/Chroma#tldr-masking-t5-padding-tokens-enhanced-fidelity-and-increased-stability-during-training
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.chroma.ChromaPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.chroma.ChromaPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.chroma.ChromaPipelineOutput` if
`return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.ChromaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import ChromaPipeline, ChromaTransformer2DModel

>>> model_id = "lodestones/Chroma"
>>> ckpt_path = "https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v37.safetensors"
>>> transformer = ChromaTransformer2DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)
>>> pipe = ChromaPipeline.from_pretrained(
...     model_id,
...     transformer=transformer,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> prompt = [
...     "A high-fashion close-up portrait of a blonde woman in clear sunglasses. The image uses a bold teal and red color split for dramatic lighting. The background is a simple teal-green. The photo is sharp and well-composed, and is designed for viewing with anaglyph 3D glasses for optimal effect. It looks professionally done."
... ]
>>> negative_prompt = [
...     "low quality, ugly, unfinished, out of focus, deformed, disfigure, blurry, smudged, restricted palette, flat colors"
... ]
>>> image = pipe(prompt, negative_prompt=negative_prompt).images[0]
>>> image.save("chroma.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.ChromaPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L520</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.ChromaPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L547</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.ChromaPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L507</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.ChromaPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L533</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
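
A short usage sketch of the memory-saving VAE toggles documented above; slicing and tiling can be enabled before a memory-heavy generation and disabled again afterwards:

```py
>>> import torch
>>> from diffusers import ChromaPipeline

>>> pipe = ChromaPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)
>>> pipe.enable_model_cpu_offload()
>>> pipe.enable_vae_slicing()  # decode batched latents one slice at a time
>>> pipe.enable_vae_tiling()  # decode large images tile by tile

>>> image = pipe("a red vintage car parked by the sea", num_images_per_prompt=2).images[0]

>>> pipe.disable_vae_slicing()  # back to single-pass decoding
>>> pipe.disable_vae_tiling()
```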


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.ChromaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma.py#L262</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## ChromaImg2ImgPipeline[[diffusers.ChromaImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ChromaImg2ImgPipeline</name><anchor>diffusers.ChromaImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L163</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": ChromaTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([ChromaTransformer2DModel](/docs/diffusers/main/en/api/models/chroma_transformer#diffusers.ChromaTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representation
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Chroma pipeline for image-to-image generation.

Reference: https://huggingface.co/lodestones/Chroma/
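
A hedged usage sketch for image-to-image generation is shown below; it assumes the `lodestones/Chroma` checkpoint loads directly with this pipeline class and reuses an example image URL from elsewhere in these docs:

```py
>>> import torch
>>> from diffusers import ChromaImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = ChromaImg2ImgPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)
>>> pipe.enable_model_cpu_offload()

>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
>>> init_image = load_image(url).convert("RGB")
>>> image = pipe(
...     "a fantasy landscape with vivid colors, highly detailed",
...     image=init_image,
...     strength=0.8,
...     num_inference_steps=35,
... ).images[0]
>>> image.save("chroma-img2img.png")
```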





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.ChromaImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L699</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 35"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "strength", "val": ": float = 0.9"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  not greater than `1`).
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 35) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **strength** (`float`, *optional*, defaults to 0.9) --
  Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. `image` is
  used as a starting point, and more noise is added the higher the `strength`. The number of denoising
  steps depends on the amount of noise initially added. When `strength` is 1, the added noise is maximal
  and the denoising process runs for the full number of iterations specified in `num_inference_steps`. A
  value of 1, therefore, essentially ignores `image`.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with length equal to the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with length equal to the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt embeddings. Used to mask out padding tokens in the prompt sequence.
  Chroma requires a single padding token to remain unmasked. Please refer to
  https://huggingface.co/lodestones/Chroma#tldr-masking-t5-padding-tokens-enhanced-fidelity-and-increased-stability-during-training
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt embeddings. Used to mask out padding tokens in the negative
  prompt sequence. Chroma requires a single padding token to remain unmasked. Please refer to
  https://huggingface.co/lodestones/Chroma#tldr-masking-t5-padding-tokens-enhanced-fidelity-and-increased-stability-during-training
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.chroma.ChromaPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.chroma.ChromaPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.chroma.ChromaPipelineOutput` if
`return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.ChromaImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import ChromaTransformer2DModel, ChromaImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> model_id = "lodestones/Chroma"
>>> ckpt_path = "https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v37.safetensors"
>>> transformer = ChromaTransformer2DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)
>>> pipe = ChromaImg2ImgPipeline.from_pretrained(
...     model_id,
...     transformer=transformer,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> init_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
... )
>>> prompt = "a scenic fantasy landscape with a river and mountains in the background, vibrant colors, detailed, high resolution"
>>> negative_prompt = "low quality, ugly, unfinished, out of focus, deformed, disfigure, blurry, smudged, restricted palette, flat colors"
>>> image = pipe(prompt, image=init_image, negative_prompt=negative_prompt).images[0]
>>> image.save("chroma-img2img.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.ChromaImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L554</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.ChromaImg2ImgPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L581</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.ChromaImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L541</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.ChromaImg2ImgPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L567</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
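
A minimal sketch of how these memory-saving toggles fit together, reusing the single-file checkpoint from the `__call__` example above (the loading details are illustrative and may need adjusting for your setup):

```py
import torch
from diffusers import ChromaTransformer2DModel, ChromaImg2ImgPipeline

# Load the pipeline as in the __call__ example above (illustrative checkpoint path).
ckpt_path = "https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v37.safetensors"
transformer = ChromaTransformer2DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)
pipe = ChromaImg2ImgPipeline.from_pretrained(
    "lodestones/Chroma", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Trade speed for memory: decode the VAE in slices (per batch item) and tiles (per image region).
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run the pipeline ...

# Restore single-step decoding once memory is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```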


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.ChromaImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/chroma/pipeline_chroma_img2img.py#L291</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/chroma.md" />

### AudioLDM
https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm.md

# AudioLDM

AudioLDM was proposed in [AudioLDM: Text-to-Audio Generation with Latent Diffusion Models](https://huggingface.co/papers/2301.12503) by Haohe Liu et al. Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM
is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from [CLAP](https://huggingface.co/docs/transformers/main/model_doc/clap)
latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional
sound effects, human speech and music.

The abstract from the paper is:

*Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at [this https URL](https://audioldm.github.io/).*

The original codebase can be found at [haoheliu/AudioLDM](https://github.com/haoheliu/AudioLDM).

## Tips

When constructing a prompt, keep in mind:

* Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific (for example, "water stream in a forest" instead of "stream").
* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with.

During inference:

* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument, as shown in the sketch below.

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
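
Putting these tips together, a minimal sketch (the checkpoint id mirrors the full example further below and is an assumption for illustration):

```py
import scipy
import torch
from diffusers import AudioLDMPipeline

# Assumed checkpoint id, taken from the pipeline example further below.
pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Descriptive, context-specific prompt built from general terms.
prompt = "high quality recording of a water stream in a forest with birds chirping"

audio = pipe(
    prompt,
    num_inference_steps=25,  # more steps -> higher quality, slower inference
    audio_length_in_s=10.0,  # controls the duration of the generated sample
).audios[0]

# Save the 16 kHz waveform as a .wav file.
scipy.io.wavfile.write("forest_stream.wav", rate=16000, data=audio)
```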

## AudioLDMPipeline[[diffusers.AudioLDMPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioLDMPipeline</name><anchor>diffusers.AudioLDMPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm/pipeline_audioldm.py#L60</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": ClapTextModelWithProjection"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.roberta.tokenization_roberta.RobertaTokenizer, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast]"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "vocoder", "val": ": SpeechT5HifiGan"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([ClapTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clap#transformers.ClapTextModelWithProjection)) --
  Frozen text-encoder (`ClapTextModelWithProjection`, specifically the
  [laion/clap-htsat-unfused](https://huggingface.co/laion/clap-htsat-unfused) variant).
- **tokenizer** (`PreTrainedTokenizer`) --
  A [RobertaTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/roberta#transformers.RobertaTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded audio latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded audio latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **vocoder** ([SpeechT5HifiGan](https://huggingface.co/docs/transformers/main/en/model_doc/speecht5#transformers.SpeechT5HifiGan)) --
  Vocoder of class `SpeechT5HifiGan`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-audio generation using AudioLDM.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AudioLDMPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm/pipeline_audioldm.py#L360</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "audio_length_in_s", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": int = 10"}, {"name": "guidance_scale", "val": ": float = 2.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_waveforms_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide audio generation. If not defined, you need to pass `prompt_embeds`.
- **audio_length_in_s** (`float`, *optional*, defaults to 5.12) --
  The length of the generated audio sample in seconds.
- **num_inference_steps** (`int`, *optional*, defaults to 10) --
  The number of denoising steps. More denoising steps usually lead to a higher quality audio at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 2.5) --
  A higher guidance scale value encourages the model to generate audio that is closely linked to the text
  `prompt` at the expense of lower sound quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in audio generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_waveforms_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of waveforms to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated audio. Choose between `"np"` to return a NumPy `np.ndarray` or
  `"pt"` to return a PyTorch `torch.Tensor` object.</paramsdesc><paramgroups>0</paramgroups><rettype>[AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AudioPipelineOutput](/docs/diffusers/main/en/api/pipelines/dance_diffusion#diffusers.AudioPipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated audio.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AudioLDMPipeline.__call__.example">

Examples:
```py
>>> from diffusers import AudioLDMPipeline
>>> import torch
>>> import scipy

>>> repo_id = "cvssp/audioldm-s-full-v2"
>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]

>>> # save the audio sample as a .wav file
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
```

</ExampleCodeBlock>







</div></div>

## AudioPipelineOutput[[diffusers.AudioPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioPipelineOutput</name><anchor>diffusers.AudioPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L132</source><parameters>[{"name": "audios", "val": ": ndarray"}]</parameters><paramsdesc>- **audios** (`np.ndarray`) --
  List of denoised audio samples of a NumPy array of shape `(batch_size, num_channels, sample_rate)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for audio pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/audioldm.md" />

### DiffEdit
https://huggingface.co/docs/diffusers/main/api/pipelines/diffedit.md

# DiffEdit

[DiffEdit: Diffusion-based semantic image editing with mask guidance](https://huggingface.co/papers/2210.11427) is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.

The abstract from the paper is:

*Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.*

The original codebase can be found at [Xiang-cd/DiffEdit-stable-diffusion](https://github.com/Xiang-cd/DiffEdit-stable-diffusion), and you can try it out in this [demo](https://blog.problemsolversguild.com/technical/research/2022/11/02/DiffEdit-Implementation.html).

This pipeline was contributed by [clarencechen](https://github.com/clarencechen). ❤️

## Tips

* The pipeline can generate masks that can be fed into other inpainting pipelines.
* In order to generate an image using this pipeline, both an image mask (the source and target prompts can be manually specified or generated, and passed to [generate_mask()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.generate_mask))
and a set of partially inverted latents (generated using [invert()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.invert)) _must_ be provided as arguments when calling the pipeline to generate the final edited image.
* The function [generate_mask()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.generate_mask) exposes two prompt arguments, `source_prompt` and `target_prompt`,
that let you control the locations of the semantic edits in the final image. Say you want to translate
from "cat" to "dog"; the edit direction is then "cat -> dog". To reflect this in the generated mask, set
the embeddings related to the phrases including "cat" to `source_prompt` and "dog" to `target_prompt`.
* When generating partially inverted latents using `invert`, assign a caption or text embedding describing the
overall image to the `prompt` argument to help guide the inverse latent sampling process. In most cases, the
source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives.
* When calling the pipeline to generate the final edited image, assign the source concept to `negative_prompt`
and the target concept to `prompt`. Taking the above example, you simply have to set the embeddings related to
the phrases including "cat" to `negative_prompt` and "dog" to `prompt`.
* If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to (see the sketch after this list):
    * Swap the `source_prompt` and `target_prompt` in the arguments to `generate_mask`.
    * Change the input prompt in [invert()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.invert) to include "dog".
    * Swap the `prompt` and `negative_prompt` in the arguments to call the pipeline to generate the final edited image.
* The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the [DiffEdit](../../using-diffusers/diffedit) guide for more details.
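
The sketch below puts the reversed "dog -> cat" direction into code, following the tips above and reusing the checkpoint and loading pattern from the method examples further down; the input image path and the concrete prompts are illustrative assumptions:

```py
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()

# Illustrative input: any image of a dog works here.
init_image = load_image("path/to/dog.png").resize((768, 768))

# "dog -> cat": the source concept ("dog") goes to source_prompt, the target concept ("cat") to target_prompt.
mask_image = pipeline.generate_mask(
    image=init_image, source_prompt="a photo of a dog", target_prompt="a photo of a cat"
)

# Invert with a caption describing the overall (source) image.
image_latents = pipeline.invert(image=init_image, prompt="a photo of a dog").latents

# The source concept goes to negative_prompt and the target concept to prompt.
image = pipeline(
    prompt="a photo of a cat",
    negative_prompt="a photo of a dog",
    mask_image=mask_image,
    image_latents=image_latents,
).images[0]
```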

## StableDiffusionDiffEditPipeline[[diffusers.StableDiffusionDiffEditPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionDiffEditPipeline</name><anchor>diffusers.StableDiffusionDiffEditPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_diffedit/pipeline_stable_diffusion_diffedit.py#L244</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "inverse_scheduler", "val": ": DDIMInverseScheduler"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **inverse_scheduler** ([DDIMInverseScheduler](/docs/diffusers/main/en/api/schedulers/ddim_inverse#diffusers.DDIMInverseScheduler)) --
  A scheduler to be used in combination with `unet` to fill in the unmasked part of the input latents.
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

> [!WARNING]
> This is an experimental feature!

Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading and saving methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_mask</name><anchor>diffusers.StableDiffusionDiffEditPipeline.generate_mask</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_diffedit/pipeline_stable_diffusion_diffedit.py#L843</source><parameters>[{"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image] = None"}, {"name": "target_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "target_negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "target_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "target_negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "source_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "source_negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "source_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "source_negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "num_maps_per_mask", "val": ": typing.Optional[int] = 10"}, {"name": "mask_encode_strength", "val": ": typing.Optional[float] = 0.5"}, {"name": "mask_thresholding_ratio", "val": ": typing.Optional[float] = 3.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`) --
  `Image` or tensor representing an image batch to be used for computing the mask.
- **target_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide semantic mask generation. If not defined, you need to pass
  `prompt_embeds`.
- **target_negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **target_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **target_negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **source_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to
  pass `source_prompt_embeds` or `source_image` instead.
- **source_negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you
  need to pass `source_negative_prompt_embeds` or `source_image` instead.
- **source_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text
  inputs (prompt weighting). If not provided, text embeddings are generated from `source_prompt` input
  argument.
- **source_negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily
  tweak text inputs (prompt weighting). If not provided, text embeddings are generated from
  `source_negative_prompt` input argument.
- **num_maps_per_mask** (`int`, *optional*, defaults to 10) --
  The number of noise maps sampled to generate the semantic mask using DiffEdit.
- **mask_encode_strength** (`float`, *optional*, defaults to 0.5) --
  The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0
  and 1.
- **mask_thresholding_ratio** (`float`, *optional*, defaults to 3.0) --
  The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before
  mask binarization.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the
  [AttnProcessor](/docs/diffusers/main/en/api/attnprocessor#diffusers.models.attention_processor.AttnProcessor) as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>`List[PIL.Image.Image]` or `np.array`</rettype><retdesc>When returning a `List[PIL.Image.Image]`, the list consists of a batch of single-channel binary images
with dimensions `(height // self.vae_scale_factor, width // self.vae_scale_factor)`. If it's
`np.array`, the shape is `(batch_size, height // self.vae_scale_factor, width //
self.vae_scale_factor)`.</retdesc></docstring>

Generate a latent mask given a mask prompt, a target prompt, and an image.



<ExampleCodeBlock anchor="diffusers.StableDiffusionDiffEditPipeline.generate_mask.example">

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipeline.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipeline.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipeline(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>invert</name><anchor>diffusers.StableDiffusionDiffEditPipeline.invert</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_diffedit/pipeline_stable_diffusion_diffedit.py#L1062</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "inpaint_strength", "val": ": float = 0.8"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decode_latents", "val": ": bool = False"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "lambda_auto_corr", "val": ": float = 20.0"}, {"name": "lambda_kl", "val": ": float = 20.0"}, {"name": "num_reg_steps", "val": ": int = 0"}, {"name": "num_auto_corr_rolls", "val": ": int = 5"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`PIL.Image.Image`) --
  `Image` or tensor representing an image batch to produce the inverted latents guided by `prompt`.
- **inpaint_strength** (`float`, *optional*, defaults to 0.8) --
  Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When
  `inpaint_strength` is 1, the inversion process is run for the full number of iterations specified in
  `num_inference_steps`. `image` is used as a reference for the inversion process, and a higher
  `inpaint_strength` adds more noise to it. If `inpaint_strength` is 0, no inversion occurs.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **decode_latents** (`bool`, *optional*, defaults to `False`) --
  Whether or not to decode the inverted latents into a generated image. Setting this argument to `True`
  decodes all inverted latents for each timestep into a list of generated images.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.DiffEditInversionPipelineOutput` instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the
  [AttnProcessor](/docs/diffusers/main/en/api/attnprocessor#diffusers.models.attention_processor.AttnProcessor) as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **lambda_auto_corr** (`float`, *optional*, defaults to 20.0) --
  Lambda parameter to control auto correction.
- **lambda_kl** (`float`, *optional*, defaults to 20.0) --
  Lambda parameter to control Kullback-Leibler divergence output.
- **num_reg_steps** (`int`, *optional*, defaults to 0) --
  Number of regularization loss steps.
- **num_auto_corr_rolls** (`int`, *optional*, defaults to 5) --
  Number of auto correction roll steps.</paramsdesc><paramgroups>0</paramgroups><retdesc>`~pipelines.stable_diffusion.pipeline_stable_diffusion_diffedit.DiffEditInversionPipelineOutput` or
`tuple`:
If `return_dict` is `True`,
`~pipelines.stable_diffusion.pipeline_stable_diffusion_diffedit.DiffEditInversionPipelineOutput` is
returned, otherwise a `tuple` is returned where the first element is the inverted latents tensors
ordered by increasing noise, and the second is the corresponding decoded images if `decode_latents` is
`True`, otherwise `None`.</retdesc></docstring>

Generate inverted latents given a prompt and image.



<ExampleCodeBlock anchor="diffusers.StableDiffusionDiffEditPipeline.invert.example">

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()

>>> prompt = "A bowl of fruits"

>>> inverted_latents = pipeline.invert(image=init_image, prompt=prompt).latents
```

</ExampleCodeBlock>





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionDiffEditPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_diffedit/pipeline_stable_diffusion_diffedit.py#L1300</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "mask_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image] = None"}, {"name": "image_latents", "val": ": typing.Union[torch.Tensor, PIL.Image.Image] = None"}, {"name": "inpaint_strength", "val": ": typing.Optional[float] = 0.8"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **mask_image** (`PIL.Image.Image`) --
  `Image` or tensor representing an image batch to mask the generated image. White pixels in the mask are
  repainted, while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, 1, H, W)`.
- **image_latents** (`PIL.Image.Image` or `torch.Tensor`) --
  Partially noised image latents from the inversion process to be used as inputs for image generation.
- **inpaint_strength** (`float`, *optional*, defaults to 0.8) --
  Indicates extent to inpaint the masked area. Must be between 0 and 1. When `inpaint_strength` is 1, the
  denoising process is run on the masked area for the full number of iterations specified in
  `num_inference_steps`. `image_latents` is used as a reference for the masked area, and a higher
  `inpaint_strength` adds more noise to that area. If `inpaint_strength` is 0, no inpainting occurs.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionDiffEditPipeline.__call__.example">

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )

>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipeline.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipeline.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipeline(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionDiffEditPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_diffedit/pipeline_stable_diffusion_diffedit.py#L422</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
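
For illustration, here is a minimal, hedged sketch (not from the official docs) of pre-computing embeddings with `encode_prompt` and reusing them across calls. It assumes the `pipeline`, `mask_image`, and `image_latents` objects from the DiffEdit example above; the prompts and device handling are illustrative only.

```py
# Hedged sketch: pre-compute prompt embeddings once and pass them to the pipeline
# instead of raw strings. Assumes `pipeline`, `mask_image`, and `image_latents`
# were created as in the DiffEdit example above.
prompt_embeds, negative_prompt_embeds = pipeline.encode_prompt(
    prompt="A bowl of pears",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

image = pipeline(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    mask_image=mask_image,
    image_latents=image_latents,
).images[0]
```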




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.
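
A short, hedged usage sketch: with `return_dict=True` (the default) the pipelines on this page return this output class, while `return_dict=False` yields a plain tuple. The variable names below are illustrative and reuse the objects from the DiffEdit example above.

```py
# Hedged sketch: inspecting the StableDiffusionPipelineOutput returned by the DiffEdit
# example above (assumes `pipeline`, `prompt`, `mask_image`, and `image_latents` exist).
output = pipeline(prompt=prompt, mask_image=mask_image, image_latents=image_latents)
image = output.images[0]                # list of PIL images (or a NumPy array)
flagged = output.nsfw_content_detected  # list of bools, or None if safety checking was skipped

# With return_dict=False, a plain (images, nsfw_content_detected) tuple is returned instead.
images, nsfw_flags = pipeline(
    prompt=prompt, mask_image=mask_image, image_latents=image_latents, return_dict=False
)
```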




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/diffedit.md" />

### ControlNet with Hunyuan-DiT
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_hunyuandit.md

# ControlNet with Hunyuan-DiT

HunyuanDiTControlNetPipeline is an implementation of ControlNet for [Hunyuan-DiT](https://huggingface.co/papers/2405.08748).

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Hunyuan-DiT generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This code is implemented by Tencent Hunyuan Team. You can find pre-trained checkpoints for Hunyuan-DiT ControlNets on [Tencent Hunyuan](https://huggingface.co/Tencent-Hunyuan).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## HunyuanDiTControlNetPipeline[[diffusers.HunyuanDiTControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanDiTControlNetPipeline</name><anchor>diffusers.HunyuanDiTControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_hunyuandit/pipeline_hunyuandit_controlnet.py#L165</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": BertModel"}, {"name": "tokenizer", "val": ": BertTokenizer"}, {"name": "transformer", "val": ": HunyuanDiT2DModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_hunyuan.HunyuanDiT2DControlNetModel, typing.List[diffusers.models.controlnets.controlnet_hunyuan.HunyuanDiT2DControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet_hunyuan.HunyuanDiT2DControlNetModel], diffusers.models.controlnets.controlnet_hunyuan.HunyuanDiT2DMultiControlNetModel]"}, {"name": "text_encoder_2", "val": ": typing.Optional[transformers.models.t5.modeling_t5.T5EncoderModel] = None"}, {"name": "tokenizer_2", "val": ": typing.Optional[transformers.models.mt5.tokenization_mt5.MT5Tokenizer] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. We use
  `sdxl-vae-fp16-fix`.
- **text_encoder** (Optional[`~transformers.BertModel`, `~transformers.CLIPTextModel`]) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  HunyuanDiT uses a fine-tuned bilingual CLIP.
- **tokenizer** (Optional[`~transformers.BertTokenizer`, `~transformers.CLIPTokenizer`]) --
  A `BertTokenizer` or `CLIPTokenizer` to tokenize text.
- **transformer** ([HunyuanDiT2DModel](/docs/diffusers/main/en/api/models/hunyuan_transformer2d#diffusers.HunyuanDiT2DModel)) --
  The HunyuanDiT model designed by Tencent Hunyuan.
- **text_encoder_2** (`T5EncoderModel`) --
  The mT5 embedder. Specifically, it is 't5-v1_1-xxl'.
- **tokenizer_2** (`MT5Tokenizer`) --
  The tokenizer for the mT5 embedder.
- **scheduler** ([DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)) --
  A scheduler to be used in combination with HunyuanDiT to denoise the encoded image latents.
- **controlnet** ([HunyuanDiT2DControlNetModel](/docs/diffusers/main/en/api/models/controlnet_hunyuandit#diffusers.HunyuanDiT2DControlNetModel), `List[HunyuanDiT2DControlNetModel]` or `HunyuanDiT2DMultiControlNetModel`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for English/Chinese-to-image generation using HunyuanDiT.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

HunyuanDiT uses two text encoders: [mT5](https://huggingface.co/google/mt5-base) and a bilingual CLIP fine-tuned by
the Tencent Hunyuan team.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.HunyuanDiTControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_hunyuandit/pipeline_hunyuandit_controlnet.py#L634</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 5.0"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = (1024, 1024)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "use_resolution_binning", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`) --
  The height in pixels of the generated image.
- **width** (`int`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image defaults to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **prompt_embeds_2** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **negative_prompt_embeds_2** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **prompt_attention_mask_2** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds_2` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **negative_prompt_attention_mask_2** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds_2` is passed directly.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback_on_step_end** (`Callable[[int, int, Dict], None]`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A callback function or a list of callback functions to be called at the end of each denoising step.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  A list of tensor inputs that should be passed to the callback function. If not defined, all tensor
  inputs will be passed.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Rescale the predicted noise (`noise_cfg`) according to `guidance_rescale`. Based on the findings of [Common
  Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). See Section 3.4.
- **original_size** (`Tuple[int, int]`, *optional*, defaults to `(1024, 1024)`) --
  The original size of the image. Used to calculate the time ids.
- **target_size** (`Tuple[int, int]`, *optional*) --
  The target size of the image. Used to calculate the time ids.
- **crops_coords_top_left** (`Tuple[int, int]`, *optional*, defaults to `(0, 0)`) --
  The top left coordinates of the crop. Used to calculate the time ids.
- **use_resolution_binning** (`bool`, *optional*, defaults to `True`) --
  Whether to use resolution binning or not. If `True`, the input resolution will be mapped to the closest
  standard resolution. Supported resolutions are 1024x1024, 1280x1280, 1024x768, 1152x864, 1280x960,
  768x1024, 864x1152, 960x1280, 1280x768, and 768x1280. It is recommended to set this to `True`.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation with HunyuanDiT.



<ExampleCodeBlock anchor="diffusers.HunyuanDiTControlNetPipeline.__call__.example">

Examples:
```py
import torch
from diffusers import HunyuanDiT2DControlNetModel, HunyuanDiTControlNetPipeline
from diffusers.utils import load_image

controlnet = HunyuanDiT2DControlNetModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Canny", torch_dtype=torch.float16
)

pipe = HunyuanDiTControlNetPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

cond_image = load_image(
    "https://huggingface.co/Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Canny/resolve/main/canny.jpg?download=true"
)

# You may also use an English prompt, as HunyuanDiT supports both English and Chinese
prompt = "在夜晚的酒店门前，一座古老的中国风格的狮子雕像矗立着，它的眼睛闪烁着光芒，仿佛在守护着这座建筑。背景是夜晚的酒店前，构图方式是特写，平视，居中构图。这张照片呈现了真实摄影风格，蕴含了中国雕塑文化，同时展现了神秘氛围"
# prompt="At night, an ancient Chinese-style lion statue stands in front of the hotel, its eyes gleaming as if guarding the building. The background is the hotel entrance at night, with a close-up, eye-level, and centered composition. This photo presents a realistic photographic style, embodies Chinese sculpture culture, and reveals a mysterious atmosphere."
image = pipe(
    prompt,
    height=1024,
    width=1024,
    control_image=cond_image,
    num_inference_steps=50,
).images[0]
```

</ExampleCodeBlock>
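
As the `controlnet` and `controlnet_conditioning_scale` parameter descriptions above note, several ControlNets can be combined by passing them as a list, with a matching list of per-ControlNet scales. The sketch below is illustrative only: the depth checkpoint name and the conditioning images (`canny_image`, `depth_image`) are assumptions, not verified references.

```py
# Hedged sketch: combining two HunyuanDiT ControlNets. The depth checkpoint name and the
# conditioning images are placeholders; `prompt` is assumed from the example above.
import torch
from diffusers import HunyuanDiT2DControlNetModel, HunyuanDiTControlNetPipeline

controlnet_canny = HunyuanDiT2DControlNetModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Canny", torch_dtype=torch.float16
)
controlnet_depth = HunyuanDiT2DControlNetModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Depth", torch_dtype=torch.float16
)

pipe = HunyuanDiTControlNetPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers",
    controlnet=[controlnet_canny, controlnet_depth],
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt,
    control_image=[canny_image, depth_image],  # one conditioning image per ControlNet
    controlnet_conditioning_scale=[1.0, 0.6],  # per-ControlNet scales
    num_inference_steps=50,
).images[0]
```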







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.HunyuanDiTControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_hunyuandit/pipeline_hunyuandit_controlnet.py#L278</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "device", "val": ": device = None"}, {"name": "dtype", "val": ": dtype = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": typing.Optional[int] = None"}, {"name": "text_encoder_index", "val": ": int = 0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **dtype** (`torch.dtype`) --
  torch dtype
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **max_sequence_length** (`int`, *optional*) -- maximum sequence length to use for the prompt.
- **text_encoder_index** (`int`, *optional*) --
  Index of the text encoder to use. `0` for clip and `1` for T5.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
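
Below is a hedged sketch of calling `encode_prompt` directly for each of the two text encoders via `text_encoder_index` (`0` for the bilingual CLIP, `1` for mT5). The four-value unpacking assumes the method returns the positive/negative embeddings and their attention masks, which should be verified against the source; the prompt, device, and sequence length are illustrative.

```py
# Hedged sketch: encode the same prompt with both HunyuanDiT text encoders.
# Assumes `pipe` is the HunyuanDiTControlNetPipeline loaded in the example above.
import torch

clip_embeds, clip_neg_embeds, clip_mask, clip_neg_mask = pipe.encode_prompt(
    prompt="一只可爱的猫",  # "a cute cat"
    device="cuda",
    dtype=torch.float16,
    text_encoder_index=0,  # bilingual CLIP
)
t5_embeds, t5_neg_embeds, t5_mask, t5_neg_mask = pipe.encode_prompt(
    prompt="一只可爱的猫",
    device="cuda",
    dtype=torch.float16,
    text_encoder_index=1,  # mT5
    max_sequence_length=256,
)
```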




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_hunyuandit.md" />

### LEDITS++
https://huggingface.co/docs/diffusers/main/api/pipelines/ledits_pp.md

# LEDITS++

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

LEDITS++ was proposed in [LEDITS++: Limitless Image Editing using Text-to-Image Models](https://huggingface.co/papers/2311.16711) by Manuel Brack, Felix Friedrich, Katharina Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, Apolinário Passos.

The abstract from the paper is:

*Text-to-image diffusion models have recently received increasing interest for their astonishing ability to produce high-fidelity images from solely text inputs. Subsequent research efforts aim to exploit and apply their capabilities to real image editing. However, existing image-to-image methods are often inefficient, imprecise, and of limited versatility. They either require time-consuming fine-tuning, deviate unnecessarily strongly from the input image, and/or lack support for multiple, simultaneous edits. To address these issues, we introduce LEDITS++, an efficient yet versatile and precise textual image manipulation technique. LEDITS++'s novel inversion approach requires no tuning nor optimization and produces high-fidelity results with a few diffusion steps. Second, our methodology supports multiple simultaneous edits and is architecture-agnostic. Third, we use a novel implicit masking technique that limits changes to relevant image regions. We propose the novel TEdBench++ benchmark as part of our exhaustive evaluation. Our results demonstrate the capabilities of LEDITS++ and its improvements over previous methods. The project page is available at https://leditsplusplus-project.static.hf.space .*

> [!TIP]
> You can find additional information about LEDITS++ on the [project page](https://leditsplusplus-project.static.hf.space/index.html) and try it out in a [demo](https://huggingface.co/spaces/editing-images/leditsplusplus).

> [!WARNING]
> Due to some backward compatibility issues with the current diffusers implementation of [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) this implementation of LEdits++ can no longer guarantee perfect inversion.
> This issue is unlikely to have any noticeable effects on applied use-cases. However, we provide an alternative implementation that guarantees perfect inversion in a dedicated [GitHub repo](https://github.com/ml-research/ledits_pp).

We provide two distinct pipelines based on different pre-trained models.

## LEditsPPPipelineStableDiffusion[[diffusers.LEditsPPPipelineStableDiffusion]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LEditsPPPipelineStableDiffusion</name><anchor>diffusers.LEditsPPPipelineStableDiffusion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L269</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) or [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) or [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler). If any other scheduler is passed it will
  automatically be set to [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  Model that extracts features from generated images to be used as inputs for the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for textual image editing using LEDits++ with Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) and builds on the [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline). Check the superclass
documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular
device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L773</source><parameters>[{"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "editing_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "editing_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "reverse_editing_direction", "val": ": typing.Union[bool, typing.List[bool], NoneType] = False"}, {"name": "edit_guidance_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = 5"}, {"name": "edit_warmup_steps", "val": ": typing.Union[int, typing.List[int], NoneType] = 0"}, {"name": "edit_cooldown_steps", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "edit_threshold", "val": ": typing.Union[float, typing.List[float], NoneType] = 0.9"}, {"name": "user_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "sem_guidance", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "use_cross_attn_mask", "val": ": bool = False"}, {"name": "use_intersect_mask", "val": ": bool = True"}, {"name": "attn_store_steps", "val": ": typing.Optional[typing.List[int]] = []"}, {"name": "store_averaged_over_steps", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **generator** (`torch.Generator`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [LEditsPPDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPDiffusionPipelineOutput) instead of a plain
  tuple.
- **editing_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. The image is reconstructed by setting
  `editing_prompt = None`. Guidance direction of prompt should be specified via
  `reverse_editing_direction`.
- **editing_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-computed embeddings to use for guiding the image generation. Guidance direction of embedding should
  be specified via `reverse_editing_direction`.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **reverse_editing_direction** (`bool` or `List[bool]`, *optional*, defaults to `False`) --
  Whether the corresponding prompt in `editing_prompt` should be increased or decreased.
- **edit_guidance_scale** (`float` or `List[float]`, *optional*, defaults to 5) --
  Guidance scale for guiding the image generation. If provided as list values should correspond to
  `editing_prompt`. `edit_guidance_scale` is defined as `s_e` of equation 12 of the [LEDITS++
  paper](https://huggingface.co/papers/2311.16711).
- **edit_warmup_steps** (`int` or `List[int]`, *optional*, defaults to 0) --
  Number of diffusion steps (for each prompt) for which guidance will not be applied.
- **edit_cooldown_steps** (`int` or `List[int]`, *optional*, defaults to `None`) --
  Number of diffusion steps (for each prompt) after which guidance will no longer be applied.
- **edit_threshold** (`float` or `List[float]`, *optional*, defaults to 0.9) --
  Masking threshold of guidance. The threshold should be proportional to the image region that is modified.
  `edit_threshold` is defined as `λ` of equation 12 of the [LEDITS++
  paper](https://huggingface.co/papers/2311.16711).
- **user_mask** (`torch.Tensor`, *optional*) --
  User-provided mask for even better control over the editing process. This is helpful when LEDITS++'s
  implicit masks do not meet user preferences.
- **sem_guidance** (`List[torch.Tensor]`, *optional*) --
  List of pre-generated guidance vectors to be applied at generation. Length of the list has to
  correspond to `num_inference_steps`.
- **use_cross_attn_mask** (`bool`, defaults to `False`) --
  Whether cross-attention masks are used. Cross-attention masks are always used when `use_intersect_mask`
  is set to `True`. Cross-attention masks are defined as `M^1` of equation 12 of the [LEDITS++
  paper](https://huggingface.co/papers/2311.16711).
- **use_intersect_mask** (`bool`, defaults to `True`) --
  Whether the masking term is calculated as the intersection of cross-attention masks and masks derived from
  the noise estimate. Cross-attention masks are defined as `M^1` and masks derived from the noise estimate
  are defined as `M^2` of equation 12 of the [LEDITS++ paper](https://huggingface.co/papers/2311.16711).
- **attn_store_steps** (`List[int]`, *optional*) --
  Steps for which the attention maps are stored in the AttentionStore. Just for visualization purposes.
- **store_averaged_over_steps** (`bool`, defaults to `True`) --
  Whether the attention maps for the `attn_store_steps` are stored averaged over the diffusion steps. If
  `False`, attention maps for each step are stored separately. Just for visualization purposes.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that calls at the end of each denoising steps during the inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[LEditsPPDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPDiffusionPipelineOutput) or `tuple`</rettype><retdesc>[LEditsPPDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPDiffusionPipelineOutput) if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images, and the second element is a list
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
content, according to the `safety_checker`.</retdesc></docstring>

The call function to the pipeline for editing. The
[invert()](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.LEditsPPPipelineStableDiffusion.invert) method has to be called beforehand. Edits will
always be performed for the last inverted image(s).



<ExampleCodeBlock anchor="diffusers.LEditsPPPipelineStableDiffusion.__call__.example">

Examples:
```py
>>> import torch

>>> from diffusers import LEditsPPPipelineStableDiffusion
>>> from diffusers.utils import load_image

>>> pipe = LEditsPPPipelineStableDiffusion.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_vae_tiling()
>>> pipe = pipe.to("cuda")

>>> img_url = "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/cherry_blossom.png"
>>> image = load_image(img_url).resize((512, 512))

>>> _ = pipe.invert(image=image, num_inversion_steps=50, skip=0.1)

>>> edited_image = pipe(
...     editing_prompt=["cherry blossom"], edit_guidance_scale=10.0, edit_threshold=0.75
... ).images[0]
```

</ExampleCodeBlock>
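
As the parameter descriptions above note, several edits can be applied simultaneously by passing lists to `editing_prompt` and the related arguments. The sketch below is illustrative only and reuses the `pipe` and inverted image from the example above; the concrete prompts, scales, and thresholds are assumptions.

```py
# Hedged sketch: two simultaneous edits. reverse_editing_direction=True removes a concept,
# False adds/strengthens it; list entries correspond one-to-one to `editing_prompt`.
edited_image = pipe(
    editing_prompt=["cherry blossom", "overcast sky"],
    reverse_editing_direction=[False, True],
    edit_guidance_scale=[10.0, 5.0],
    edit_warmup_steps=[2, 2],
    edit_threshold=[0.75, 0.9],
).images[0]
```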







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>invert</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.invert</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L1277</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "source_prompt", "val": ": str = ''"}, {"name": "source_guidance_scale", "val": ": float = 3.5"}, {"name": "num_inversion_steps", "val": ": int = 30"}, {"name": "skip", "val": ": float = 0.15"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "resize_mode", "val": ": typing.Optional[str] = 'default'"}, {"name": "crops_coords", "val": ": typing.Optional[typing.Tuple[int, int, int, int]] = None"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  Input for the image(s) that are to be edited. Multiple input images have to share the same aspect
  ratio.
- **source_prompt** (`str`, defaults to `""`) --
  Prompt describing the input image that will be used for guidance during inversion. Guidance is disabled
  if the `source_prompt` is `""`.
- **source_guidance_scale** (`float`, defaults to `3.5`) --
  Strength of guidance during inversion.
- **num_inversion_steps** (`int`, defaults to `30`) --
  Number of total performed inversion steps after discarding the initial `skip` steps.
- **skip** (`float`, defaults to `0.15`) --
  Portion of initial steps that will be ignored for inversion and subsequent generation. Lower values
  will lead to stronger changes to the input image. `skip` has to be between `0` and `1`.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make inversion
  deterministic.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **height** (`int`, *optional*, defaults to `None`) --
  The height of the preprocessed image. If `None`, `get_default_height_width()` is used to get the default
  height.
- **width** (`int`, *optional*, defaults to `None`) --
  The width of the preprocessed image. If `None`, `get_default_height_width()` is used to get the default
  width.
- **resize_mode** (`str`, *optional*, defaults to `default`) --
  The resize mode; can be one of `default` or `fill`. If `default`, the image is resized to fit within the
  specified width and height, which may not maintain the original aspect ratio. If `fill`, the image is
  resized to fit within the specified width and height while maintaining the aspect ratio, and then centered
  within the dimensions, with the empty space filled with data from the image. If `crop`, the image is
  resized to fit within the specified width and height while maintaining the aspect ratio, and then centered
  within the dimensions, cropping the excess. Note that the `fill` and `crop` resize modes are only
  supported for PIL image input.
- **crops_coords** (`List[Tuple[int, int, int, int]]`, *optional*, defaults to `None`) --
  The crop coordinates for each image in the batch. If `None`, will not crop the image.</paramsdesc><paramgroups>0</paramgroups><rettype>[LEditsPPInversionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPInversionPipelineOutput)</rettype><retdesc>Output will contain the resized input image(s)
and respective VAE reconstruction(s).</retdesc></docstring>

The function of the pipeline for image inversion as described in the [LEDITS++
paper](https://huggingface.co/papers/2311.16711). If the scheduler is set to [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), the
inversion proposed by [edit-friendly DDPM](https://huggingface.co/papers/2304.06140) will be performed instead.
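
A hedged sketch of a guided inversion call follows; the source prompt and parameter values are illustrative, and `image` is assumed to have been loaded as in the example further up.

```py
# Hedged sketch: guided inversion. A non-empty source_prompt enables guidance during
# inversion, and lower `skip` values allow stronger edits at the cost of fidelity.
_ = pipe.invert(
    image=image,
    source_prompt="a photo of a cherry blossom tree",  # illustrative description of the input
    source_guidance_scale=3.5,
    num_inversion_steps=50,
    skip=0.2,
)

# Subsequent pipe(...) calls edit the most recently inverted image.
```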








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L733</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L760</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L720</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L746</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
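
The four VAE helpers above can be toggled around memory-heavy inversion and editing runs. This is a minimal sketch assuming `pipe` is the LEDITS++ pipeline loaded in the example above.

```py
# Hedged sketch: reduce VAE memory use for large images or batches.
pipe.enable_vae_tiling()    # encode/decode in tiles
pipe.enable_vae_slicing()   # decode the batch one slice at a time

# ... run pipe.invert(...) and pipe(...) here ...

pipe.disable_vae_tiling()   # back to single-pass decoding
pipe.disable_vae_slicing()
```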


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LEditsPPPipelineStableDiffusion.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion.py#L521</source><parameters>[{"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "enable_edit_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "editing_prompt", "val": " = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "editing_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **enable_edit_guidance** (`bool`) --
  whether to perform any editing or reconstruct the input image instead
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **editing_prompt** (`str` or `List[str]`, *optional*) --
  Editing prompt(s) to be encoded. If not defined, one has to pass `editing_prompt_embeds` instead.
- **editing_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## LEditsPPPipelineStableDiffusionXL[[diffusers.LEditsPPPipelineStableDiffusionXL]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LEditsPPPipelineStableDiffusionXL</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L275</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler]"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) or [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) or [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler). If any other scheduler is passed it will
  automatically be set to [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be forced to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for textual image editing using LEDits++ with Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) and builds on the [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline). Check the
superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a
particular device, etc.).

In addition the pipeline inherits the following loading methods:
- *LoRA*: [LEditsPPPipelineStableDiffusionXL.load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights)
- *Ckpt*: [loaders.FromSingleFileMixin.from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file)

as well as the following saving methods:
- *LoRA*: `loaders.StableDiffusionXLPipeline.save_lora_weights`
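
Since the pipeline inherits the SDXL LoRA loader, adapters can be attached before inversion and editing. The sketch below is illustrative; the LoRA repository id and weight file name are placeholders, not real references.

```py
# Hedged sketch: loading the SDXL-based LEDITS++ pipeline and attaching a LoRA adapter.
# "your-username/your-sdxl-lora" and the weight file name are placeholders.
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL

pipe_xl = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe_xl.load_lora_weights("your-username/your-sdxl-lora", weight_name="pytorch_lora_weights.safetensors")
```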





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L847</source><parameters>[{"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "editing_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "editing_prompt_embeddings", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "editing_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "reverse_editing_direction", "val": ": typing.Union[bool, typing.List[bool], NoneType] = False"}, {"name": "edit_guidance_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = 5"}, {"name": "edit_warmup_steps", "val": ": typing.Union[int, typing.List[int], NoneType] = 0"}, {"name": "edit_cooldown_steps", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "edit_threshold", "val": ": typing.Union[float, typing.List[float], NoneType] = 0.9"}, {"name": "sem_guidance", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "use_cross_attn_mask", "val": ": bool = False"}, {"name": "use_intersect_mask", "val": ": bool = False"}, {"name": "user_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attn_store_steps", "val": ": typing.Optional[typing.List[int]] = []"}, {"name": "store_averaged_over_steps", "val": ": bool = True"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead
  of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). `guidance_rescale` is defined as `φ` in equation 16 of
  [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **editing_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. The image is reconstructed by setting
  `editing_prompt = None`. Guidance direction of prompt should be specified via
  `reverse_editing_direction`.
- **editing_prompt_embeddings** (`torch.Tensor`, *optional*) --
  Pre-generated edit text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, editing_prompt_embeddings will be generated from `editing_prompt` input argument.
- **editing_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled edit text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, editing_pooled_prompt_embeds will be generated from `editing_prompt` input
  argument.
- **reverse_editing_direction** (`bool` or `List[bool]`, *optional*, defaults to `False`) --
  Whether the corresponding prompt in `editing_prompt` should be increased or decreased.
- **edit_guidance_scale** (`float` or `List[float]`, *optional*, defaults to 5) --
  Guidance scale for guiding the image generation. If provided as a list, values should correspond to
  `editing_prompt`. `edit_guidance_scale` is defined as `s_e` of equation 12 of [LEDITS++
  Paper](https://huggingface.co/papers/2301.12247).
- **edit_warmup_steps** (`int` or `List[int]`, *optional*, defaults to 0) --
  Number of diffusion steps (for each prompt) for which guidance is not applied.
- **edit_cooldown_steps** (`int` or `List[int]`, *optional*, defaults to `None`) --
  Number of diffusion steps (for each prompt) after which guidance is no longer applied.
- **edit_threshold** (`float` or `List[float]`, *optional*, defaults to 0.9) --
  Masking threshold of guidance. Threshold should be proportional to the image region that is modified.
  `edit_threshold` is defined as `λ` of equation 12 of [LEDITS++
  Paper](https://huggingface.co/papers/2301.12247).
- **sem_guidance** (`List[torch.Tensor]`, *optional*) --
  List of pre-generated guidance vectors to be applied at generation. Length of the list has to
  correspond to `num_inference_steps`.
- **use_cross_attn_mask** (`bool`, *optional*, defaults to `False`) --
  Whether cross-attention masks are used. Cross-attention masks are always used when `use_intersect_mask`
  is set to `True`. Cross-attention masks are defined as `M^1` of equation 12 of [LEDITS++
  paper](https://huggingface.co/papers/2311.16711).
- **use_intersect_mask** (`bool`, *optional*, defaults to `False`) --
  Whether the masking term is calculated as intersection of cross-attention masks and masks derived from
  the noise estimate. Cross-attention masks are defined as `M^1` and masks derived from the noise estimate
  are defined as `M^2` of equation 12 of [LEDITS++ paper](https://huggingface.co/papers/2311.16711).
- **user_mask** (`torch.Tensor`, *optional*) --
  User-provided mask for even better control over the editing process. This is helpful when LEDITS++'s
  implicit masks do not meet user preferences.
- **attn_store_steps** (`List[int]`, *optional*) --
  Steps for which the attention maps are stored in the AttentionStore. Just for visualization purposes.
- **store_averaged_over_steps** (`bool`, *optional*, defaults to `True`) --
  Whether the attention maps for the `attn_store_steps` are stored averaged over the diffusion steps. If
  `False`, attention maps for each step are stored separately. Just for visualization purposes.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[LEditsPPDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPDiffusionPipelineOutput) or `tuple`</rettype><retdesc>[LEditsPPDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPDiffusionPipelineOutput) if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for editing. The
[invert()](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.LEditsPPPipelineStableDiffusionXL.invert) method has to be called beforehand. Edits
will always be performed for the last inverted image(s).



<ExampleCodeBlock anchor="diffusers.LEditsPPPipelineStableDiffusionXL.__call__.example">

Examples:
```py
>>> import torch

>>> from diffusers import LEditsPPPipelineStableDiffusionXL
>>> from diffusers.utils import load_image

>>> pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_vae_tiling()
>>> pipe = pipe.to("cuda")

>>> img_url = "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/tennis.jpg"
>>> image = load_image(img_url).resize((1024, 1024))

>>> _ = pipe.invert(image=image, num_inversion_steps=50, skip=0.2)

>>> edited_image = pipe(
...     editing_prompt=["tennis ball", "tomato"],
...     reverse_editing_direction=[True, False],
...     edit_guidance_scale=[5.0, 10.0],
...     edit_threshold=[0.9, 0.85],
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>invert</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.invert</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L1486</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "source_prompt", "val": ": str = ''"}, {"name": "source_guidance_scale", "val": " = 3.5"}, {"name": "negative_prompt", "val": ": str = None"}, {"name": "negative_prompt_2", "val": ": str = None"}, {"name": "num_inversion_steps", "val": ": int = 50"}, {"name": "skip", "val": ": float = 0.15"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "num_zero_noise_steps", "val": ": int = 3"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "resize_mode", "val": ": typing.Optional[str] = 'default'"}, {"name": "crops_coords", "val": ": typing.Optional[typing.Tuple[int, int, int, int]] = None"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  Input for the image(s) that are to be edited. Multiple input images have to share the same aspect
  ratio.
- **source_prompt** (`str`, defaults to `""`) --
  Prompt describing the input image that will be used for guidance during inversion. Guidance is disabled
  if the `source_prompt` is `""`.
- **source_guidance_scale** (`float`, defaults to `3.5`) --
  Strength of guidance during inversion.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_inversion_steps** (`int`, defaults to `50`) --
  Number of total performed inversion steps after discarding the initial `skip` steps.
- **skip** (`float`, defaults to `0.15`) --
  Portion of initial steps that will be ignored for inversion and subsequent generation. Lower values
  will lead to stronger changes to the input image. `skip` has to be between `0` and `1`.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make inversion
  deterministic.
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **num_zero_noise_steps** (`int`, defaults to `3`) --
  Number of final diffusion steps that will not renoise the current image. If no steps are set to zero,
  SD-XL in combination with [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) will produce noise artifacts.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>[LEditsPPInversionPipelineOutput](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.pipelines.LEditsPPInversionPipelineOutput)</rettype><retdesc>Output will contain the resized input image(s)
and respective VAE reconstruction(s).</retdesc></docstring>

The function to invoke the pipeline for image inversion as described by the [LEDITS++
Paper](https://huggingface.co/papers/2301.12247). If the scheduler is set to [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) the
inversion proposed by [edit-friendly DDPM](https://huggingface.co/papers/2304.06140) will be performed instead.
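
As a rough sketch of the typical workflow (the `source_prompt`, `skip`, and editing values below are illustrative choices, not recommendations), the inversion is run once and subsequent editing calls reuse its result:

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL
from diffusers.utils import load_image

pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = load_image(
    "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/tennis.jpg"
).resize((1024, 1024))

# Invert first; a non-empty source_prompt enables guidance during inversion.
_ = pipe.invert(
    image=image,
    source_prompt="a tennis ball on the ground",
    source_guidance_scale=3.5,
    num_inversion_steps=50,
    skip=0.15,
)

# Edits are always applied to the most recently inverted image(s).
edited_image = pipe(editing_prompt=["tomato"], edit_guidance_scale=10.0).images[0]
```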








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L782</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L809</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L769</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L795</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
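
A minimal usage sketch (the checkpoint is simply the one used in the examples above):

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL

pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Tiled decoding lowers peak memory for large images at a small speed cost.
pipe.enable_vae_tiling()
# ... run invert() and the editing call here ...
pipe.disable_vae_tiling()
```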


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L402</source><parameters>[{"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "enable_edit_guidance", "val": ": bool = True"}, {"name": "editing_prompt", "val": ": typing.Optional[str] = None"}, {"name": "editing_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "editing_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead.
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **enable_edit_guidance** (`bool`) --
  Whether to guide towards an editing prompt or not.
- **editing_prompt** (`str` or `List[str]`, *optional*) --
  Editing prompt(s) to be encoded. If not defined and 'enable_edit_guidance' is True, one has to pass
  `editing_prompt_embeds` instead.
- **editing_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated edit text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided and 'enable_edit_guidance' is True, editing_prompt_embeds will be generated from
  `editing_prompt` input argument.
- **editing_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated edit pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled editing_pooled_prompt_embeds will be generated from `editing_prompt`
  input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.LEditsPPPipelineStableDiffusionXL.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py#L708</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
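
A small sketch of calling this helper directly (the guidance-scale values are arbitrary illustrative numbers; the expected output shape follows from the parameter description above):

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusionXL

pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# One embedding vector is produced per guidance-scale value in `w`.
w = torch.tensor([5.0, 7.5])
emb = pipe.get_guidance_scale_embedding(w, embedding_dim=512)
print(emb.shape)  # expected: torch.Size([2, 512])
```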








</div></div>

## LEditsPPDiffusionPipelineOutput[[diffusers.pipelines.LEditsPPDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.LEditsPPDiffusionPipelineOutput</name><anchor>diffusers.pipelines.LEditsPPDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for LEdits++ Diffusion pipelines.




</div>

## LEditsPPInversionPipelineOutput[[diffusers.pipelines.LEditsPPInversionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.LEditsPPInversionPipelineOutput</name><anchor>diffusers.pipelines.LEditsPPInversionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ledits_pp/pipeline_output.py#L29</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "vae_reconstruction_images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **input_images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of the cropped and resized input images as PIL images of length `batch_size` or NumPy array of shape `
  (batch_size, height, width, num_channels)`.
- **vae_reconstruction_images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of VAE reconstruction of all input images as PIL images of length `batch_size` or NumPy array of shape
  ` (batch_size, height, width, num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for LEdits++ Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/ledits_pp.md" />

### Cosmos
https://huggingface.co/docs/diffusers/main/api/pipelines/cosmos.md


# Cosmos

[Cosmos World Foundation Model Platform for Physical AI](https://huggingface.co/papers/2501.03575) by NVIDIA.

*Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained world foundation models, examples of post-training of pre-trained world foundation models, and video tokenizers. To help Physical AI builders solve the most critical problems of our society, we make our platform open-source and our models open-weight with permissive licenses available via https://github.com/NVIDIA/Cosmos.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Loading original format checkpoints

Original-format checkpoints that have not been converted to the Diffusers-expected format can be loaded with the `from_single_file` method.

```python
import torch
from diffusers import Cosmos2TextToImagePipeline, CosmosTransformer3DModel

model_id = "nvidia/Cosmos-Predict2-2B-Text2Image"
transformer = CosmosTransformer3DModel.from_single_file(
    "https://huggingface.co/nvidia/Cosmos-Predict2-2B-Text2Image/blob/main/model.pt",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe = Cosmos2TextToImagePipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A close-up shot captures a vibrant yellow scrubber vigorously working on a grimy plate, its bristles moving in circular motions to lift stubborn grease and food residue. The dish, once covered in remnants of a hearty meal, gradually reveals its original glossy surface. Suds form and bubble around the scrubber, creating a satisfying visual of cleanliness in progress. The sound of scrubbing fills the air, accompanied by the gentle clinking of the dish against the sink. As the scrubber continues its task, the dish transforms, gleaming under the bright kitchen lights, symbolizing the triumph of cleanliness over mess."
negative_prompt = "The video captures a series of frames showing ugly scenes, static with no motion, motion blur, over-saturation, shaky footage, low resolution, grainy texture, pixelated images, poorly lit areas, underexposed and overexposed scenes, poor color balance, washed out colors, choppy sequences, jerky movements, low frame rate, artifacting, color banding, unnatural transitions, outdated special effects, fake elements, unconvincing visuals, poorly edited content, jump cuts, visual noise, and flickering. Overall, the video is of poor quality."

output = pipe(
    prompt=prompt, negative_prompt=negative_prompt, generator=torch.Generator().manual_seed(1)
).images[0]
output.save("output.png")
```

## CosmosTextToWorldPipeline[[diffusers.CosmosTextToWorldPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CosmosTextToWorldPipeline</name><anchor>diffusers.CosmosTextToWorldPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos_text2world.py#L132</source><parameters>[{"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": CosmosTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLCosmos"}, {"name": "scheduler", "val": ": EDMEulerScheduler"}, {"name": "safety_checker", "val": ": CosmosSafetyChecker = None"}]</parameters><paramsdesc>- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. Cosmos uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-11b](https://huggingface.co/google-t5/t5-11b) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CosmosTransformer3DModel](/docs/diffusers/main/en/api/models/cosmos_transformer3d#diffusers.CosmosTransformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** (`EDMEulerScheduler`) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLCosmos](/docs/diffusers/main/en/api/models/autoencoderkl_cosmos#diffusers.AutoencoderKLCosmos)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-world generation using [Cosmos Predict1](https://github.com/nvidia-cosmos/cosmos-predict1).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CosmosTextToWorldPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos_text2world.py#L393</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 704"}, {"name": "width", "val": ": int = 1280"}, {"name": "num_frames", "val": ": int = 121"}, {"name": "num_inference_steps", "val": ": int = 36"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "fps", "val": ": int = 30"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, defaults to `704`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `1280`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `121`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `36`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `7.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`.
- **fps** (`int`, defaults to `30`) --
  The frames per second of the generated video.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CosmosPipelineOutput` instead of a plain tuple.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~CosmosPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `CosmosPipelineOutput` is returned, otherwise a `tuple` is returned where
the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CosmosTextToWorldPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import CosmosTextToWorldPipeline
>>> from diffusers.utils import export_to_video

>>> model_id = "nvidia/Cosmos-1.0-Diffusion-7B-Text2World"
>>> pipe = CosmosTextToWorldPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of field that keeps the focus on the robot while subtly blurring the background for a cinematic effect."

>>> output = pipe(prompt=prompt).frames[0]
>>> export_to_video(output, "output.mp4", fps=30)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CosmosTextToWorldPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos_text2world.py#L231</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype

Encodes the prompt into text encoder hidden states.
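
A minimal sketch of pre-computing embeddings and reusing them (this assumes `encode_prompt` returns the `(prompt_embeds, negative_prompt_embeds)` pair, as is common for Diffusers pipelines):

```python
import torch
from diffusers import CosmosTextToWorldPipeline
from diffusers.utils import export_to_video

pipe = CosmosTextToWorldPipeline.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", torch_dtype=torch.bfloat16
).to("cuda")

# Encode the prompts once ...
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A robot arm stacking boxes in a warehouse",
    negative_prompt="low quality, blurry",
    do_classifier_free_guidance=True,
    device=torch.device("cuda"),
)

# ... then reuse the embeddings and skip the text encoder on later calls.
video = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds).frames[0]
export_to_video(video, "output.mp4", fps=30)
```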




</div></div>

## CosmosVideoToWorldPipeline[[diffusers.CosmosVideoToWorldPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CosmosVideoToWorldPipeline</name><anchor>diffusers.CosmosVideoToWorldPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos_video2world.py#L175</source><parameters>[{"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": CosmosTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLCosmos"}, {"name": "scheduler", "val": ": EDMEulerScheduler"}, {"name": "safety_checker", "val": ": CosmosSafetyChecker = None"}]</parameters><paramsdesc>- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. Cosmos uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-11b](https://huggingface.co/google-t5/t5-11b) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CosmosTransformer3DModel](/docs/diffusers/main/en/api/models/cosmos_transformer3d#diffusers.CosmosTransformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** (`EDMEulerScheduler`) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLCosmos](/docs/diffusers/main/en/api/models/autoencoderkl_cosmos#diffusers.AutoencoderKLCosmos)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-world and video-to-world generation using [Cosmos
Predict-1](https://github.com/nvidia-cosmos/cosmos-predict1).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CosmosVideoToWorldPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos_video2world.py#L505</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "video", "val": ": typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 704"}, {"name": "width", "val": ": int = 1280"}, {"name": "num_frames", "val": ": int = 121"}, {"name": "num_inference_steps", "val": ": int = 36"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "input_frames_guidance", "val": ": bool = False"}, {"name": "augment_sigma", "val": ": float = 0.001"}, {"name": "fps", "val": ": int = 30"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, defaults to `704`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `1280`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `121`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `36`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `7.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`.
- **fps** (`int`, defaults to `30`) --
  The frames per second of the generated video.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CosmosPipelineOutput` instead of a plain tuple.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~CosmosPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `CosmosPipelineOutput` is returned, otherwise a `tuple` is returned where
the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



Examples:
<ExampleCodeBlock anchor="diffusers.CosmosVideoToWorldPipeline.__call__.example">

Image conditioning:

```python
>>> import torch
>>> from diffusers import CosmosVideoToWorldPipeline
>>> from diffusers.utils import export_to_video, load_image

>>> model_id = "nvidia/Cosmos-1.0-Diffusion-7B-Video2World"
>>> pipe = CosmosVideoToWorldPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "The video depicts a long, straight highway stretching into the distance, flanked by metal guardrails. The road is divided into multiple lanes, with a few vehicles visible in the far distance. The surrounding landscape features dry, grassy fields on one side and rolling hills on the other. The sky is mostly clear with a few scattered clouds, suggesting a bright, sunny day."
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input.jpg"
... )

>>> video = pipe(image=image, prompt=prompt).frames[0]
>>> export_to_video(video, "output.mp4", fps=30)
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="diffusers.CosmosVideoToWorldPipeline.__call__.example-2">

Video conditioning:

```python
>>> import torch
>>> from diffusers import CosmosVideoToWorldPipeline
>>> from diffusers.utils import export_to_video, load_video

>>> model_id = "nvidia/Cosmos-1.0-Diffusion-7B-Video2World"
>>> pipe = CosmosVideoToWorldPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
>>> pipe.transformer = torch.compile(pipe.transformer)
>>> pipe.to("cuda")

>>> prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
>>> video = load_video(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
... )[
...     :21
... ]  # This example uses only the first 21 frames

>>> video = pipe(video=video, prompt=prompt).frames[0]
>>> export_to_video(video, "output.mp4", fps=30)
```

</ExampleCodeBlock>
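
A minimal sketch of a step-end callback (not part of the official examples) that inspects the `latents` tensor exposed through `callback_on_step_end_tensor_inputs`:

```python
import torch
from diffusers import CosmosVideoToWorldPipeline
from diffusers.utils import export_to_video, load_video

pipe = CosmosVideoToWorldPipeline.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Video2World", torch_dtype=torch.bfloat16
).to("cuda")

video = load_video(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
)[:21]


def print_latent_stats(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(f"step {step}: latents mean {latents.mean().item():.4f}")
    return callback_kwargs


output = pipe(
    video=video,
    prompt="A vehicle driving along a snowy mountain road.",
    callback_on_step_end=print_latent_stats,
    callback_on_step_end_tensor_inputs=["latents"],
).frames[0]
export_to_video(output, "output.mp4", fps=30)
```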







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CosmosVideoToWorldPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos_video2world.py#L277</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype

Encodes the prompt into text encoder hidden states.




</div></div>

## Cosmos2TextToImagePipeline[[diffusers.Cosmos2TextToImagePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Cosmos2TextToImagePipeline</name><anchor>diffusers.Cosmos2TextToImagePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos2_text2image.py#L135</source><parameters>[{"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": CosmosTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "safety_checker", "val": ": CosmosSafetyChecker = None"}]</parameters><paramsdesc>- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. Cosmos uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-11b](https://huggingface.co/google-t5/t5-11b) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CosmosTransformer3DModel](/docs/diffusers/main/en/api/models/cosmos_transformer3d#diffusers.CosmosTransformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using [Cosmos Predict2](https://github.com/nvidia-cosmos/cosmos-predict2).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.Cosmos2TextToImagePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos2_text2image.py#L409</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 768"}, {"name": "width", "val": ": int = 1360"}, {"name": "num_inference_steps", "val": ": int = 35"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- **height** (`int`, defaults to `768`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `1360`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, defaults to `35`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `7.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CosmosImagePipelineOutput` instead of a plain tuple.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~CosmosImagePipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `CosmosImagePipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.Cosmos2TextToImagePipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import Cosmos2TextToImagePipeline

>>> # Available checkpoints: nvidia/Cosmos-Predict2-2B-Text2Image, nvidia/Cosmos-Predict2-14B-Text2Image
>>> model_id = "nvidia/Cosmos-Predict2-2B-Text2Image"
>>> pipe = Cosmos2TextToImagePipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A close-up shot captures a vibrant yellow scrubber vigorously working on a grimy plate, its bristles moving in circular motions to lift stubborn grease and food residue. The dish, once covered in remnants of a hearty meal, gradually reveals its original glossy surface. Suds form and bubble around the scrubber, creating a satisfying visual of cleanliness in progress. The sound of scrubbing fills the air, accompanied by the gentle clinking of the dish against the sink. As the scrubber continues its task, the dish transforms, gleaming under the bright kitchen lights, symbolizing the triumph of cleanliness over mess."
>>> negative_prompt = "The video captures a series of frames showing ugly scenes, static with no motion, motion blur, over-saturation, shaky footage, low resolution, grainy texture, pixelated images, poorly lit areas, underexposed and overexposed scenes, poor color balance, washed out colors, choppy sequences, jerky movements, low frame rate, artifacting, color banding, unnatural transitions, outdated special effects, fake elements, unconvincing visuals, poorly edited content, jump cuts, visual noise, and flickering. Overall, the video is of poor quality."

>>> output = pipe(
...     prompt=prompt, negative_prompt=negative_prompt, generator=torch.Generator().manual_seed(1)
... ).images[0]
>>> output.save("output.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.Cosmos2TextToImagePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos2_text2image.py#L246</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of images that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
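
As a rough sketch (not taken from the pipeline's own examples), the embeddings produced here can be passed back to the pipeline through `prompt_embeds` and `negative_prompt_embeds` so the text encoder only runs once. This assumes `encode_prompt` returns a `(prompt_embeds, negative_prompt_embeds)` tuple and that `pipe` is a loaded `Cosmos2TextToImagePipeline`.

```py
>>> import torch

>>> # Hypothetical usage: encode once, reuse the embeddings for several seeds.
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt="A watercolor painting of a lighthouse at dusk",
...     negative_prompt="low quality, blurry",
... )
>>> for seed in (0, 1, 2):
...     image = pipe(
...         prompt_embeds=prompt_embeds,
...         negative_prompt_embeds=negative_prompt_embeds,
...         generator=torch.Generator().manual_seed(seed),
...     ).images[0]
...     image.save(f"lighthouse_{seed}.png")
```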




</div></div>

## Cosmos2VideoToWorldPipeline[[diffusers.Cosmos2VideoToWorldPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Cosmos2VideoToWorldPipeline</name><anchor>diffusers.Cosmos2VideoToWorldPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos2_video2world.py#L154</source><parameters>[{"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": CosmosTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "safety_checker", "val": ": CosmosSafetyChecker = None"}]</parameters><paramsdesc>- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. Cosmos uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-11b](https://huggingface.co/google-t5/t5-11b) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CosmosTransformer3DModel](/docs/diffusers/main/en/api/models/cosmos_transformer3d#diffusers.CosmosTransformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for video-to-world generation using [Cosmos Predict2](https://github.com/nvidia-cosmos/cosmos-predict2).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.Cosmos2VideoToWorldPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos2_video2world.py#L477</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "video", "val": ": typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 704"}, {"name": "width", "val": ": int = 1280"}, {"name": "num_frames", "val": ": int = 93"}, {"name": "num_inference_steps", "val": ": int = 35"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "fps", "val": ": int = 16"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "sigma_conditioning", "val": ": float = 0.0001"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, *optional*) --
  The image to be used as a conditioning input for the video generation.
- **video** (`List[PIL.Image.Image]`, `np.ndarray`, `torch.Tensor`, *optional*) --
  The video to be used as a conditioning input for the video generation.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds` instead.
- **height** (`int`, defaults to `704`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `1280`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `93`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `35`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `7.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`.
- **fps** (`int`, defaults to `16`) --
  The frames per second of the generated video.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CosmosPipelineOutput` instead of a plain tuple.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `512`) --
  The maximum number of tokens in the prompt. If the prompt exceeds this length, it will be truncated. If
  the prompt is shorter than this length, it will be padded.
- **sigma_conditioning** (`float`, defaults to `0.0001`) --
  The sigma value used for scaling conditioning latents. Ideally, it should not be changed or should be
  set to a small value close to zero.</paramsdesc><paramgroups>0</paramgroups><rettype>`~CosmosPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `CosmosPipelineOutput` is returned, otherwise a `tuple` is returned where
the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.Cosmos2VideoToWorldPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import Cosmos2VideoToWorldPipeline
>>> from diffusers.utils import export_to_video, load_image

>>> # Available checkpoints: nvidia/Cosmos-Predict2-2B-Video2World, nvidia/Cosmos-Predict2-14B-Video2World
>>> model_id = "nvidia/Cosmos-Predict2-2B-Video2World"
>>> pipe = Cosmos2VideoToWorldPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A close-up shot captures a vibrant yellow scrubber vigorously working on a grimy plate, its bristles moving in circular motions to lift stubborn grease and food residue. The dish, once covered in remnants of a hearty meal, gradually reveals its original glossy surface. Suds form and bubble around the scrubber, creating a satisfying visual of cleanliness in progress. The sound of scrubbing fills the air, accompanied by the gentle clinking of the dish against the sink. As the scrubber continues its task, the dish transforms, gleaming under the bright kitchen lights, symbolizing the triumph of cleanliness over mess."
>>> negative_prompt = "The video captures a series of frames showing ugly scenes, static with no motion, motion blur, over-saturation, shaky footage, low resolution, grainy texture, pixelated images, poorly lit areas, underexposed and overexposed scenes, poor color balance, washed out colors, choppy sequences, jerky movements, low frame rate, artifacting, color banding, unnatural transitions, outdated special effects, fake elements, unconvincing visuals, poorly edited content, jump cuts, visual noise, and flickering. Overall, the video is of poor quality."
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yellow-scrubber.png"
... )

>>> video = pipe(
...     image=image, prompt=prompt, negative_prompt=negative_prompt, generator=torch.Generator().manual_seed(1)
... ).frames[0]
>>> export_to_video(video, "output.mp4", fps=16)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.Cosmos2VideoToWorldPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_cosmos2_video2world.py#L265</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
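
A small illustrative sketch (not from the original docs) of reusing one set of text embeddings across several conditioning images. It assumes `encode_prompt` returns `(prompt_embeds, negative_prompt_embeds)` and that `pipe`, `prompt`, and `negative_prompt` are defined as in the example above; the URL list is a placeholder.

```py
>>> import torch
>>> from diffusers.utils import export_to_video, load_image

>>> # Encode the prompt once; the embeddings can be reused for every conditioning image.
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt=prompt, negative_prompt=negative_prompt
... )
>>> urls = [
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yellow-scrubber.png",
... ]  # placeholder list of conditioning images
>>> for i, url in enumerate(urls):
...     video = pipe(
...         image=load_image(url),
...         prompt_embeds=prompt_embeds,
...         negative_prompt_embeds=negative_prompt_embeds,
...         generator=torch.Generator().manual_seed(i),
...     ).frames[0]
...     export_to_video(video, f"output_{i}.mp4", fps=16)
```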




</div></div>

## CosmosPipelineOutput[[diffusers.pipelines.cosmos.pipeline_output.CosmosPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.cosmos.pipeline_output.CosmosPipelineOutput</name><anchor>diffusers.pipelines.cosmos.pipeline_output.CosmosPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_output.py#L15</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Cosmos any-to-world/video pipelines.




</div>

## CosmosImagePipelineOutput[[diffusers.pipelines.cosmos.pipeline_output.CosmosImagePipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.cosmos.pipeline_output.CosmosImagePipelineOutput</name><anchor>diffusers.pipelines.cosmos.pipeline_output.CosmosImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cosmos/pipeline_output.py#L30</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or a numpy array of shape `(batch_size, height, width,
  num_channels)`, representing the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Cosmos any-to-image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/cosmos.md" />

### ControlNet
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_sana.md

# ControlNet

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This pipeline was contributed by [ishan24](https://huggingface.co/ishan24). ❤️
The original codebase can be found at [NVlabs/Sana](https://github.com/NVlabs/Sana), and you can find official ControlNet checkpoints on [Efficient-Large-Model's](https://huggingface.co/Efficient-Large-Model) Hub profile.

## SanaControlNetPipeline[[diffusers.SanaControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaControlNetPipeline</name><anchor>diffusers.SanaControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L197</source><parameters>[{"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}, {"name": "text_encoder", "val": ": Gemma2PreTrainedModel"}, {"name": "vae", "val": ": AutoencoderDC"}, {"name": "transformer", "val": ": SanaTransformer2DModel"}, {"name": "controlnet", "val": ": SanaControlNetModel"}, {"name": "scheduler", "val": ": DPMSolverMultistepScheduler"}]</parameters></docstring>

Pipeline for text-to-image generation using [Sana](https://huggingface.co/papers/2410.10629).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SanaControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L776</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.List[str] = [\"Given a user prompt, generate an 'Enhanced prompt' that provides detailed visual descriptions suitable for image generation. Evaluate the level of detail in the user prompt:\", '- If the prompt is simple, focus on adding specifics about colors, shapes, sizes, textures, and spatial relationships to create vivid and concrete scenes.', '- If the prompt is already detailed, refine and enhance the existing details slightly without overcomplicating.', 'Here are examples of how to transform or refine prompts:', '- User Prompt: A cat sleeping -> Enhanced: A small, fluffy white cat curled up in a round shape, sleeping peacefully on a warm sunny windowsill, surrounded by pots of blooming red flowers.', '- User Prompt: A busy city street -> Enhanced: A bustling city street scene at dusk, featuring glowing street lamps, a diverse crowd of people in colorful clothing, and a double-decker bus passing by towering glass skyscrapers.', 'Please generate only the enhanced description for the prompt below and avoid including any additional commentary or evaluations:', 'User Prompt: ']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `transformer` for generation. If the type is
  specified as `torch.Tensor`, it is passed to the ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `control_image`'s dimensions. If height and/or
  width are passed, `control_image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `transformer`. If multiple ControlNets are specified in `init`, you can
  set the corresponding scale as a list.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to `1024`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `1024`) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `SanaPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clean_caption** (`bool`, *optional*, defaults to `False`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool`, defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `300`) --
  Maximum sequence length to use with the `prompt`.
- **complex_human_instruction** (`List[str]`, *optional*) --
  Instructions for complex human attention:
  https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55.</paramsdesc><paramgroups>0</paramgroups><rettype>[SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SanaControlNetPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import SanaControlNetPipeline
>>> from diffusers.utils import load_image

>>> pipe = SanaControlNetPipeline.from_pretrained(
...     "ishan24/Sana_600M_1024px_ControlNetPlus_diffusers",
...     variant="fp16",
...     torch_dtype={"default": torch.bfloat16, "controlnet": torch.float16, "transformer": torch.float16},
...     device_map="balanced",
... )
>>> cond_image = load_image(
...     "https://huggingface.co/ishan24/Sana_600M_1024px_ControlNet_diffusers/resolve/main/hed_example.png"
... )
>>> prompt = 'a cat with a neon sign that says "Sana"'
>>> image = pipe(
...     prompt,
...     control_image=cond_image,
... ).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.SanaControlNetPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L249</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.SanaControlNetPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L276</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.SanaControlNetPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L236</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.SanaControlNetPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L262</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
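
Slicing and tiling are simple toggles on the loaded pipeline. A brief usage sketch, assuming `pipe`, `prompt`, and `cond_image` are defined as in the `__call__` example above:

```py
>>> # Reduce VAE decode memory: decode the batch in slices and large latents in tiles.
>>> pipe.enable_vae_slicing()
>>> pipe.enable_vae_tiling()
>>> image = pipe(prompt, control_image=cond_image).images[0]

>>> # Switch back to single-pass decoding when memory is not a constraint.
>>> pipe.disable_vae_slicing()
>>> pipe.disable_vae_tiling()
```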


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SanaControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana_controlnet.py#L349</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  Sana, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of images that should be generated per prompt.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Sana, it should be the embeddings of the "" string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.
- **complex_human_instruction** (`list[str]`, *optional*) --
  If `complex_human_instruction` is not empty, the function will use the complex human instruction to
  enhance the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
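
A hypothetical sketch (not from the original docs) of precomputing the Gemma2 embeddings and attention masks and feeding them back to `__call__`. It assumes `encode_prompt` returns `(prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask)`, mirroring the arguments the pipeline call accepts, and that `pipe` and `cond_image` are defined as in the example above.

```py
>>> # Encode once and reuse the embeddings plus attention masks.
>>> (
...     prompt_embeds,
...     prompt_attention_mask,
...     negative_prompt_embeds,
...     negative_prompt_attention_mask,
... ) = pipe.encode_prompt('a cat with a neon sign that says "Sana"', negative_prompt="")
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     prompt_attention_mask=prompt_attention_mask,
...     negative_prompt_embeds=negative_prompt_embeds,
...     negative_prompt_attention_mask=negative_prompt_attention_mask,
...     control_image=cond_image,
... ).images[0]
```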




</div></div>

## SanaPipelineOutput[[diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput</name><anchor>diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or a numpy array of shape `(batch_size, height, width,
  num_channels)`, representing the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Sana pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_sana.md" />

### Shap-E
https://huggingface.co/docs/diffusers/main/api/pipelines/shap_e.md

# Shap-E

The Shap-E model was proposed in [Shap-E: Generating Conditional 3D Implicit Functions](https://huggingface.co/papers/2305.02463) by Alex Nichol and Heewoo Jun from [OpenAI](https://github.com/openai).

The abstract from the paper is:

*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space.*

The original codebase can be found at [openai/shap-e](https://github.com/openai/shap-e).

> [!TIP]
> See the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## ShapEPipeline[[diffusers.ShapEPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ShapEPipeline</name><anchor>diffusers.ShapEPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/shap_e/pipeline_shap_e.py#L88</source><parameters>[{"name": "prior", "val": ": PriorTransformer"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "scheduler", "val": ": HeunDiscreteScheduler"}, {"name": "shap_e_renderer", "val": ": ShapERenderer"}]</parameters><paramsdesc>- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **text_encoder** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Frozen text-encoder.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **scheduler** ([HeunDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/heun#diffusers.HeunDiscreteScheduler)) --
  A scheduler to be used in combination with the `prior` model to generate image embedding.
- **shap_e_renderer** (`ShapERenderer`) --
  Shap-E renderer projects the generated latents into parameters of an MLP to create 3D objects with the NeRF
  rendering method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating a latent representation of a 3D asset and rendering it with the NeRF method.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.ShapEPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/shap_e/pipeline_shap_e.py#L191</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "frame_size", "val": ": int = 64"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **frame_size** (`int`, *optional*, defaults to 64) --
  The width and height of each image frame of the generated 3D output.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`), `"latent"` (`torch.Tensor`), or mesh (`MeshDecoderOutput`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ShapEPipelineOutput](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput) instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ShapEPipelineOutput](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ShapEPipelineOutput](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.ShapEPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from diffusers.utils import export_to_gif

>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

>>> repo = "openai/shap-e"
>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)

>>> guidance_scale = 15.0
>>> prompt = "a shark"

>>> images = pipe(
...     prompt,
...     guidance_scale=guidance_scale,
...     num_inference_steps=64,
...     frame_size=256,
... ).images

>>> gif_path = export_to_gif(images[0], "shark_3d.gif")
```

</ExampleCodeBlock>
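
The `output_type="mesh"` option listed above returns mesh outputs that can be saved with `export_to_ply` from `diffusers.utils`. The snippet below is a small illustrative variation of the example above (same `pipe`), not an additional official example.

```py
>>> from diffusers.utils import export_to_ply

>>> # Request a mesh instead of rendered frames and save it as a .ply file.
>>> mesh = pipe(
...     "a shark",
...     guidance_scale=15.0,
...     num_inference_steps=64,
...     output_type="mesh",
... ).images[0]
>>> ply_path = export_to_ply(mesh, "shark_3d.ply")
```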







</div></div>

## ShapEImg2ImgPipeline[[diffusers.ShapEImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ShapEImg2ImgPipeline</name><anchor>diffusers.ShapEImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py#L89</source><parameters>[{"name": "prior", "val": ": PriorTransformer"}, {"name": "image_encoder", "val": ": CLIPVisionModel"}, {"name": "image_processor", "val": ": CLIPImageProcessor"}, {"name": "scheduler", "val": ": HeunDiscreteScheduler"}, {"name": "shap_e_renderer", "val": ": ShapERenderer"}]</parameters><paramsdesc>- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **image_encoder** ([CLIPVisionModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModel)) --
  Frozen image-encoder.
- **image_processor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to process images.
- **scheduler** ([HeunDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/heun#diffusers.HeunDiscreteScheduler)) --
  A scheduler to be used in combination with the `prior` model to generate image embedding.
- **shap_e_renderer** (`ShapERenderer`) --
  Shap-E renderer projects the generated latents into parameters of an MLP to create 3D objects with the NeRF
  rendering method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating a latent representation of a 3D asset from an image and rendering it with the NeRF method.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.ShapEImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py#L173</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]]"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "frame_size", "val": ": int = 64"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image` or tensor representing an image batch to be used as the starting point. It can also accept
  image latents as `image`, but if latents are passed directly they are not encoded again.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **frame_size** (`int`, *optional*, defaults to 64) --
  The width and height of each image frame of the generated 3D output.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`), `"latent"` (`torch.Tensor`), or `"mesh"` (`MeshDecoderOutput`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ShapEPipelineOutput](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput) instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ShapEPipelineOutput](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ShapEPipelineOutput](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.ShapEImg2ImgPipeline.__call__.example">

Examples:
```py
>>> from PIL import Image
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from diffusers.utils import export_to_gif, load_image

>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

>>> repo = "openai/shap-e-img2img"
>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)

>>> guidance_scale = 3.0
>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png"
>>> image = load_image(image_url).convert("RGB")

>>> images = pipe(
...     image,
...     guidance_scale=guidance_scale,
...     num_inference_steps=64,
...     frame_size=256,
... ).images

>>> gif_path = export_to_gif(images[0], "corgi_3d.gif")
```

</ExampleCodeBlock>
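
The `output_type` argument above also accepts `"mesh"`. The sketch below is not part of the pipeline's docstring; it assumes `export_to_ply` from `diffusers.utils` (used in the Shap-E text-to-3D examples) to save the returned mesh:

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_ply, load_image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

pipe = DiffusionPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16)
pipe = pipe.to(device)

image = load_image(
    "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png"
).convert("RGB")

# Request a mesh instead of rendered frames and save it as a .ply file.
mesh = pipe(image, guidance_scale=3.0, num_inference_steps=64, output_type="mesh").images
ply_path = export_to_ply(mesh[0], "corgi_3d.ply")
```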







</div></div>

## ShapEPipelineOutput[[diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput</name><anchor>diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/shap_e/pipeline_shap_e.py#L76</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[typing.List[PIL.Image.Image]], typing.List[typing.List[numpy.ndarray]]]"}]</parameters><paramsdesc>- **images** (`torch.Tensor`) --
  A list of images for 3D rendering.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for [ShapEPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEPipeline) and [ShapEImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEImg2ImgPipeline).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/shap_e.md" />

### CogView4
https://huggingface.co/docs/diffusers/main/api/pipelines/cogview4.md


# CogView4

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

This pipeline was contributed by [zRzRzRzRzRzRzR](https://github.com/zRzRzRzRzRzRzR). The original codebase can be found [here](https://huggingface.co/THUDM). The original weights can be found under [hf.co/THUDM](https://huggingface.co/THUDM).

## CogView4Pipeline[[diffusers.CogView4Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogView4Pipeline</name><anchor>diffusers.CogView4Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview4/pipeline_cogview4.py#L137</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": GlmModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": CogView4Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`GLMModel`) --
  Frozen text-encoder. CogView4 uses [glm-4-9b-hf](https://huggingface.co/THUDM/glm-4-9b-hf).
- **tokenizer** (`PreTrainedTokenizer`) --
  Tokenizer of class
  [PreTrainedTokenizer](https://huggingface.co/docs/transformers/main/en/main_classes/tokenizer#transformers.PreTrainedTokenizer).
- **transformer** ([CogView4Transformer2DModel](/docs/diffusers/main/en/api/models/cogview4_transformer2d#diffusers.CogView4Transformer2DModel)) --
  A text conditioned `CogView4Transformer2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using CogView4.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CogView4Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview4/pipeline_cogview4.py#L402</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. If not provided, it is set to 1024.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. If not provided, it is set to 1024.
- **num_inference_steps** (`int`, *optional*, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to `1`) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CogView4PipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is invoked at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `1024`) --
  Maximum sequence length in encoded prompt. Can be set to other values but may lead to poorer results.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.cogview4.pipeline_CogView4.CogView4PipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.cogview4.pipeline_CogView4.CogView4PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CogView4Pipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import CogView4Pipeline

>>> pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>
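
The `callback_on_step_end` argument documented above can be used to inspect intermediate tensors during denoising. The sketch below assumes the usual Diffusers convention that the callback returns the (optionally modified) `callback_kwargs` dictionary:

```python
import torch
from diffusers import CogView4Pipeline

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.to("cuda")


def log_step(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(f"step {step:02d} | timestep {timestep} | latent std {latents.std().item():.4f}")
    return callback_kwargs


image = pipe(
    "A photo of an astronaut riding a horse on mars",
    callback_on_step_end=log_step,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
image.save("output.png")
```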







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CogView4Pipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview4/pipeline_cogview4.py#L221</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}, {"name": "max_sequence_length", "val": ": int = 1024"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of images that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.
- **max_sequence_length** (`int`, defaults to `1024`) --
  Maximum sequence length in encoded prompt. Can be set to other values but may lead to poorer results.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
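
A short sketch of pairing this method with the `prompt_embeds`/`negative_prompt_embeds` arguments of `__call__`, assuming (as the parameter list suggests) that it returns a `(prompt_embeds, negative_prompt_embeds)` tuple:

```python
import torch
from diffusers import CogView4Pipeline

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Encode the prompt once and reuse the embeddings for several seeds.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A photo of an astronaut riding a horse on mars",
    negative_prompt="low quality, blurry",
)

for seed in (0, 1):
    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"astronaut_{seed}.png")
```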




</div></div>

## CogView4PipelineOutput[[diffusers.pipelines.cogview4.pipeline_output.CogView4PipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.cogview4.pipeline_output.CogView4PipelineOutput</name><anchor>diffusers.pipelines.cogview4.pipeline_output.CogView4PipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview4/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for CogView4 pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/cogview4.md" />

### CogVideoX
https://huggingface.co/docs/diffusers/main/api/pipelines/cogvideox.md


<div style="float: right;">
  <div class="flex flex-wrap space-x-1">
    <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference" target="_blank" rel="noopener">
      <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
    </a>
  </div>
</div>

# CogVideoX

[CogVideoX](https://huggingface.co/papers/2408.06072) is a large diffusion transformer model - available in 2B and 5B parameters - designed to generate longer and more consistent videos from text. This model uses a 3D causal variational autoencoder to more efficiently process video data by reducing sequence length (and associated training compute) and preventing flickering in generated videos. An "expert" transformer with adaptive LayerNorm improves alignment between text and video, and 3D full attention helps accurately capture motion and time in generated videos.

You can find all the original CogVideoX checkpoints under the [CogVideoX](https://huggingface.co/collections/THUDM/cogvideo-66c08e62f1685a3ade464cce) collection.

> [!TIP]
> Click on the CogVideoX models in the right sidebar for more examples of other video generation tasks.

The example below demonstrates how to generate a video optimized for memory or inference speed.

<hfoptions id="usage">
<hfoption id="memory">

Refer to the [Reduce memory usage](../../optimization/memory) guide for more details about the various memory saving techniques.

The quantized CogVideoX 5B model below requires ~16GB of VRAM.

```py
import torch
from diffusers import CogVideoXPipeline, AutoModel
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

# quantize weights to int8 with torchao
pipeline_quant_config = PipelineQuantizationConfig(
  quant_backend="torchao",
  quant_kwargs={"quant_type": "int8wo"},
  components_to_quantize="transformer"
)

# fp8 layerwise weight-casting
transformer = AutoModel.from_pretrained(
    "THUDM/CogVideoX-5b",
    subfolder="transformer",
    torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    transformer=transformer,
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

# model-offloading
pipeline.enable_model_cpu_offload()

prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea. 
The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse. 
Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood, 
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""

video = pipeline(
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

</hfoption>
<hfoption id="inference speed">

[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster.

The average inference time with torch.compile on an 80GB A100 is 76.27 seconds compared to 96.89 seconds for an uncompiled model.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
).to("cuda")

# torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)

prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea. 
The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse. 
Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood, 
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""

video = pipeline(
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

</hfoption>
</hfoptions>

## Notes

- CogVideoX supports LoRAs with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.CogVideoXLoraLoaderMixin.load_lora_weights).

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers import CogVideoXPipeline
  from diffusers.hooks import apply_group_offloading
  from diffusers.utils import export_to_video

  pipeline = CogVideoXPipeline.from_pretrained(
      "THUDM/CogVideoX-5b",
      torch_dtype=torch.bfloat16
  )
  pipeline.to("cuda")

  # load LoRA weights
  pipeline.load_lora_weights("finetrainers/CogVideoX-1.5-crush-smol-v0", adapter_name="crush-lora")
  pipeline.set_adapters("crush-lora", 0.9)

  # model-offloading
  pipeline.enable_model_cpu_offload()

  prompt = """
  PIKA_CRUSH A large metal cylinder is seen pressing down on a pile of Oreo cookies, flattening them as if they were under a hydraulic press.
  """
  negative_prompt = "inconsistent motion, blurry motion, worse quality, degenerate outputs, deformed outputs"

  video = pipeline(
      prompt=prompt, 
      negative_prompt=negative_prompt, 
      num_frames=81, 
      height=480,
      width=768,
      num_inference_steps=50
  ).frames[0]
  export_to_video(video, "output.mp4", fps=16)
  ```

  </details>

- The text-to-video (T2V) checkpoints work best with a resolution of 1360x768 because that was the resolution they were pretrained on.

- The image-to-video (I2V) checkpoints work with multiple resolutions. The width can vary from 768 to 1360, but the height must be 768. Both height and width must be divisible by 16.

- Both T2V and I2V checkpoints work best with 81 and 161 frames. It is recommended to export the generated video at 16fps.

- Refer to the table below to view memory usage when various memory-saving techniques are enabled; a short sketch after the table shows how to combine them.

  | method | memory usage (enabled) | memory usage (disabled) |
  |---|---|---|
  | enable_model_cpu_offload | 19GB | 33GB |
  | enable_sequential_cpu_offload | <4GB | ~33GB (very slow inference speed) |
  | enable_tiling | 11GB (with enable_model_cpu_offload) | --- |
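
  A brief sketch of combining the two techniques from the last row of the table; the exact savings depend on hardware, resolution, and frame count.

  ```py
  import torch
  from diffusers import CogVideoXPipeline
  from diffusers.utils import export_to_video

  pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

  # offload idle submodules to the CPU and decode the latents in tiles
  pipeline.enable_model_cpu_offload()
  pipeline.vae.enable_tiling()

  video = pipeline(
      prompt="A detailed wooden toy ship gliding over a plush, blue carpet that mimics sea waves",
      guidance_scale=6,
      num_inference_steps=50
  ).frames[0]
  export_to_video(video, "output.mp4", fps=8)
  ```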
 
## CogVideoXPipeline[[diffusers.CogVideoXPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXPipeline</name><anchor>diffusers.CogVideoXPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox.py#L147</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLCogVideoX"}, {"name": "transformer", "val": ": CogVideoXTransformer3DModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim_cogvideox.CogVideoXDDIMScheduler, diffusers.schedulers.scheduling_dpm_cogvideox.CogVideoXDPMScheduler]"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. CogVideoX uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CogVideoXTransformer3DModel](/docs/diffusers/main/en/api/models/cogvideox_transformer3d#diffusers.CogVideoXTransformer3DModel)) --
  A text conditioned `CogVideoXTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using CogVideoX.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CogVideoXPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox.py#L505</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance_scale", "val": ": float = 6"}, {"name": "use_dynamic_cfg", "val": ": bool = False"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 226"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_height * self.vae_scale_factor_spatial) --
  The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_width * self.vae_scale_factor_spatial) --
  The width in pixels of the generated image. This is set to 720 by default for the best results.
- **num_frames** (`int`, defaults to `48`) --
  Number of frames to generate. Must be divisible by self.vae_scale_factor_temporal. Generated video will
  contain 1 extra frame because CogVideoX is conditioned with (num_seconds * fps + 1) frames where
  num_seconds is 6 and fps is 8. However, since videos can be saved at any fps, the only condition that
  needs to be satisfied is that of divisibility mentioned above.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to `6`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video frames. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CogVideoXPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is invoked at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `226`) --
  Maximum sequence length in encoded prompt. Must be consistent with
  `self.transformer.config.max_text_seq_length` otherwise may lead to poor results.</paramsdesc><paramgroups>0</paramgroups><rettype>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) or `tuple`</rettype><retdesc>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CogVideoXPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import CogVideoXPipeline
>>> from diffusers.utils import export_to_video

>>> # Models: "THUDM/CogVideoX-2b" or "THUDM/CogVideoX-5b"
>>> pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16).to("cuda")
>>> prompt = (
...     "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
...     "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
...     "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
...     "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
...     "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
...     "atmosphere of this unique musical performance."
... )
>>> video = pipe(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
>>> export_to_video(video, "output.mp4", fps=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CogVideoXPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox.py#L244</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.CogVideoXPipeline.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox.py#L428</source><parameters>[]</parameters></docstring>
Enables fused QKV projections.
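
A minimal sketch of toggling fusion around a generation call (using `unfuse_qkv_projections`, documented below, to restore the original layers):

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16).to("cuda")

# fuse the attention QKV projections, run inference, then restore the unfused layers
pipe.fuse_qkv_projections()
video = pipe("A panda playing an acoustic guitar in a bamboo forest", num_inference_steps=50).frames[0]
pipe.unfuse_qkv_projections()

export_to_video(video, "output.mp4", fps=8)
```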

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.CogVideoXPipeline.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox.py#L433</source><parameters>[]</parameters></docstring>
Disables QKV projection fusion if enabled.

</div></div>

## CogVideoXImageToVideoPipeline[[diffusers.CogVideoXImageToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXImageToVideoPipeline</name><anchor>diffusers.CogVideoXImageToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_image2video.py#L160</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLCogVideoX"}, {"name": "transformer", "val": ": CogVideoXTransformer3DModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim_cogvideox.CogVideoXDDIMScheduler, diffusers.schedulers.scheduling_dpm_cogvideox.CogVideoXDPMScheduler]"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. CogVideoX uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CogVideoXTransformer3DModel](/docs/diffusers/main/en/api/models/cogvideox_transformer3d#diffusers.CogVideoXTransformer3DModel)) --
  A text conditioned `CogVideoXTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-video generation using CogVideoX.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CogVideoXImageToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_image2video.py#L598</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_frames", "val": ": int = 49"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance_scale", "val": ": float = 6"}, {"name": "use_dynamic_cfg", "val": ": bool = False"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 226"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to condition the generation on. Must be an image, a list of images or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_height * self.vae_scale_factor_spatial) --
  The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_width * self.vae_scale_factor_spatial) --
  The width in pixels of the generated image. This is set to 720 by default for the best results.
- **num_frames** (`int`, defaults to `49`) --
  Number of frames to generate. Must be divisible by self.vae_scale_factor_temporal. Generated video will
  contain 1 extra frame because CogVideoX is conditioned with (num_seconds * fps + 1) frames where
  num_seconds is 6 and fps is 8. However, since videos can be saved at any fps, the only condition that
  needs to be satisfied is that of divisibility mentioned above.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to `6`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video frames. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `CogVideoXPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is invoked at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `226`) --
  Maximum sequence length in encoded prompt. Must be consistent with
  `self.transformer.config.max_text_seq_length` otherwise may lead to poor results.</paramsdesc><paramgroups>0</paramgroups><rettype>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) or `tuple`</rettype><retdesc>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CogVideoXImageToVideoPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import CogVideoXImageToVideoPipeline
>>> from diffusers.utils import export_to_video, load_image

>>> pipe = CogVideoXImageToVideoPipeline.from_pretrained("THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"
... )
>>> video = pipe(image, prompt, use_dynamic_cfg=True)
>>> export_to_video(video.frames[0], "output.mp4", fps=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CogVideoXImageToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_image2video.py#L263</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.CogVideoXImageToVideoPipeline.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_image2video.py#L519</source><parameters>[]</parameters></docstring>
Enables fused QKV projections.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.CogVideoXImageToVideoPipeline.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_image2video.py#L525</source><parameters>[]</parameters></docstring>
Disables QKV projection fusion if enabled.

</div></div>

## CogVideoXVideoToVideoPipeline[[diffusers.CogVideoXVideoToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXVideoToVideoPipeline</name><anchor>diffusers.CogVideoXVideoToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_video2video.py#L169</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLCogVideoX"}, {"name": "transformer", "val": ": CogVideoXTransformer3DModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim_cogvideox.CogVideoXDDIMScheduler, diffusers.schedulers.scheduling_dpm_cogvideox.CogVideoXDPMScheduler]"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. CogVideoX uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CogVideoXTransformer3DModel](/docs/diffusers/main/en/api/models/cogvideox_transformer3d#diffusers.CogVideoXTransformer3DModel)) --
  A text conditioned `CogVideoXTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for video-to-video generation using CogVideoX.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CogVideoXVideoToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_video2video.py#L575</source><parameters>[{"name": "video", "val": ": typing.List[PIL.Image.Image] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "guidance_scale", "val": ": float = 6"}, {"name": "use_dynamic_cfg", "val": ": bool = False"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 226"}]</parameters><paramsdesc>- **video** (`List[PIL.Image.Image]`) --
  The input video to condition the generation on. Must be a list of images/frames of the video.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_height * self.vae_scale_factor_spatial) --
  The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_width * self.vae_scale_factor_spatial) --
  The width in pixels of the generated image. This is set to 720 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Higher strength leads to more differences between original video and generated video.
- **guidance_scale** (`float`, *optional*, defaults to 6.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate outputs that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) instead
  of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `226`) --
  Maximum sequence length in encoded prompt. Must be consistent with
  `self.transformer.config.max_text_seq_length` otherwise may lead to poor results.</paramsdesc><paramgroups>0</paramgroups><rettype>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) or `tuple`</rettype><retdesc>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CogVideoXVideoToVideoPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import CogVideoXDPMScheduler, CogVideoXVideoToVideoPipeline
>>> from diffusers.utils import export_to_video, load_video

>>> # Models: "THUDM/CogVideoX-2b" or "THUDM/CogVideoX-5b"
>>> pipe = CogVideoXVideoToVideoPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> pipe.scheduler = CogVideoXDPMScheduler.from_config(pipe.scheduler.config)

>>> input_video = load_video(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hiker.mp4"
... )
>>> prompt = (
...     "An astronaut stands triumphantly at the peak of a towering mountain. Panorama of rugged peaks and "
...     "valleys. Very futuristic vibe and animated aesthetic. Highlights of purple and golden colors in "
...     "the scene. The sky looks like an animated/cartoonish dream of galaxies, nebulae, stars, planets, "
...     "moons, but the remainder of the scene is mostly realistic."
... )

>>> video = pipe(
...     video=input_video, prompt=prompt, strength=0.8, guidance_scale=6, num_inference_steps=50
... ).frames[0]
>>> export_to_video(video, "output.mp4", fps=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CogVideoXVideoToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_video2video.py#L269</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
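As a rough sketch (not a reference example), the method can be used to pre-compute embeddings once and reuse them across calls via the `prompt_embeds` / `negative_prompt_embeds` arguments of `__call__`. It assumes `pipe` is an already loaded `CogVideoXVideoToVideoPipeline`, `input_video` is a list of PIL frames, and that the method returns the positive and negative embeddings as a pair.

```py
# Sketch: encode the prompt once and reuse the embeddings across pipeline calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="An astronaut hiking through futuristic mountains",
    negative_prompt="low quality, blurry",
    do_classifier_free_guidance=True,
    num_videos_per_prompt=1,
)

video = pipe(
    video=input_video,  # list of PIL frames, e.g. from diffusers.utils.load_video
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    strength=0.8,
).frames[0]
```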




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.CogVideoXVideoToVideoPipeline.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_video2video.py#L496</source><parameters>[]</parameters></docstring>
Enables fused QKV projections.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.CogVideoXVideoToVideoPipeline.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_video2video.py#L502</source><parameters>[]</parameters></docstring>
Disables QKV projection fusion if it was previously enabled.

</div></div>

## CogVideoXFunControlPipeline[[diffusers.CogVideoXFunControlPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXFunControlPipeline</name><anchor>diffusers.CogVideoXFunControlPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_fun_control.py#L154</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLCogVideoX"}, {"name": "transformer", "val": ": CogVideoXTransformer3DModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. CogVideoX uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CogVideoXTransformer3DModel](/docs/diffusers/main/en/api/models/cogvideox_transformer3d#diffusers.CogVideoXTransformer3DModel)) --
  A text conditioned `CogVideoXTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for controlled text-to-video generation using CogVideoX Fun.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CogVideoXFunControlPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_fun_control.py#L551</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "control_video", "val": ": typing.Optional[typing.List[PIL.Image.Image]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance_scale", "val": ": float = 6"}, {"name": "use_dynamic_cfg", "val": ": bool = False"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "control_video_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 226"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **control_video** (`List[PIL.Image.Image]`) --
  The control video to condition the generation on. Must be a list of images/frames of the video. If not
  provided, `control_video_latents` must be provided.
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_height * self.vae_scale_factor_spatial) --
  The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_width * self.vae_scale_factor_spatial) --
  The width in pixels of the generated image. This is set to 720 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 6.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate outputs that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **control_video_latents** (`torch.Tensor`, *optional*) --
  Pre-generated control latents, sampled from a Gaussian distribution, to be used as inputs for
  controlled video generation. If not provided, `control_video` must be provided.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) instead
  of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `226`) --
  Maximum sequence length in encoded prompt. Must be consistent with
  `self.transformer.config.max_text_seq_length` otherwise may lead to poor results.</paramsdesc><paramgroups>0</paramgroups><rettype>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) or `tuple`</rettype><retdesc>[CogVideoXPipelineOutput](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput) if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CogVideoXFunControlPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import CogVideoXFunControlPipeline, DDIMScheduler
>>> from diffusers.utils import export_to_video, load_video

>>> pipe = CogVideoXFunControlPipeline.from_pretrained(
...     "alibaba-pai/CogVideoX-Fun-V1.1-5b-Pose", torch_dtype=torch.bfloat16
... )
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.to("cuda")

>>> control_video = load_video(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hiker.mp4"
... )
>>> prompt = (
...     "An astronaut stands triumphantly at the peak of a towering mountain. Panorama of rugged peaks and "
...     "valleys. Very futuristic vibe and animated aesthetic. Highlights of purple and golden colors in "
...     "the scene. The sky looks like an animated/cartoonish dream of galaxies, nebulae, stars, planets, "
...     "moons, but the remainder of the scene is mostly realistic."
... )

>>> video = pipe(prompt=prompt, control_video=control_video).frames[0]
>>> export_to_video(video, "output.mp4", fps=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CogVideoXFunControlPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_fun_control.py#L253</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.CogVideoXFunControlPipeline.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_fun_control.py#L473</source><parameters>[]</parameters></docstring>
Enables fused QKV projections.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.CogVideoXFunControlPipeline.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_cogvideox_fun_control.py#L478</source><parameters>[]</parameters></docstring>
Disables QKV projection fusion if it was previously enabled.

</div></div>

## CogVideoXPipelineOutput[[diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput</name><anchor>diffusers.pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogvideo/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for CogVideo pipelines.
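For illustration only (a small sketch, assuming a CogVideoX pipeline is already loaded as `pipe` and called with the default `output_type="pil"`, so that `frames` is a nested list of PIL images):

```py
from diffusers.utils import export_to_video

output = pipe(prompt="A panda playing a guitar in a bamboo forest")  # CogVideoXPipelineOutput
frames = output.frames[0]  # frame sequence for the first video in the batch
export_to_video(frames, "output.mp4", fps=8)
```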




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/cogvideox.md" />

### InstructPix2Pix
https://huggingface.co/docs/diffusers/main/api/pipelines/pix2pix.md

# InstructPix2Pix

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/papers/2211.09800) is by Tim Brooks, Aleksander Holynski and Alexei A. Efros.

The abstract from the paper is:

*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*

You can find additional information about InstructPix2Pix on the [project page](https://www.timothybrooks.com/instruct-pix2pix), [original codebase](https://github.com/timothybrooks/instruct-pix2pix), and try it out in a [demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## StableDiffusionInstructPix2PixPipeline[[diffusers.StableDiffusionInstructPix2PixPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionInstructPix2PixPipeline</name><anchor>diffusers.StableDiffusionInstructPix2PixPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py#L83</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionInstructPix2PixPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py#L172</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "image_guidance_scale", "val": ": float = 1.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `np.ndarray`, `PIL.Image.Image`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image` or tensor representing an image batch to be repainted according to `prompt`. Can also accept
  image latents as `image`, but if passing latents directly it is not encoded again.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **image_guidance_scale** (`float`, *optional*, defaults to 1.5) --
  Push the generated image towards the initial `image`. Image guidance scale is enabled by setting
  `image_guidance_scale > 1`. Higher image guidance scale encourages generated images that are closely
  linked to the source `image`, usually at the expense of lower image quality. This pipeline requires a
  value of at least `1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionInstructPix2PixPipeline.__call__.example">

Examples:

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import StableDiffusionInstructPix2PixPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"

>>> image = download_image(img_url).resize((512, 512))

>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
...     "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "make the mountains snowy"
>>> image = pipe(prompt=prompt, image=image).images[0]
```

</ExampleCodeBlock>
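Building on the example above, a brief sketch of how the two guidance knobs interact; the exact values are illustrative, not recommended settings:

```py
# Higher image_guidance_scale keeps the edit closer to the input image,
# higher guidance_scale pushes it harder toward the text instruction.
edited = pipe(
    prompt="make the mountains snowy",
    image=image,
    num_inference_steps=30,
    guidance_scale=7.5,        # strength of the text instruction
    image_guidance_scale=1.5,  # fidelity to the input image (must be >= 1)
).images[0]
```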






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionInstructPix2PixPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function will use `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function will use `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionInstructPix2PixPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
<ExampleCodeBlock anchor="diffusers.StableDiffusionInstructPix2PixPipeline.load_textual_inversion.example-2">

locally:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.StableDiffusionInstructPix2PixPipeline.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.

See [load_lora_into_text_encoder()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.
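A minimal usage sketch; the repository id, weight file name, and adapter name below are placeholders for illustration, not a real LoRA published for this pipeline:

```py
# Sketch: load LoRA weights into the pipeline's UNet and text encoder.
pipe.load_lora_weights(
    "some-user/instruct-pix2pix-lora",               # placeholder Hub repo id
    weight_name="pytorch_lora_weights.safetensors",  # placeholder weight file name
    adapter_name="my_edit_style",
)
```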




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.StableDiffusionInstructPix2PixPipeline.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `unet`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text
  encoder LoRA state dict because it comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training when you
  need to call this function on all processes; in that case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **unet_lora_adapter_metadata** --
  LoRA adapter metadata associated with the unet to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the UNet and text encoder.
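A rough sketch of the call; the two state dicts are placeholders that would normally come from your fine-tuning code rather than from this snippet:

```py
# Sketch: persist LoRA layers produced by a fine-tuning run.
unet_lora_state_dict = {}          # placeholder: LoRA layers of the UNet from training
text_encoder_lora_state_dict = {}  # placeholder: LoRA layers of the text encoder from training

pipe.save_lora_weights(
    save_directory="./instruct-pix2pix-lora",
    unet_lora_layers=unet_lora_state_dict,
    text_encoder_lora_layers=text_encoder_lora_state_dict,
    safe_serialization=True,
)
```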




</div></div>

## StableDiffusionXLInstructPix2PixPipeline[[diffusers.StableDiffusionXLInstructPix2PixPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLInstructPix2PixPipeline</name><anchor>diffusers.StableDiffusionXLInstructPix2PixPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py#L118</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "is_cosxl_edit", "val": ": typing.Optional[bool] = False"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the config
  of `stabilityai/stable-diffusion-xl-refiner-1-0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be forced to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no
  watermarker will be used.
- **is_cosxl_edit** (`bool`, *optional*) --
  When set to `True`, the image latents are scaled.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods (see the usage sketch after this list):
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
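
As a rough usage sketch, these loaders are typically called right after instantiating the pipeline. The LoRA and textual inversion paths below are placeholders for your own assets.

```py
import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline

pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16
).to("cuda")

# Placeholder repository ids / local paths; substitute your own files.
pipe.load_lora_weights("path/to/lora")
pipe.load_textual_inversion("path/to/textual_inversion")
```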





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLInstructPix2PixPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py#L610</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "image_guidance_scale", "val": ": float = 1.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor` or `PIL.Image.Image` or `np.ndarray` or `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[np.ndarray]`) --
  The image(s) to modify with the pipeline.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **image_guidance_scale** (`float`, *optional*, defaults to 1.5) --
  Image guidance scale pushes the generated image towards the initial image `image`. It is enabled by
  setting `image_guidance_scale > 1`. A higher image guidance scale encourages generated images that are
  closely linked to the source image `image`, usually at the expense of lower image quality. This pipeline
  requires a value of at least `1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). `guidance_rescale` is defined as `φ` in equation 16 of
  that paper. The guidance rescale factor should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLInstructPix2PixPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline
>>> from diffusers.utils import load_image

>>> resolution = 768
>>> image = load_image(
...     "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
... ).resize((resolution, resolution))
>>> edit_instruction = "Turn sky into a cloudy one"

>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
...     "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16
... ).to("cuda")

>>> edited_image = pipe(
...     prompt=edit_instruction,
...     image=image,
...     height=resolution,
...     width=resolution,
...     guidance_scale=3.0,
...     image_guidance_scale=1.5,
...     num_inference_steps=30,
... ).images[0]
>>> edited_image
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLInstructPix2PixPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py#L218</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
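
The sketch below shows how the returned embeddings are typically fed back into the pipeline call, assuming `pipe` and `image` were prepared as in the example above and that the four returned tensors (prompt, negative prompt, pooled, negative pooled embeddings) map onto the corresponding `__call__` arguments; the prompt text is arbitrary.

```py
# Pre-compute prompt embeddings once and reuse them across multiple calls.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="Turn sky into a cloudy one",
    device="cuda",
    do_classifier_free_guidance=True,
)

edited_image = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    num_inference_steps=30,
).images[0]
```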




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/pix2pix.md" />

### FluxControlInpaint
https://huggingface.co/docs/diffusers/main/api/pipelines/control_flux_inpaint.md

# FluxControlInpaint

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

FluxControlInpaintPipeline implements inpainting for the Flux.1 Depth/Canny models. The pipeline takes an image and a mask as input and returns the inpainted image.

FLUX.1 Depth and Canny [dev] is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image. **This is not a ControlNet model**.

| Control type | Developer | Link |
| -------- | ---------- | ---- |
| Depth | [Black Forest Labs](https://huggingface.co/black-forest-labs) | [Link](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev) |
| Canny | [Black Forest Labs](https://huggingface.co/black-forest-labs) | [Link](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev) |


> [!TIP]
> Flux can be quite expensive to run on consumer hardware devices. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out [this section](https://huggingface.co/blog/sd3#memory-optimizations-for-sd3) for more details. Additionally, Flux can benefit from quantization for memory efficiency with a trade-off in inference latency. Refer to [this blog post](https://huggingface.co/blog/quanto-diffusers) to learn more. For an exhaustive list of resources, check out [this gist](https://gist.github.com/sayakpaul/b664605caf0aa3bf8585ab109dd5ac9c).

```python
import torch
from diffusers import FluxControlInpaintPipeline
from diffusers.models.transformers import FluxTransformer2DModel
from transformers import T5EncoderModel
from diffusers.utils import load_image, make_image_grid
from image_gen_aux import DepthPreprocessor # https://github.com/huggingface/image_gen_aux
from PIL import Image
import numpy as np

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    torch_dtype=torch.bfloat16,
)
# use the following lines if you have GPU memory constraints
# ---------------------------------------------------------------
transformer = FluxTransformer2DModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
# ---------------------------------------------------------------
pipe.to("cuda")

prompt = "a blue robot singing opera with human-like expressions"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

head_mask = np.zeros_like(image)
head_mask[65:580,300:642] = 255
mask_image = Image.fromarray(head_mask)

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

output = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    num_inference_steps=30,
    strength=0.9,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
make_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save("output.png")
```

## FluxControlInpaintPipeline[[diffusers.FluxControlInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlInpaintPipeline</name><anchor>diffusers.FluxControlInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L205</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux pipeline for image inpainting using Flux-dev-Depth/Canny.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxControlInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L805</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will be
  used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The control input condition (for example, a depth map or Canny edge map) that provides guidance to the
  `transformer` for generation. If the type is specified as `torch.Tensor`, it is passed as is.
  `PIL.Image.Image` can also be accepted as an image. The dimensions of the output image default to
  `image`'s dimensions. If height and/or width are passed, `image` is resized accordingly. If multiple
  control images are specified, they must be passed as a list such that each element of the list can be
  correctly batched for input.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for pytorch tensor would be `(B, 1, H, W)`, `(B,
  H, W)`, `(1, H, W)`, `(H, W)`. And for numpy array would be for `(B, H, W, 1)`, `(B, H, W)`, `(H, W,
  1)`, or `(H, W)`.
- **masked_image_latents** (`torch.Tensor`, `List[torch.Tensor]`) --
  `Tensor` representing an image batch to mask `image` generated by VAE. If not provided, the mask
  latents tensor will be generated by `mask_image`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxControlInpaintPipeline.__call__.example">

Examples:
```py
import torch
from diffusers import FluxControlInpaintPipeline
from diffusers.models.transformers import FluxTransformer2DModel
from transformers import T5EncoderModel
from diffusers.utils import load_image, make_image_grid
from image_gen_aux import DepthPreprocessor  # https://github.com/huggingface/image_gen_aux
from PIL import Image
import numpy as np

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    torch_dtype=torch.bfloat16,
)
# use the following lines if you have GPU memory constraints
# ---------------------------------------------------------------
transformer = FluxTransformer2DModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
# ---------------------------------------------------------------
pipe.to("cuda")

prompt = "a blue robot singing opera with human-like expressions"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

head_mask = np.zeros_like(image)
head_mask[65:580, 300:642] = 255
mask_image = Image.fromarray(head_mask)

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

output = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    num_inference_steps=30,
    strength=0.9,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
make_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save(
    "output.png"
)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxControlInpaintPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L589</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxControlInpaintPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L616</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxControlInpaintPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L576</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxControlInpaintPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L602</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
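
A brief sketch of how these memory toggles are typically used together; the checkpoint id matches the example above, and everything else is illustrative.

```py
import torch
from diffusers import FluxControlInpaintPipeline

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
).to("cuda")

pipe.enable_vae_slicing()   # decode the batch one image at a time
pipe.enable_vae_tiling()    # decode each image tile by tile to bound peak memory

# ... run pipe(...) on large images or batches as usual ...

pipe.disable_vae_slicing()  # restore single-pass batch decoding
pipe.disable_vae_tiling()   # restore untiled decoding
```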


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxControlInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py#L375</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxPipelineOutput[[diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput</name><anchor>diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_output.py#L12</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `torch.Tensor` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array or torch tensor of shape `(batch_size,
  height, width, num_channels)`. PIL images or numpy arrays represent the denoised images of the diffusion
  pipeline. Torch tensors can represent either the denoised images or the intermediate latents ready to be
  passed to the decoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Flux image generation pipelines.
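
A short hedged note on consuming this output, assuming `pipe` and its inputs were prepared as in the FluxControlInpaintPipeline example above:

```py
# With return_dict=True (the default), the pipeline returns a FluxPipelineOutput.
result = pipe(prompt=prompt, image=image, control_image=control_image, mask_image=mask_image)
result.images[0].save("inpainted.png")  # illustrative filename

# With return_dict=False, a plain tuple is returned; its first element is the list of images.
images = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    return_dict=False,
)[0]
```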




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/control_flux_inpaint.md" />

### Marigold Computer Vision
https://huggingface.co/docs/diffusers/main/api/pipelines/marigold.md

# Marigold Computer Vision

![marigold](https://marigoldmonodepth.github.io/images/teaser_collage_compressed.jpg)

Marigold was proposed in 
[Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation](https://huggingface.co/papers/2312.02145), 
a CVPR 2024 Oral paper by 
[Bingxin Ke](http://www.kebingxin.com/), 
[Anton Obukhov](https://www.obukhov.ai/), 
[Shengyu Huang](https://shengyuh.github.io/), 
[Nando Metzger](https://nandometzger.github.io/), 
[Rodrigo Caye Daudt](https://rcdaudt.github.io/), and 
[Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ&hl=en).
The core idea is to **repurpose the generative prior of Text-to-Image Latent Diffusion Models (LDMs) for traditional 
computer vision tasks**.
This approach was explored by fine-tuning Stable Diffusion for **Monocular Depth Estimation**, as demonstrated in the 
teaser above.

Marigold was later extended in the follow-up paper, 
[Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis](https://huggingface.co/papers/2312.02145), 
authored by 
[Bingxin Ke](http://www.kebingxin.com/), 
[Kevin Qu](https://www.linkedin.com/in/kevin-qu-b3417621b/?locale=en_US), 
[Tianfu Wang](https://tianfwang.github.io/), 
[Nando Metzger](https://nandometzger.github.io/), 
[Shengyu Huang](https://shengyuh.github.io/), 
[Bo Li](https://www.linkedin.com/in/bobboli0202/), 
[Anton Obukhov](https://www.obukhov.ai/), and 
[Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ&hl=en).
This work expanded Marigold to support new modalities such as **Surface Normals** and **Intrinsic Image Decomposition** 
(IID), introduced a training protocol for **Latent Consistency Models** (LCM), and demonstrated **High-Resolution** (HR) 
processing capability.

> [!TIP]
> The early Marigold models (`v1-0` and earlier) were optimized for best results with at least 10 inference steps.
> LCM models were later developed to enable high-quality inference in just 1 to 4 steps.
> Marigold models `v1-1` and later use the DDIM scheduler to achieve optimal 
> results in as few as 1 to 4 steps.

## Available Pipelines

Each pipeline is tailored for a specific computer vision task, processing an input RGB image and generating a 
corresponding prediction.
Currently, the following computer vision tasks are implemented:

| Pipeline                                                                                                                                          | Recommended Model Checkpoints                                                                                                                                                                           |                              Spaces (Interactive Apps)                               | Predicted Modalities                                                                                                                                                               |
|---------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [MarigoldDepthPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_depth.py)           | [prs-eth/marigold-depth-v1-1](https://huggingface.co/prs-eth/marigold-depth-v1-1)                                                                                                                       |          [Depth Estimation](https://huggingface.co/spaces/prs-eth/marigold)          | [Depth](https://en.wikipedia.org/wiki/Depth_map), [Disparity](https://en.wikipedia.org/wiki/Binocular_disparity)                                                                   |
| [MarigoldNormalsPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_normals.py)       | [prs-eth/marigold-normals-v1-1](https://huggingface.co/prs-eth/marigold-normals-v1-1)                                                                                                                   | [Surface Normals Estimation](https://huggingface.co/spaces/prs-eth/marigold-normals) | [Surface normals](https://en.wikipedia.org/wiki/Normal_mapping)                                                                                                                    |
| [MarigoldIntrinsicsPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_intrinsics.py) | [prs-eth/marigold-iid-appearance-v1-1](https://huggingface.co/prs-eth/marigold-iid-appearance-v1-1),<br>[prs-eth/marigold-iid-lighting-v1-1](https://huggingface.co/prs-eth/marigold-iid-lighting-v1-1) | [Intrinsic Image Decomposition](https://huggingface.co/spaces/prs-eth/marigold-iid)  | [Albedo](https://en.wikipedia.org/wiki/Albedo), [Materials](https://www.n.aiq3d.com/wiki/roughnessmetalnessao-map), [Lighting](https://en.wikipedia.org/wiki/Diffuse_reflection)   |

## Available Checkpoints

All original checkpoints are available under the [PRS-ETH](https://huggingface.co/prs-eth/) organization on Hugging Face.
They are designed for use with diffusers pipelines and the [original codebase](https://github.com/prs-eth/marigold), which can also be used to train 
new model checkpoints.
The following is a summary of the recommended checkpoints, all of which produce reliable results with 1 to 4 steps. 

| Checkpoint                                                                                          | Modality     | Comment                                                                                                                                                                              |
|-----------------------------------------------------------------------------------------------------|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [prs-eth/marigold-depth-v1-1](https://huggingface.co/prs-eth/marigold-depth-v1-1)                   | Depth        | Affine-invariant depth prediction assigns each pixel a value between 0 (near plane) and 1 (far plane), with both planes determined by the model during inference.                    |
| [prs-eth/marigold-normals-v1-1](https://huggingface.co/prs-eth/marigold-normals-v1-1)               | Normals      | The surface normals predictions are unit-length 3D vectors in the screen-space camera, with values in the range from -1 to 1.                                                        |
| [prs-eth/marigold-iid-appearance-v1-1](https://huggingface.co/prs-eth/marigold-iid-appearance-v1-1) | Intrinsics   | InteriorVerse decomposition is comprised of Albedo and two BRDF material properties: Roughness and Metallicity.                                                                      | 
| [prs-eth/marigold-iid-lighting-v1-1](https://huggingface.co/prs-eth/marigold-iid-lighting-v1-1)     | Intrinsics   | HyperSim decomposition of an image $I$ is comprised of Albedo $A$, Diffuse shading $S$, and Non-diffuse residual $R$: $I = A*S+R$. |

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff 
> between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to 
> efficiently load the same components into multiple pipelines. 
> Also, to learn more about reducing the memory usage of this pipeline, refer to the
> [Reduce memory usage](../../using-diffusers/svd#reduce-memory-usage) section.

> [!WARNING]
> Marigold pipelines were designed and tested with the scheduler embedded in the model checkpoint.
> The optimal number of inference steps varies by scheduler, with no universal value that works best across all cases.
> To accommodate this, the `num_inference_steps` parameter in the pipeline's `__call__` method defaults to `None` (see the 
> API reference).
> Unless set explicitly, it inherits the value from the `default_denoising_steps` field in the checkpoint configuration 
> file (`model_index.json`).
> This ensures high-quality predictions when invoking the pipeline with only the `image` argument.

See also Marigold [usage examples](../../using-diffusers/marigold_usage).
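
As a minimal sketch using the recommended depth checkpoint from the table above (the input image URL and output filename are illustrative):

```py
import torch
import diffusers
from diffusers.utils import load_image

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

# num_inference_steps is left unset so the checkpoint's default_denoising_steps is used.
image = load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
depth = pipe(image)

# Colorize the affine-invariant depth prediction (values in [0, 1]) for inspection.
vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("einstein_depth_colored.png")
```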

## Marigold Depth Prediction API[[diffusers.MarigoldDepthPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.MarigoldDepthPipeline</name><anchor>diffusers.MarigoldDepthPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_depth.py#L104</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_lcm.LCMScheduler]"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "prediction_type", "val": ": typing.Optional[str] = None"}, {"name": "scale_invariant", "val": ": typing.Optional[bool] = True"}, {"name": "shift_invariant", "val": ": typing.Optional[bool] = True"}, {"name": "default_denoising_steps", "val": ": typing.Optional[int] = None"}, {"name": "default_processing_resolution", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
  Conditional U-Net to denoise the depth latent, conditioned on image latent.
- **vae** (`AutoencoderKL`) --
  Variational Auto-Encoder (VAE) Model to encode and decode images and predictions to and from latent
  representations.
- **scheduler** (`DDIMScheduler` or `LCMScheduler`) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **text_encoder** (`CLIPTextModel`) --
  Text-encoder, for empty text embedding.
- **tokenizer** (`CLIPTokenizer`) --
  CLIP tokenizer.
- **prediction_type** (`str`, *optional*) --
  Type of predictions made by the model.
- **scale_invariant** (`bool`, *optional*) --
  A model property specifying whether the predicted depth maps are scale-invariant. This value must be set in
  the model config. When used together with the `shift_invariant=True` flag, the model is also called
  "affine-invariant". NB: overriding this value is not supported.
- **shift_invariant** (`bool`, *optional*) --
  A model property specifying whether the predicted depth maps are shift-invariant. This value must be set in
  the model config. When used together with the `scale_invariant=True` flag, the model is also called
  "affine-invariant". NB: overriding this value is not supported.
- **default_denoising_steps** (`int`, *optional*) --
  The minimum number of denoising diffusion steps that are required to produce a prediction of reasonable
  quality with the given model. This value must be set in the model config. When the pipeline is called
  without explicitly setting `num_inference_steps`, the default value is used. This is required to ensure
  reasonable results with various model flavors compatible with the pipeline, such as those relying on very
  short denoising schedules (`LCMScheduler`) and those with full diffusion schedules (`DDIMScheduler`).
- **default_processing_resolution** (`int`, *optional*) --
  The recommended value of the `processing_resolution` parameter of the pipeline. This value must be set in
  the model config. When the pipeline is called without explicitly setting `processing_resolution`, the
  default value is used. This is required to ensure reasonable results with various model flavors trained
  with varying optimal processing resolution values.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for monocular depth estimation using the Marigold method: https://marigoldmonodepth.github.io.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.MarigoldDepthPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_depth.py#L347</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "ensemble_size", "val": ": int = 1"}, {"name": "processing_resolution", "val": ": typing.Optional[int] = None"}, {"name": "match_input_resolution", "val": ": bool = True"}, {"name": "resample_method_input", "val": ": str = 'bilinear'"}, {"name": "resample_method_output", "val": ": str = 'bilinear'"}, {"name": "batch_size", "val": ": int = 1"}, {"name": "ensembling_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "latents", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor], NoneType] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": str = 'np'"}, {"name": "output_uncertainty", "val": ": bool = False"}, {"name": "output_latent", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`) --
  An input image or images used as an input for the depth estimation task. For
  arrays and tensors, the expected value range is between `[0, 1]`. Passing a batch of images is possible
  by providing a four-dimensional array or a tensor. Additionally, a list of images of two- or
  three-dimensional arrays or tensors can be passed. In the latter case, all list elements must have the
  same width and height.
- **num_inference_steps** (`int`, *optional*, defaults to `None`) --
  Number of denoising diffusion steps during inference. The default value `None` results in automatic
  selection.
- **ensemble_size** (`int`, defaults to `1`) --
  Number of ensemble predictions. Higher values yield measurable improvements in prediction quality at the
  cost of longer inference.
- **processing_resolution** (`int`, *optional*, defaults to `None`) --
  Effective processing resolution. When set to `0`, matches the larger input image dimension. This
  produces crisper predictions, but may also lead to the overall loss of global context. The default
  value `None` resolves to the optimal value from the model config.
- **match_input_resolution** (`bool`, *optional*, defaults to `True`) --
  When enabled, the output prediction is resized to match the input dimensions. When disabled, the longer
  side of the output will equal `processing_resolution`.
- **resample_method_input** (`str`, *optional*, defaults to `"bilinear"`) --
  Resampling method used to resize input images to `processing_resolution`. The accepted values are:
  `"nearest"`, `"nearest-exact"`, `"bilinear"`, `"bicubic"`, or `"area"`.
- **resample_method_output** (`str`, *optional*, defaults to `"bilinear"`) --
  Resampling method used to resize output predictions to match the input resolution. The accepted values
  are `"nearest"`, `"nearest-exact"`, `"bilinear"`, `"bicubic"`, or `"area"`.
- **batch_size** (`int`, *optional*, defaults to `1`) --
  Batch size; only matters when setting `ensemble_size` or passing a tensor of images.
- **ensembling_kwargs** (`dict`, *optional*, defaults to `None`) --
  Extra dictionary with arguments for precise ensembling control. The following options are available:
  - reduction (`str`, *optional*, defaults to `"median"`): Defines the ensembling function applied in
    every pixel location, can be either `"median"` or `"mean"`.
  - regularizer_strength (`float`, *optional*, defaults to `0.02`): Strength of the regularizer that
    pulls the aligned predictions to the unit range from 0 to 1.
  - max_iter (`int`, *optional*, defaults to `2`): Maximum number of alignment solver steps. See the
    `options` argument of the `scipy.optimize.minimize` function.
  - tol (`float`, *optional*, defaults to `1e-3`): Alignment solver tolerance. The solver stops when the
    tolerance is reached.
  - max_res (`int`, *optional*, defaults to `None`): Resolution at which the alignment is performed;
    `None` matches the `processing_resolution`.
- **latents** (`torch.Tensor`, or `List[torch.Tensor]`, *optional*, defaults to `None`) --
  Latent noise tensors to replace the random initialization. These can be taken from the previous
  function call's output.
- **generator** (`torch.Generator`, or `List[torch.Generator]`, *optional*, defaults to `None`) --
  Random number generator object to ensure reproducibility.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  Preferred format of the output's `prediction` and the optional `uncertainty` fields. The accepted
  values are: `"np"` (numpy array) or `"pt"` (torch tensor).
- **output_uncertainty** (`bool`, *optional*, defaults to `False`) --
  When enabled, the output's `uncertainty` field contains the predictive uncertainty map, provided that
  the `ensemble_size` argument is set to a value above 2.
- **output_latent** (`bool`, *optional*, defaults to `False`) --
  When enabled, the output's `latent` field contains the latent codes corresponding to the predictions
  within the ensemble. These codes can be saved, modified, and used for subsequent calls with the
  `latents` argument.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [MarigoldDepthOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldDepthOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[MarigoldDepthOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldDepthOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [MarigoldDepthOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldDepthOutput) is returned, otherwise a
`tuple` is returned where the first element is the prediction, the second element is the uncertainty
(or `None`), and the third is the latent (or `None`).</retdesc></docstring>

Function invoked when calling the pipeline.



<ExampleCodeBlock anchor="diffusers.MarigoldDepthPipeline.__call__.example">

Examples:
```py
>>> import diffusers
>>> import torch

>>> pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
...     "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
... ).to("cuda")

>>> image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
>>> depth = pipe(image)

>>> vis = pipe.image_processor.visualize_depth(depth.prediction)
>>> vis[0].save("einstein_depth.png")

>>> depth_16bit = pipe.image_processor.export_depth_to_16bit_png(depth.prediction)
>>> depth_16bit[0].save("einstein_depth_16bit.png")
```

</ExampleCodeBlock>
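The example above runs a single prediction. The sketch below, which assumes the same checkpoint, illustrates the ensembling-related arguments (`ensemble_size`, `ensembling_kwargs`, `output_uncertainty`, `output_latent`, `latents`); the specific values are illustrative rather than recommendations:

```py
import diffusers
import torch

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

# Ensembled prediction: several diffusion draws are aggregated per pixel ("median" or "mean").
depth = pipe(
    image,
    ensemble_size=5,  # more than 2 members also enables the uncertainty map
    ensembling_kwargs={"reduction": "median"},
    output_uncertainty=True,
    output_latent=True,  # keep the latent codes for reuse in a later call
    generator=torch.Generator("cuda").manual_seed(0),
)

vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("einstein_depth_ensembled.png")

# Reuse the latent codes from the previous call instead of fresh random noise.
depth_again = pipe(image, ensemble_size=5, latents=depth.latent)
```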







</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.marigold.MarigoldDepthOutput</name><anchor>diffusers.pipelines.marigold.MarigoldDepthOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_depth.py#L83</source><parameters>[{"name": "prediction", "val": ": typing.Union[numpy.ndarray, torch.Tensor]"}, {"name": "uncertainty", "val": ": typing.Union[NoneType, numpy.ndarray, torch.Tensor]"}, {"name": "latent", "val": ": typing.Optional[torch.Tensor]"}]</parameters><paramsdesc>- **prediction** (`np.ndarray`, `torch.Tensor`) --
  Predicted depth maps with values in the range [0, 1]. The shape is `numimages × 1 × height × width` for
  `torch.Tensor` or `numimages × height × width × 1` for `np.ndarray`.
- **uncertainty** (`None`, `np.ndarray`, `torch.Tensor`) --
  Uncertainty maps computed from the ensemble, with values in the range [0, 1]. The shape is `numimages × 1 ×
  height × width` for `torch.Tensor` or `numimages × height × width × 1` for `np.ndarray`.
- **latent** (`None`, `torch.Tensor`) --
  Latent features corresponding to the predictions, compatible with the `latents` argument of the pipeline.
  The shape is `numimages * numensemble × 4 × latentheight × latentwidth`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Marigold monocular depth prediction pipeline.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_depth</name><anchor>diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_depth</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/marigold_image_processing.py#L387</source><parameters>[{"name": "depth", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "val_min", "val": ": float = 0.0"}, {"name": "val_max", "val": ": float = 1.0"}, {"name": "color_map", "val": ": str = 'Spectral'"}]</parameters><paramsdesc>- **depth** (`Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], --
  List[torch.Tensor]]`): Depth maps.
- **val_min** (`float`, *optional*, defaults to `0.0`) -- Minimum value of the visualized depth range.
- **val_max** (`float`, *optional*, defaults to `1.0`) -- Maximum value of the visualized depth range.
- **color_map** (`str`, *optional*, defaults to `"Spectral"`) -- Color map used to convert a single-channel
  depth prediction into colored representation.</paramsdesc><paramgroups>0</paramgroups></docstring>

Visualizes depth maps, such as predictions of the `MarigoldDepthPipeline`.



Returns: `List[PIL.Image.Image]` with depth maps visualization.
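A minimal sketch of adjusting the visualized range and color map, assuming `pipe` and `depth` come from the `MarigoldDepthPipeline` example above; the values are illustrative:

```py
# Visualize only the near half of the unit depth range with the "Greys" matplotlib color map.
vis = pipe.image_processor.visualize_depth(
    depth.prediction, val_min=0.0, val_max=0.5, color_map="Greys"
)
vis[0].save("einstein_depth_near.png")
```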


</div>

## Marigold Normals Estimation API[[diffusers.MarigoldNormalsPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.MarigoldNormalsPipeline</name><anchor>diffusers.MarigoldNormalsPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_normals.py#L99</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_lcm.LCMScheduler]"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "prediction_type", "val": ": typing.Optional[str] = None"}, {"name": "use_full_z_range", "val": ": typing.Optional[bool] = True"}, {"name": "default_denoising_steps", "val": ": typing.Optional[int] = None"}, {"name": "default_processing_resolution", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
  Conditional U-Net to denoise the normals latent, conditioned on image latent.
- **vae** (`AutoencoderKL`) --
  Variational Auto-Encoder (VAE) Model to encode and decode images and predictions to and from latent
  representations.
- **scheduler** (`DDIMScheduler` or `LCMScheduler`) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **text_encoder** (`CLIPTextModel`) --
  Text-encoder, for empty text embedding.
- **tokenizer** (`CLIPTokenizer`) --
  CLIP tokenizer.
- **prediction_type** (`str`, *optional*) --
  Type of predictions made by the model.
- **use_full_z_range** (`bool`, *optional*) --
  Whether the normals predicted by this model utilize the full range of the Z dimension, or only its positive
  half.
- **default_denoising_steps** (`int`, *optional*) --
  The minimum number of denoising diffusion steps that are required to produce a prediction of reasonable
  quality with the given model. This value must be set in the model config. When the pipeline is called
  without explicitly setting `num_inference_steps`, the default value is used. This is required to ensure
  reasonable results with various model flavors compatible with the pipeline, such as those relying on very
  short denoising schedules (`LCMScheduler`) and those with full diffusion schedules (`DDIMScheduler`).
- **default_processing_resolution** (`int`, *optional*) --
  The recommended value of the `processing_resolution` parameter of the pipeline. This value must be set in
  the model config. When the pipeline is called without explicitly setting `processing_resolution`, the
  default value is used. This is required to ensure reasonable results with various model flavors trained
  with varying optimal processing resolution values.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for monocular normals estimation using the Marigold method: https://marigoldmonodepth.github.io.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.MarigoldNormalsPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_normals.py#L332</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "ensemble_size", "val": ": int = 1"}, {"name": "processing_resolution", "val": ": typing.Optional[int] = None"}, {"name": "match_input_resolution", "val": ": bool = True"}, {"name": "resample_method_input", "val": ": str = 'bilinear'"}, {"name": "resample_method_output", "val": ": str = 'bilinear'"}, {"name": "batch_size", "val": ": int = 1"}, {"name": "ensembling_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "latents", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor], NoneType] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": str = 'np'"}, {"name": "output_uncertainty", "val": ": bool = False"}, {"name": "output_latent", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`), --
  `List[torch.Tensor]`: An input image or images used as an input for the normals estimation task. For
  arrays and tensors, the expected value range is between `[0, 1]`. Passing a batch of images is possible
  by providing a four-dimensional array or a tensor. Additionally, a list of images of two- or
  three-dimensional arrays or tensors can be passed. In the latter case, all list elements must have the
  same width and height.
- **num_inference_steps** (`int`, *optional*, defaults to `None`) --
  Number of denoising diffusion steps during inference. The default value `None` results in automatic
  selection.
- **ensemble_size** (`int`, defaults to `1`) --
  Number of ensemble predictions. Higher values yield measurable improvements in prediction quality at the
  cost of longer inference.
- **processing_resolution** (`int`, *optional*, defaults to `None`) --
  Effective processing resolution. When set to `0`, matches the larger input image dimension. This
  produces crisper predictions, but may also lead to the overall loss of global context. The default
  value `None` resolves to the optimal value from the model config.
- **match_input_resolution** (`bool`, *optional*, defaults to `True`) --
  When enabled, the output prediction is resized to match the input dimensions. When disabled, the longer
  side of the output will equal `processing_resolution`.
- **resample_method_input** (`str`, *optional*, defaults to `"bilinear"`) --
  Resampling method used to resize input images to `processing_resolution`. The accepted values are:
  `"nearest"`, `"nearest-exact"`, `"bilinear"`, `"bicubic"`, or `"area"`.
- **resample_method_output** (`str`, *optional*, defaults to `"bilinear"`) --
  Resampling method used to resize output predictions to match the input resolution. The accepted values
  are `"nearest"`, `"nearest-exact"`, `"bilinear"`, `"bicubic"`, or `"area"`.
- **batch_size** (`int`, *optional*, defaults to `1`) --
  Batch size; only matters when setting `ensemble_size` or passing a tensor of images.
- **ensembling_kwargs** (`dict`, *optional*, defaults to `None`) --
  Extra dictionary with arguments for precise ensembling control. The following options are available:
  - reduction (`str`, *optional*, defaults to `"closest"`): Defines the ensembling function applied in
    every pixel location, can be either `"closest"` or `"mean"`.
- **latents** (`torch.Tensor`, *optional*, defaults to `None`) --
  Latent noise tensors to replace the random initialization. These can be taken from the previous
  function call's output.
- **generator** (`torch.Generator`, or `List[torch.Generator]`, *optional*, defaults to `None`) --
  Random number generator object to ensure reproducibility.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  Preferred format of the output's `prediction` and the optional `uncertainty` fields. The accepted
  values are: `"np"` (numpy array) or `"pt"` (torch tensor).
- **output_uncertainty** (`bool`, *optional*, defaults to `False`) --
  When enabled, the output's `uncertainty` field contains the predictive uncertainty map, provided that
  the `ensemble_size` argument is set to a value above 2.
- **output_latent** (`bool`, *optional*, defaults to `False`) --
  When enabled, the output's `latent` field contains the latent codes corresponding to the predictions
  within the ensemble. These codes can be saved, modified, and used for subsequent calls with the
  `latents` argument.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [MarigoldNormalsOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldNormalsOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[MarigoldNormalsOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldNormalsOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [MarigoldNormalsOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldNormalsOutput) is returned, otherwise a
`tuple` is returned where the first element is the prediction, the second element is the uncertainty
(or `None`), and the third is the latent (or `None`).</retdesc></docstring>

Function invoked when calling the pipeline.



<ExampleCodeBlock anchor="diffusers.MarigoldNormalsPipeline.__call__.example">

Examples:
```py
>>> import diffusers
>>> import torch

>>> pipe = diffusers.MarigoldNormalsPipeline.from_pretrained(
...     "prs-eth/marigold-normals-v1-1", variant="fp16", torch_dtype=torch.float16
... ).to("cuda")

>>> image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
>>> normals = pipe(image)

>>> vis = pipe.image_processor.visualize_normals(normals.prediction)
>>> vis[0].save("einstein_normals.png")
```

</ExampleCodeBlock>
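As with depth, predictions can be ensembled; the sketch below assumes the same checkpoint and shows the `"closest"` reduction documented above, with illustrative values:

```py
import diffusers
import torch

pipe = diffusers.MarigoldNormalsPipeline.from_pretrained(
    "prs-eth/marigold-normals-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

# Ensembled prediction; the accepted reduction values for normals are "closest" and "mean".
normals = pipe(
    image,
    ensemble_size=4,
    ensembling_kwargs={"reduction": "closest"},
    generator=torch.Generator("cuda").manual_seed(0),
)

vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("einstein_normals_ensembled.png")
```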







</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.marigold.MarigoldNormalsOutput</name><anchor>diffusers.pipelines.marigold.MarigoldNormalsOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_normals.py#L78</source><parameters>[{"name": "prediction", "val": ": typing.Union[numpy.ndarray, torch.Tensor]"}, {"name": "uncertainty", "val": ": typing.Union[NoneType, numpy.ndarray, torch.Tensor]"}, {"name": "latent", "val": ": typing.Optional[torch.Tensor]"}]</parameters><paramsdesc>- **prediction** (`np.ndarray`, `torch.Tensor`) --
  Predicted normals with values in the range [-1, 1]. The shape is `numimages × 3 × height × width` for
  `torch.Tensor` or `numimages × height × width × 3` for `np.ndarray`.
- **uncertainty** (`None`, `np.ndarray`, `torch.Tensor`) --
  Uncertainty maps computed from the ensemble, with values in the range [0, 1]. The shape is `numimages × 1 ×
  height × width` for `torch.Tensor` or `numimages × height × width × 1` for `np.ndarray`.
- **latent** (`None`, `torch.Tensor`) --
  Latent features corresponding to the predictions, compatible with the `latents` argument of the pipeline.
  The shape is `numimages * numensemble × 4 × latentheight × latentwidth`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Marigold monocular normals prediction pipeline.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_normals</name><anchor>diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_normals</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/marigold_image_processing.py#L488</source><parameters>[{"name": "normals", "val": ": typing.Union[numpy.ndarray, torch.Tensor, typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "flip_x", "val": ": bool = False"}, {"name": "flip_y", "val": ": bool = False"}, {"name": "flip_z", "val": ": bool = False"}]</parameters><paramsdesc>- **normals** (`Union[np.ndarray, torch.Tensor, List[np.ndarray], List[torch.Tensor]]`) --
  Surface normals.
- **flip_x** (`bool`, *optional*, defaults to `False`) -- Flips the X axis of the normals frame of reference.
  Default direction is right.
- **flip_y** (`bool`, *optional*, defaults to `False`) -- Flips the Y axis of the normals frame of reference.
  Default direction is top.
- **flip_z** (`bool`, *optional*, defaults to `False`) -- Flips the Z axis of the normals frame of reference.
  Default direction is facing the observer.</paramsdesc><paramgroups>0</paramgroups></docstring>

Visualizes surface normals, such as predictions of the `MarigoldNormalsPipeline`.



Returns: `List[PIL.Image.Image]` with surface normals visualization.
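A minimal sketch of flipping an axis during visualization, for example when a downstream tool expects a different normals frame of reference; `pipe` and `normals` are assumed to come from the `MarigoldNormalsPipeline` example above:

```py
# Flip the Y axis of the normals frame of reference before visualization.
vis = pipe.image_processor.visualize_normals(normals.prediction, flip_y=True)
vis[0].save("einstein_normals_flipped_y.png")
```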


</div>

## Marigold Intrinsic Image Decomposition API[[diffusers.MarigoldIntrinsicsPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.MarigoldIntrinsicsPipeline</name><anchor>diffusers.MarigoldIntrinsicsPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_intrinsics.py#L120</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_lcm.LCMScheduler]"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "prediction_type", "val": ": typing.Optional[str] = None"}, {"name": "target_properties", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "default_denoising_steps", "val": ": typing.Optional[int] = None"}, {"name": "default_processing_resolution", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
  Conditional U-Net to denoise the targets latent, conditioned on image latent.
- **vae** (`AutoencoderKL`) --
  Variational Auto-Encoder (VAE) Model to encode and decode images and predictions to and from latent
  representations.
- **scheduler** (`DDIMScheduler` or `LCMScheduler`) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **text_encoder** (`CLIPTextModel`) --
  Text-encoder, for empty text embedding.
- **tokenizer** (`CLIPTokenizer`) --
  CLIP tokenizer.
- **prediction_type** (`str`, *optional*) --
  Type of predictions made by the model.
- **target_properties** (`Dict[str, Any]`, *optional*) --
  Properties of the predicted modalities, such as `target_names`, a `List[str]` used to define the number,
  order and names of the predicted modalities, and any other metadata that may be required to interpret the
  predictions.
- **default_denoising_steps** (`int`, *optional*) --
  The minimum number of denoising diffusion steps that are required to produce a prediction of reasonable
  quality with the given model. This value must be set in the model config. When the pipeline is called
  without explicitly setting `num_inference_steps`, the default value is used. This is required to ensure
  reasonable results with various model flavors compatible with the pipeline, such as those relying on very
  short denoising schedules (`LCMScheduler`) and those with full diffusion schedules (`DDIMScheduler`).
- **default_processing_resolution** (`int`, *optional*) --
  The recommended value of the `processing_resolution` parameter of the pipeline. This value must be set in
  the model config. When the pipeline is called without explicitly setting `processing_resolution`, the
  default value is used. This is required to ensure reasonable results with various model flavors trained
  with varying optimal processing resolution values.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Intrinsic Image Decomposition (IID) using the Marigold method:
https://marigoldcomputervision.github.io.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.MarigoldIntrinsicsPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_intrinsics.py#L359</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "ensemble_size", "val": ": int = 1"}, {"name": "processing_resolution", "val": ": typing.Optional[int] = None"}, {"name": "match_input_resolution", "val": ": bool = True"}, {"name": "resample_method_input", "val": ": str = 'bilinear'"}, {"name": "resample_method_output", "val": ": str = 'bilinear'"}, {"name": "batch_size", "val": ": int = 1"}, {"name": "ensembling_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "latents", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor], NoneType] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": str = 'np'"}, {"name": "output_uncertainty", "val": ": bool = False"}, {"name": "output_latent", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`), --
  `List[torch.Tensor]`: An input image or images used as an input for the intrinsic decomposition task.
  For arrays and tensors, the expected value range is between `[0, 1]`. Passing a batch of images is
  possible by providing a four-dimensional array or a tensor. Additionally, a list of images of two- or
  three-dimensional arrays or tensors can be passed. In the latter case, all list elements must have the
  same width and height.
- **num_inference_steps** (`int`, *optional*, defaults to `None`) --
  Number of denoising diffusion steps during inference. The default value `None` results in automatic
  selection.
- **ensemble_size** (`int`, defaults to `1`) --
  Number of ensemble predictions. Higher values yield measurable improvements in prediction quality at the
  cost of longer inference.
- **processing_resolution** (`int`, *optional*, defaults to `None`) --
  Effective processing resolution. When set to `0`, matches the larger input image dimension. This
  produces crisper predictions, but may also lead to the overall loss of global context. The default
  value `None` resolves to the optimal value from the model config.
- **match_input_resolution** (`bool`, *optional*, defaults to `True`) --
  When enabled, the output prediction is resized to match the input dimensions. When disabled, the longer
  side of the output will equal `processing_resolution`.
- **resample_method_input** (`str`, *optional*, defaults to `"bilinear"`) --
  Resampling method used to resize input images to `processing_resolution`. The accepted values are:
  `"nearest"`, `"nearest-exact"`, `"bilinear"`, `"bicubic"`, or `"area"`.
- **resample_method_output** (`str`, *optional*, defaults to `"bilinear"`) --
  Resampling method used to resize output predictions to match the input resolution. The accepted values
  are `"nearest"`, `"nearest-exact"`, `"bilinear"`, `"bicubic"`, or `"area"`.
- **batch_size** (`int`, *optional*, defaults to `1`) --
  Batch size; only matters when setting `ensemble_size` or passing a tensor of images.
- **ensembling_kwargs** (`dict`, *optional*, defaults to `None`) --
  Extra dictionary with arguments for precise ensembling control. The following options are available:
  - reduction (`str`, *optional*, defaults to `"median"`): Defines the ensembling function applied in
    every pixel location, can be either `"median"` or `"mean"`.
- **latents** (`torch.Tensor`, *optional*, defaults to `None`) --
  Latent noise tensors to replace the random initialization. These can be taken from the previous
  function call's output.
- **generator** (`torch.Generator`, or `List[torch.Generator]`, *optional*, defaults to `None`) --
  Random number generator object to ensure reproducibility.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  Preferred format of the output's `prediction` and the optional `uncertainty` fields. The accepted
  values are: `"np"` (numpy array) or `"pt"` (torch tensor).
- **output_uncertainty** (`bool`, *optional*, defaults to `False`) --
  When enabled, the output's `uncertainty` field contains the predictive uncertainty map, provided that
  the `ensemble_size` argument is set to a value above 2.
- **output_latent** (`bool`, *optional*, defaults to `False`) --
  When enabled, the output's `latent` field contains the latent codes corresponding to the predictions
  within the ensemble. These codes can be saved, modified, and used for subsequent calls with the
  `latents` argument.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [MarigoldIntrinsicsOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldIntrinsicsOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[MarigoldIntrinsicsOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldIntrinsicsOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [MarigoldIntrinsicsOutput](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldIntrinsicsOutput) is returned, otherwise a
`tuple` is returned where the first element is the prediction, the second element is the uncertainty
(or `None`), and the third is the latent (or `None`).</retdesc></docstring>

Function invoked when calling the pipeline.



<ExampleCodeBlock anchor="diffusers.MarigoldIntrinsicsPipeline.__call__.example">

Examples:
```py
>>> import diffusers
>>> import torch

>>> pipe = diffusers.MarigoldIntrinsicsPipeline.from_pretrained(
...     "prs-eth/marigold-iid-appearance-v1-1", variant="fp16", torch_dtype=torch.float16
... ).to("cuda")

>>> image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
>>> intrinsics = pipe(image)

>>> vis = pipe.image_processor.visualize_intrinsics(intrinsics.prediction, pipe.target_properties)
>>> vis[0]["albedo"].save("einstein_albedo.png")
>>> vis[0]["roughness"].save("einstein_roughness.png")
>>> vis[0]["metallicity"].save("einstein_metallicity.png")
```

</ExampleCodeBlock>
<ExampleCodeBlock anchor="diffusers.MarigoldIntrinsicsPipeline.__call__.example-2">

```py
>>> import diffusers
>>> import torch

>>> pipe = diffusers.MarigoldIntrinsicsPipeline.from_pretrained(
...     "prs-eth/marigold-iid-lighting-v1-1", variant="fp16", torch_dtype=torch.float16
... ).to("cuda")

>>> image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
>>> intrinsics = pipe(image)

>>> vis = pipe.image_processor.visualize_intrinsics(intrinsics.prediction, pipe.target_properties)
>>> vis[0]["albedo"].save("einstein_albedo.png")
>>> vis[0]["shading"].save("einstein_shading.png")
>>> vis[0]["residual"].save("einstein_residual.png")
```

</ExampleCodeBlock>







</div></div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.marigold.MarigoldIntrinsicsOutput</name><anchor>diffusers.pipelines.marigold.MarigoldIntrinsicsOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_intrinsics.py#L96</source><parameters>[{"name": "prediction", "val": ": typing.Union[numpy.ndarray, torch.Tensor]"}, {"name": "uncertainty", "val": ": typing.Union[NoneType, numpy.ndarray, torch.Tensor]"}, {"name": "latent", "val": ": typing.Optional[torch.Tensor]"}]</parameters><paramsdesc>- **prediction** (`np.ndarray`, `torch.Tensor`) --
  Predicted image intrinsics with values in the range [0, 1]. The shape is `(numimages * numtargets) × 3 ×
  height × width` for `torch.Tensor` or `(numimages * numtargets) × height × width × 3` for `np.ndarray`,
  where `numtargets` corresponds to the number of predicted target modalities of the intrinsic image
  decomposition.
- **uncertainty** (`None`, `np.ndarray`, `torch.Tensor`) --
  Uncertainty maps computed from the ensemble, with values in the range [0, 1]. The shape is `(numimages *
  numtargets) × 3 × height × width` for `torch.Tensor` or `(numimages * numtargets) × height × width × 3` for
  `np.ndarray`.
- **latent** (`None`, `torch.Tensor`) --
  Latent features corresponding to the predictions, compatible with the `latents` argument of the pipeline.
  The shape is `(numimages * numensemble) × (numtargets * 4) × latentheight × latentwidth`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Marigold Intrinsic Image Decomposition pipeline.




</div>

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_intrinsics</name><anchor>diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_intrinsics</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/marigold_image_processing.py#L549</source><parameters>[{"name": "prediction", "val": ": typing.Union[numpy.ndarray, torch.Tensor, typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "target_properties", "val": ": typing.Dict[str, typing.Any]"}, {"name": "color_map", "val": ": typing.Union[str, typing.Dict[str, str]] = 'binary'"}]</parameters><paramsdesc>- **prediction** (`Union[np.ndarray, torch.Tensor, List[np.ndarray], List[torch.Tensor]]`) --
  Intrinsic image decomposition.
- **target_properties** (`Dict[str, Any]`) --
  Decomposition properties. Expected entries: `target_names: List[str]` and a dictionary with keys
  `prediction_space: str`, `sub_target_names: List[Union[str, Null]]` (must have 3 entries, null for
  missing modalities), `up_to_scale: bool`, one for each target and sub-target.
- **color_map** (`Union[str, Dict[str, str]]`, *optional*, defaults to `"binary"`) --
  Color map used to convert single-channel predictions into colored representations. When a dictionary
  is passed, each modality can be colored with its own color map.</paramsdesc><paramgroups>0</paramgroups></docstring>

Visualizes intrinsic image decomposition, such as predictions of the `MarigoldIntrinsicsPipeline`.



Returns: `List[Dict[str, PIL.Image.Image]]` with intrinsic image decomposition visualization.
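A minimal sketch of the dictionary form of `color_map`, assuming `pipe` and `intrinsics` come from the appearance example above; the dictionary keys shown (matching the single-channel target names of that checkpoint) and the color map choices are assumptions for illustration:

```py
# Use a different matplotlib color map per single-channel modality.
vis = pipe.image_processor.visualize_intrinsics(
    intrinsics.prediction,
    pipe.target_properties,
    color_map={"roughness": "viridis", "metallicity": "magma"},
)
vis[0]["roughness"].save("einstein_roughness_viridis.png")
```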


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/marigold.md" />

### Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis
https://huggingface.co/docs/diffusers/main/api/pipelines/kolors.md

# Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/kolors/kolors_header_collage.png)

Kolors is a large-scale text-to-image generation model based on latent diffusion, developed by [the Kuaishou Kolors team](https://github.com/Kwai-Kolors/Kolors). Trained on billions of text-image pairs, Kolors exhibits significant advantages over both open-source and closed-source models in visual quality, complex semantic accuracy, and text rendering for both Chinese and English characters. Furthermore, Kolors supports both Chinese and English inputs, demonstrating strong performance in understanding and generating Chinese-specific content. For more details, please refer to this [technical report](https://github.com/Kwai-Kolors/Kolors/blob/master/imgs/Kolors_paper.pdf).

The abstract from the technical report is:

*We present Kolors, a latent diffusion model for text-to-image synthesis, characterized by its profound understanding of both English and Chinese, as well as an impressive degree of photorealism. There are three key insights contributing to the development of Kolors. Firstly, unlike large language model T5 used in Imagen and Stable Diffusion 3, Kolors is built upon the General Language Model (GLM), which enhances its comprehension capabilities in both English and Chinese. Moreover, we employ a multimodal large language model to recaption the extensive training dataset for fine-grained text understanding. These strategies significantly improve Kolors’ ability to comprehend intricate semantics, particularly those involving multiple entities, and enable its advanced text rendering capabilities. Secondly, we divide the training of Kolors into two phases: the concept learning phase with broad knowledge and the quality improvement phase with specifically curated high-aesthetic data. Furthermore, we investigate the critical role of the noise schedule and introduce a novel schedule to optimize high-resolution image generation. These strategies collectively enhance the visual appeal of the generated high-resolution images. Lastly, we propose a category-balanced benchmark KolorsPrompts, which serves as a guide for the training and evaluation of Kolors. Consequently, even when employing the commonly used U-Net backbone, Kolors has demonstrated remarkable performance in human evaluations, surpassing the existing open-source models and achieving Midjourney-v6 level performance, especially in terms of visual appeal. We will release the code and weights of Kolors at <https://github.com/Kwai-Kolors/Kolors>, and hope that it will benefit future research and applications in the visual generation community.*

## Usage Example

```python
import torch

from diffusers import DPMSolverMultistepScheduler, KolorsPipeline

pipe = KolorsPipeline.from_pretrained("Kwai-Kolors/Kolors-diffusers", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    prompt='一张瓢虫的照片，微距，变焦，高质量，电影，拿着一个牌子，写着"可图"',
    negative_prompt="",
    guidance_scale=6.5,
    num_inference_steps=25,
).images[0]

image.save("kolors_sample.png")
```

### IP Adapter

Kolors needs a different IP Adapter to work, and it uses [Openai-CLIP-336](https://huggingface.co/openai/clip-vit-large-patch14-336) as an image encoder.

> [!TIP]
> Using an IP Adapter with Kolors requires more than 24GB of VRAM. To use it, we recommend using [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) on consumer GPUs.

> [!TIP]
> While Kolors is integrated in Diffusers, you need to load the image encoder from a specific revision to use the safetensors files. You can still use the main branch of the original repository if you're comfortable loading pickle checkpoints.

```python
import torch
from transformers import CLIPVisionModelWithProjection

from diffusers import DPMSolverMultistepScheduler, KolorsPipeline
from diffusers.utils import load_image

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "Kwai-Kolors/Kolors-IP-Adapter-Plus",
    subfolder="image_encoder",
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    revision="refs/pr/4",
)

pipe = KolorsPipeline.from_pretrained(
    "Kwai-Kolors/Kolors-diffusers", image_encoder=image_encoder, torch_dtype=torch.float16, variant="fp16"
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

pipe.load_ip_adapter(
    "Kwai-Kolors/Kolors-IP-Adapter-Plus",
    subfolder="",
    weight_name="ip_adapter_plus_general.safetensors",
    revision="refs/pr/4",
    image_encoder_folder=None,
)
pipe.enable_model_cpu_offload()

ipa_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/kolors/cat_square.png")

image = pipe(
    prompt="best quality, high quality",
    negative_prompt="",
    guidance_scale=6.5,
    num_inference_steps=25,
    ip_adapter_image=ipa_image,
).images[0]

image.save("kolors_ipa_sample.png")
```

## KolorsPipeline[[diffusers.KolorsPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KolorsPipeline</name><anchor>diffusers.KolorsPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kolors/pipeline_kolors.py#L124</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": ChatGLMModel"}, {"name": "tokenizer", "val": ": ChatGLMTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = False"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`ChatGLMModel`) --
  Frozen text-encoder. Kolors uses [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b).
- **tokenizer** (`ChatGLMTokenizer`) --
  Tokenizer of class
  [ChatGLMTokenizer](https://huggingface.co/THUDM/chatglm3-6b/blob/main/tokenization_chatglm.py).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `False`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `Kwai-Kolors/Kolors-diffusers`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Kolors.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.KolorsPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kolors/pipeline_kolors.py#L200</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **max_sequence_length** (`int`, *optional*, defaults to `256`) -- Maximum sequence length to use with the `prompt`.

Encodes the prompt into text encoder hidden states.
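A minimal sketch of pre-computing embeddings and feeding them back to the pipeline, assuming `pipe` is the `KolorsPipeline` from the usage example above; the return order shown (prompt, negative, pooled, negative pooled) mirrors the SDXL-style pipelines and should be verified against the source link above:

```py
# Pre-compute prompt embeddings once and reuse them across multiple generations.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a ladybug, macro, zoom, high quality",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="",
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    guidance_scale=6.5,
    num_inference_steps=25,
).images[0]
```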




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.KolorsPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kolors/pipeline_kolors.py#L601</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
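A minimal sketch of the documented return shape, assuming `pipe` is a loaded `KolorsPipeline`; this helper is typically only relevant for UNets configured with a time conditioning projection (an assumption here), and the values are illustrative:

```py
import torch

w = torch.tensor([3.5, 7.5])  # two guidance scales
emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
print(emb.shape)  # torch.Size([2, 256])
```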








</div></div>


## KolorsImg2ImgPipeline[[diffusers.KolorsImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KolorsImg2ImgPipeline</name><anchor>diffusers.KolorsImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kolors/pipeline_kolors_img2img.py#L143</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": ChatGLMModel"}, {"name": "tokenizer", "val": ": ChatGLMTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = False"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`ChatGLMModel`) --
  Frozen text-encoder. Kolors uses [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b).
- **tokenizer** (`ChatGLMTokenizer`) --
  Tokenizer of class
  [ChatGLMTokenizer](https://huggingface.co/THUDM/chatglm3-6b/blob/main/tokenization_chatglm.py).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `False`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `Kwai-Kolors/Kolors-diffusers`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Kolors.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
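
A minimal image-to-image sketch is shown below; the call signature (in particular `strength`) follows the SDXL-style image-to-image convention and is an assumption here, so check the `__call__` source for the exact arguments:

```py
import torch

from diffusers import KolorsImg2ImgPipeline
from diffusers.utils import load_image

pipe = KolorsImg2ImgPipeline.from_pretrained(
    "Kwai-Kolors/Kolors-diffusers", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/kolors/cat_square.png"
)

image = pipe(
    prompt="a watercolor painting of a cat",
    image=init_image,
    strength=0.6,  # assumed SDXL-style semantics: how strongly to transform the input image
    guidance_scale=6.5,
    num_inference_steps=25,
).images[0]

image.save("kolors_img2img_sample.png")
```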





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.KolorsImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kolors/pipeline_kolors_img2img.py#L220</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **max_sequence_length** (`int`, *optional*, defaults to `256`) -- Maximum sequence length to use with the `prompt`.

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.KolorsImg2ImgPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kolors/pipeline_kolors_img2img.py#L729</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>




<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/kolors.md" />

### Flux
https://huggingface.co/docs/diffusers/main/api/pipelines/flux.md

# Flux

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

Flux is a series of text-to-image generation models based on diffusion transformers. To know more about Flux, check out the original [blog post](https://blackforestlabs.ai/announcing-black-forest-labs/) by the creators of Flux, Black Forest Labs.

Original model checkpoints for Flux can be found [here](https://huggingface.co/black-forest-labs). Original inference code can be found [here](https://github.com/black-forest-labs/flux).

> [!TIP]
> Flux can be quite expensive to run on consumer hardware devices. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out [this section](https://huggingface.co/blog/sd3#memory-optimizations-for-sd3) for more details. Additionally, Flux can benefit from quantization for memory efficiency with a trade-off in inference latency. Refer to [this blog post](https://huggingface.co/blog/quanto-diffusers) to learn more.  For an exhaustive list of resources, check out [this gist](https://gist.github.com/sayakpaul/b664605caf0aa3bf8585ab109dd5ac9c).
>
> [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.

Flux comes in the following variants:

| model type | model id |
|:----------:|:--------:|
| Timestep-distilled | [`black-forest-labs/FLUX.1-schnell`](https://huggingface.co/black-forest-labs/FLUX.1-schnell) |
| Guidance-distilled | [`black-forest-labs/FLUX.1-dev`](https://huggingface.co/black-forest-labs/FLUX.1-dev) |
| Fill Inpainting/Outpainting (Guidance-distilled) | [`black-forest-labs/FLUX.1-Fill-dev`](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev) |
| Canny Control (Guidance-distilled) | [`black-forest-labs/FLUX.1-Canny-dev`](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev) |
| Depth Control (Guidance-distilled) | [`black-forest-labs/FLUX.1-Depth-dev`](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev) |
| Canny Control (LoRA) | [`black-forest-labs/FLUX.1-Canny-dev-lora`](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev-lora) |
| Depth Control (LoRA) | [`black-forest-labs/FLUX.1-Depth-dev-lora`](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev-lora) |
| Redux (Adapter) | [`black-forest-labs/FLUX.1-Redux-dev`](https://huggingface.co/black-forest-labs/FLUX.1-Redux-dev) |
| Kontext | [`black-forest-labs/FLUX.1-Kontext-dev`](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev) |

Each checkpoint has different usage, which we detail below.

### Timestep-distilled

* `max_sequence_length` cannot be more than 256.
* `guidance_scale` needs to be 0.
* As this is a timestep-distilled model, it benefits from fewer sampling steps.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
out = pipe(
    prompt=prompt,
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
out.save("image.png")
```

### Guidance-distilled

* The guidance-distilled variant takes about 50 sampling steps for good-quality generation.
* It doesn't have any limitations around the `max_sequence_length`.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

prompt = "a tiny astronaut hatching from an egg on the moon"
out = pipe(
    prompt=prompt,
    guidance_scale=3.5,
    height=768,
    width=1360,
    num_inference_steps=50,
).images[0]
out.save("image.png")
```

### Fill Inpainting/Outpainting

* Unlike regular inpainting pipelines, the Flux Fill pipeline does not require a `strength` input.
* It supports both inpainting and outpainting.

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/cup.png")
mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/cup_mask.png")

repo_id = "black-forest-labs/FLUX.1-Fill-dev"
pipe = FluxFillPipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16).to("cuda")

image = pipe(
    prompt="a white paper cup",
    image=image,
    mask_image=mask,
    height=1632,
    width=1232,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("output.png")
```

### Canny Control

**Note:** `black-forest-labs/FLUX.1-Canny-dev` is _not_ a [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) model. ControlNet models are a separate component from the UNet/Transformer whose residuals are added to the actual underlying model. Canny Control is an alternate architecture that achieves effectively the same results as a ControlNet model by channel-wise concatenating the control condition with the input and training the transformer to follow the condition as closely as possible, so that it learns structure control.

```python
# !pip install -U controlnet-aux
import torch
from controlnet_aux import CannyDetector
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16).to("cuda")

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = CannyDetector()
control_image = processor(control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024)

image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
image.save("output.png")
```

Canny Control is also possible with a LoRA variant of this condition. The usage is as follows:

```python
# !pip install -U controlnet-aux
import torch
from controlnet_aux import CannyDetector
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = CannyDetector()
control_image = processor(control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024)

image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
image.save("output.png")
```

### Depth Control

**Note:** `black-forest-labs/FLUX.1-Depth-dev` is _not_ a ControlNet model. [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) models are a separate component from the UNet/Transformer whose residuals are added to the actual underlying model. Depth Control is an alternate architecture that achieves effectively the same results as a ControlNet model by channel-wise concatenating the control condition with the input and training the transformer to follow the condition as closely as possible, so that it learns structure control.

```python
# !pip install git+https://github.com/huggingface/image_gen_aux
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image
from image_gen_aux import DepthPreprocessor

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16).to("cuda")

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=30,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```

Depth Control is also possible with a LoRA variant of this condition. The usage is as follows:

```python
# !pip install git+https://github.com/huggingface/image_gen_aux
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image
from image_gen_aux import DepthPreprocessor

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

image = pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=30,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```

### Redux

* The Flux Redux pipeline is an adapter for FLUX.1 base models. It can be used with both flux-dev and flux-schnell for image-to-image generation.
* You can first use the `FluxPriorReduxPipeline` to get the `prompt_embeds` and `pooled_prompt_embeds`, and then feed them into the `FluxPipeline` for image-to-image generation.
* When using `FluxPriorReduxPipeline` with a base pipeline, you can set `text_encoder=None` and `text_encoder_2=None` in the base pipeline to save VRAM.

```python
import torch
from diffusers import FluxPriorReduxPipeline, FluxPipeline
from diffusers.utils import load_image
device = "cuda"
dtype = torch.bfloat16


repo_redux = "black-forest-labs/FLUX.1-Redux-dev"
repo_base = "black-forest-labs/FLUX.1-dev" 
pipe_prior_redux = FluxPriorReduxPipeline.from_pretrained(repo_redux, torch_dtype=dtype).to(device)
pipe = FluxPipeline.from_pretrained(
    repo_base, 
    text_encoder=None,
    text_encoder_2=None,
    torch_dtype=torch.bfloat16
).to(device)

image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy/img5.png")
pipe_prior_output = pipe_prior_redux(image)
images = pipe(
    guidance_scale=2.5,
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(0),
    **pipe_prior_output,
).images
images[0].save("flux-redux.png")
```

### Kontext

Flux Kontext is a model that enables in-context control of the image generation process, allowing for editing, refinement, relighting, style transfer, character customization, and more.

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png").convert("RGB")
prompt = "Make Pikachu hold a sign that says 'Black Forest Labs is awesome', yarn art style, detailed, vibrant colors"
image = pipe(
    image=image,
    prompt=prompt,
    guidance_scale=2.5,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("flux-kontext.png")
```

Flux Kontext comes with an integrity safety checker, which should be run after the image generation step. To run the safety checker, install the official repository from [black-forest-labs/flux](https://github.com/black-forest-labs/flux) and add the following code:

```python
import numpy as np
import torch

from flux.content_filters import PixtralContentFilter

# ... pipeline invocation to generate images

integrity_checker = PixtralContentFilter(torch.device("cuda"))
image_ = np.array(image) / 255.0
image_ = 2 * image_ - 1
image_ = torch.from_numpy(image_).to("cuda", dtype=torch.float32).unsqueeze(0).permute(0, 3, 1, 2)
if integrity_checker.test_image(image_):
    raise ValueError("Your image has been flagged. Choose another prompt/image or try again.")
```

### Kontext Inpainting
`FluxKontextInpaintPipeline` enables image modification within a fixed mask region. It currently supports both text-based conditioning and image-reference conditioning.
<hfoptions id="kontext-inpaint">
<hfoption id="text-only">


```python
import torch
from diffusers import FluxKontextInpaintPipeline
from diffusers.utils import load_image

prompt = "Change the yellow dinosaur to green one"
img_url = (
    "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_input.jpeg?raw=true"
)
mask_url = (
    "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_mask.png?raw=true"
)

source = load_image(img_url)
mask = load_image(mask_url)

pipe = FluxKontextInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(prompt=prompt, image=source, mask_image=mask, strength=1.0).images[0]
image.save("kontext_inpainting_normal.png")
```
</hfoption>
<hfoption id="image conditioning">

```python
import torch
from diffusers import FluxKontextInpaintPipeline
from diffusers.utils import load_image

pipe = FluxKontextInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt = "Replace this ball"
img_url = "https://images.pexels.com/photos/39362/the-ball-stadion-football-the-pitch-39362.jpeg?auto=compress&cs=tinysrgb&dpr=1&w=500"
mask_url = "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/ball_mask.png?raw=true"
image_reference_url = "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTah3x6OL_ECMBaZ5ZlJJhNsyC-OSMLWAI-xw&s"

source = load_image(img_url)
mask = load_image(mask_url)
image_reference = load_image(image_reference_url)

mask = pipe.mask_processor.blur(mask, blur_factor=12)
image = pipe(
    prompt=prompt, image=source, mask_image=mask, image_reference=image_reference, strength=1.0
).images[0]
image.save("kontext_inpainting_ref.png")
```
</hfoption>
</hfoptions>

## Combining Flux Turbo LoRAs with Flux Control, Fill, and Redux

We can combine Flux Turbo LoRAs with Flux Control and other pipelines like Fill and Redux to enable few-step inference. The example below shows how to do that with the Flux Control LoRA for depth and a turbo LoRA from [`ByteDance/Hyper-SD`](https://hf.co/ByteDance/Hyper-SD).

```py
from diffusers import FluxControlPipeline
from image_gen_aux import DepthPreprocessor
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
import torch

control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
control_pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
control_pipe.set_adapters(["depth", "hyper-sd"], adapter_weights=[0.85, 0.125])
control_pipe.enable_model_cpu_offload()

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

image = control_pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=8,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```

## Note about `unload_lora_weights()` when using Flux LoRAs

When unloading the Control LoRA weights, call `pipe.unload_lora_weights(reset_to_overwritten_params=True)` to reset the `pipe.transformer` completely back to its original form. The resultant pipeline can then be used with methods like [DiffusionPipeline.from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe). More details about this argument are available in [this PR](https://github.com/huggingface/diffusers/pull/10397).
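
For illustration, here is a minimal sketch of this reset flow, reusing the depth Control LoRA from the examples above (the surrounding inference code is elided):

```py
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")

# ... run inference with the Control LoRA ...

# Reset `pipe.transformer` to its original, pre-LoRA configuration so the pipeline
# can be reused, for example with DiffusionPipeline.from_pipe().
pipe.unload_lora_weights(reset_to_overwritten_params=True)
```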

## IP-Adapter

> [!TIP]
> Check out [IP-Adapter](../../using-diffusers/ip_adapter) to learn more about how IP-Adapters work.

An IP-Adapter lets you prompt Flux with images, in addition to the text prompt. This is especially useful when describing complex concepts that are difficult to articulate through text alone and you have reference images.

```python
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flux_ip_adapter_input.jpg").resize((1024, 1024))

pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",
    weight_name="ip_adapter.safetensors",
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14"
)
pipe.set_ip_adapter_scale(1.0)

image = pipe(
    width=1024,
    height=1024,
    prompt="wearing sunglasses",
    negative_prompt="",
    true_cfg_scale=4.0,
    generator=torch.Generator().manual_seed(4444),
    ip_adapter_image=image,
).images[0]

image.save('flux_ip_adapter_output.jpg')
```

<div class="justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flux_ip_adapter_output.jpg"/>
    <figcaption class="mt-2 text-sm text-center text-gray-500">IP-Adapter examples with prompt "wearing sunglasses"</figcaption>
</div>

## Optimize

Flux is a very large model and requires ~50GB of RAM/VRAM to load all the modeling components. Enable some of the optimizations below to lower the memory requirements.

### Group offloading

[Group offloading](../../optimization/memory#group-offloading) lowers VRAM usage by offloading groups of internal layers rather than the whole model or weights. You need to use [apply_group_offloading()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_group_offloading) on all the model components of a pipeline. The `offload_type` parameter allows you to toggle between block and leaf-level offloading. Setting it to `leaf_level` offloads the lowest leaf-level parameters to the CPU instead of offloading at the module-level.

On CUDA devices that support asynchronous data streaming, set `use_stream=True` to overlap data transfer and computation to accelerate inference.

> [!TIP]
> It is possible to mix block and leaf-level offloading for different components in a pipeline.

```py
import torch
from diffusers import FluxPipeline
from diffusers.hooks import apply_group_offloading

model_id = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16
pipe = FluxPipeline.from_pretrained(
    model_id,
    torch_dtype=dtype,
)

apply_group_offloading(
    pipe.transformer,
    offload_type="leaf_level",
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    use_stream=True,
)
apply_group_offloading(
    pipe.text_encoder, 
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="leaf_level",
    use_stream=True,
)
apply_group_offloading(
    pipe.text_encoder_2, 
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="leaf_level",
    use_stream=True,
)
apply_group_offloading(
    pipe.vae, 
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="leaf_level",
    use_stream=True,
)

prompt="A cat wearing sunglasses and working as a lifeguard at pool."

generator = torch.Generator().manual_seed(181201)
image = pipe(
    prompt,
    width=576,
    height=1024,
    num_inference_steps=30,
    generator=generator
).images[0]
image.save("output.png")
```
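
As noted in the tip above, offloading granularities can be mixed per component. A rough sketch, swapping only the `pipe.vae` call from the example (`num_blocks_per_group=2` is an arbitrary illustrative value):

```py
# Block-level offloading for the VAE; the transformer keeps leaf-level offloading as before.
apply_group_offloading(
    pipe.vae,
    offload_device=torch.device("cpu"),
    onload_device=torch.device("cuda"),
    offload_type="block_level",
    num_blocks_per_group=2,  # illustrative value
)
```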

### Running FP16 inference

Flux can generate high-quality images in FP16 (for example, to accelerate inference on Turing/Volta GPUs), but it produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing the text encoders to run in FP32 removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.

FP16 inference code:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16) # can replace schnell with dev
# to run on low vram GPUs (i.e. between 4 and 32 GB VRAM)
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

pipe.to(torch.float16) # casting here instead of in the pipeline constructor because doing so in the constructor loads all models into CPU memory at once

prompt = "A cat holding a sign that says hello world"
out = pipe(
    prompt=prompt,
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
out.save("image.png")
```
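
The snippet above runs everything in FP16. To additionally force the text encoders to run in FP32 as described above, one possible sketch is to precompute the prompt embeddings in FP32 and cast them to FP16 for denoising. This is only an illustration; in particular, the `encode_prompt` return values assumed here, `(prompt_embeds, pooled_prompt_embeds, text_ids)`, reflect the current pipeline implementation and may change:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.float16)
# Keep only the text encoders in FP32 so their activations are not clipped.
pipe.text_encoder.to(torch.float32)
pipe.text_encoder_2.to(torch.float32)
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
# Encode the prompt while the text encoders are in FP32.
prompt_embeds, pooled_prompt_embeds, _ = pipe.encode_prompt(
    prompt=prompt, prompt_2=None, max_sequence_length=256
)

# Cast the embeddings to FP16 for the FP16 transformer and VAE.
out = pipe(
    prompt_embeds=prompt_embeds.to(torch.float16),
    pooled_prompt_embeds=pooled_prompt_embeds.to(torch.float16),
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
).images[0]
out.save("image.png")
```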

### Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [FluxPipeline](/docs/diffusers/main/en/api/pipelines/flux#diffusers.FluxPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    text_encoder_2=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt, guidance_scale=3.5, height=768, width=1360, num_inference_steps=50).images[0]
image.save("flux.png")
```

## Single File Loading for the `FluxTransformer2DModel`

The `FluxTransformer2DModel` supports loading checkpoints in the original format shipped by Black Forest Labs. This is also useful when trying to load finetunes or quantized versions of the models that have been published by the community.

> [!TIP]
> `FP8` inference can be brittle depending on the GPU type, CUDA version, and `torch` version that you are using. It is recommended that you use the `optimum-quanto` library in order to run FP8 inference on your machine.

The following example demonstrates how to run Flux with less than 16GB of VRAM.

First install `optimum-quanto`

```shell
pip install optimum-quanto
```

Then run the following example

```python
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from transformers import T5EncoderModel, CLIPTextModel
from optimum.quanto import freeze, qfloat8, quantize

bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=dtype)
quantize(transformer, weights=qfloat8)
freeze(transformer)

text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)

pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2

pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=3.5,
    output_type="pil",
    num_inference_steps=20,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]

image.save("flux-fp8-dev.png")
```

## FluxPipeline[[diffusers.FluxPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxPipeline</name><anchor>diffusers.FluxPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L147</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux pipeline for text-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L652</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **true_cfg_scale** (`float`, *optional*, defaults to 1.0) --
  True classifier-free guidance (guidance scale) is enabled when `true_cfg_scale` > 1 and
  `negative_prompt` is provided.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  Embedded guidance scale is enabled by setting `guidance_scale` > 1. Higher `guidance_scale` encourages
  a model to generate images more aligned with `prompt` at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale` > 1. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxPipeline

>>> pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
>>> image.save("flux.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L557</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L584</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L544</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L570</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
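
A short usage sketch (assuming a loaded `FluxPipeline` named `pipe`; each call is optional and independent):

```py
# Reduce VAE decode memory for large images or batches.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# Revert to single-step decoding when memory is not a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```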


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py#L311</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxImg2ImgPipeline[[diffusers.FluxImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxImg2ImgPipeline</name><anchor>diffusers.FluxImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L170</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux pipeline for image-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L734</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch

>>> from diffusers import FluxImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> device = "cuda"
>>> pipe = FluxImg2ImgPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
>>> pipe = pipe.to(device)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> init_image = load_image(url).resize((1024, 1024))

>>> prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

>>> images = pipe(
...     prompt=prompt, image=init_image, num_inference_steps=4, strength=0.95, guidance_scale=0.0
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L626</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxImg2ImgPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L655</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L612</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxImg2ImgPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L640</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
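The same idea applies when decoding high-resolution outputs. A hedged sketch, again assuming the `pipe`, `prompt`, and `url` objects from the img2img example above:

```py
>>> # Tile the VAE so a 2048x2048 img2img call decodes within limited memory.
>>> pipe.enable_vae_tiling()
>>> big_image = load_image(url).resize((2048, 2048))
>>> result = pipe(
...     prompt=prompt, image=big_image, num_inference_steps=4, strength=0.95, guidance_scale=0.0
... ).images[0]
>>> pipe.disable_vae_tiling()  # back to one-shot decoding
```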


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_img2img.py#L334</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be encoded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders.
- **device** (`torch.device`, *optional*) --
  The torch device on which to perform the encoding.
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>
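As a rough sketch, the embeddings returned by `encode_prompt` can be precomputed once and reused across calls via the `prompt_embeds` and `pooled_prompt_embeds` arguments of `__call__`. This assumes the `pipe`, `prompt`, and `init_image` objects from the example above, and that the method returns the prompt embeddings, pooled embeddings, and text ids as a tuple:

```py
>>> # Precompute text embeddings once, then reuse them for several img2img calls.
>>> prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
...     prompt=prompt, prompt_2=None, max_sequence_length=512
... )
>>> images = pipe(
...     prompt_embeds=prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     image=init_image,
...     num_inference_steps=4,
...     strength=0.95,
...     guidance_scale=0.0,
... ).images
```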





</div></div>

## FluxInpaintPipeline[[diffusers.FluxInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxInpaintPipeline</name><anchor>diffusers.FluxInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_inpaint.py#L166</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux pipeline for image inpainting.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_inpaint.py#L775</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or
  a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`,
  `(B, H, W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`,
  or `(H, W)`.
- **masked_image_latents** (`torch.Tensor`, `List[torch.Tensor]`) --
  `Tensor` representing masked image latents generated by the VAE. If not provided, the masked image
  latents will be generated from `image` and `mask_image`.
- **height** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of the margin in the crop to be applied to the image and mask image. If `None`, no crop is
  applied to the image and mask_image. If `padding_mask_crop` is not `None`, it will first find a
  rectangular region with the same aspect ratio as the image that contains all of the masked area, and
  then expand that region based on `padding_mask_crop`. The image and mask_image will then be cropped
  based on the expanded area before resizing to the original image size for inpainting. This is useful
  when the masked area is small while the image is large and contains information irrelevant for
  inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `negative_ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`. A usage sketch follows the example below.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = FluxInpaintPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
>>> source = load_image(img_url)
>>> mask = load_image(mask_url)
>>> image = pipe(prompt=prompt, image=source, mask_image=mask).images[0]
>>> image.save("flux_inpainting.png")
```

</ExampleCodeBlock>
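The `callback_on_step_end` and `callback_on_step_end_tensor_inputs` arguments described above can be used to observe intermediate state. A hedged sketch, reusing `pipe`, `prompt`, `source`, and `mask` from the example above; note that recent Diffusers releases expect the callback to return the (possibly modified) `callback_kwargs` dict:

```py
>>> # Log progress and inspect the latents tensor at the end of every denoising step.
>>> def log_step(pipeline, step, timestep, callback_kwargs):
...     latents = callback_kwargs["latents"]
...     print(f"step {step}, timestep {timestep}, latents shape {tuple(latents.shape)}")
...     return callback_kwargs
>>> image = pipe(
...     prompt=prompt,
...     image=source,
...     mask_image=mask,
...     callback_on_step_end=log_step,
...     callback_on_step_end_tensor_inputs=["latents"],
... ).images[0]
```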







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_inpaint.py#L337</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be encoded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders.
- **device** (`torch.device`, *optional*) --
  The torch device on which to perform the encoding.
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxControlNetInpaintPipeline[[diffusers.FluxControlNetInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlNetInpaintPipeline</name><anchor>diffusers.FluxControlNetInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py#L174</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel, typing.List[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel], diffusers.models.controlnets.controlnet_flux.FluxMultiControlNetModel]"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux controlnet pipeline for inpainting.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxControlNetInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py#L738</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_mode", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`.
- **image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`) --
  The image(s) to inpaint.
- **mask_image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`) --
  The mask image(s) to use for inpainting. White pixels in the mask will be repainted, while black pixels
  will be preserved.
- **masked_image_latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated masked image latents.
- **control_image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`) --
  The ControlNet input condition. Image to control the generation.
- **height** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1.
- **padding_mask_crop** (`int`, *optional*) --
  The size of the padding to use when cropping the mask.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598).
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_mode** (`int` or `List[int]`, *optional*) --
  The mode for the ControlNet. If multiple ControlNets are used, this should be a list.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original transformer.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or more [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to
  make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  Additional keyword arguments to be passed to the joint attention mechanism.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function.
- **max_sequence_length** (`int`, *optional*, defaults to 512) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxControlNetInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxControlNetInpaintPipeline
>>> from diffusers.models import FluxControlNetModel
>>> from diffusers.utils import load_image

>>> controlnet = FluxControlNetModel.from_pretrained(
...     "InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.float16
... )
>>> pipe = FluxControlNetInpaintPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-schnell", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> control_image = load_image(
...     "https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny-alpha/resolve/main/canny.jpg"
... )
>>> init_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
... )
>>> mask_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
... )

>>> prompt = "A girl holding a sign that says InstantX"
>>> image = pipe(
...     prompt,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
...     control_guidance_start=0.2,
...     control_guidance_end=0.8,
...     controlnet_conditioning_scale=0.7,
...     strength=0.7,
...     num_inference_steps=28,
...     guidance_scale=3.5,
... ).images[0]
>>> image.save("flux_controlnet_inpaint.png")
```

</ExampleCodeBlock>
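Because `controlnet` also accepts a list or a `FluxMultiControlNetModel` (see the class signature above), multiple control conditions can in principle be combined. A rough, untested sketch under the assumption that per-ControlNet inputs are passed as parallel lists; the second ControlNet and the `canny_image`/`depth_image` inputs here are hypothetical placeholders:

```py
>>> from diffusers import FluxMultiControlNetModel

>>> # Wrap two ControlNets; list entries line up one-to-one across the arguments below.
>>> controlnets = FluxMultiControlNetModel([controlnet_canny, controlnet_depth])
>>> pipe = FluxControlNetInpaintPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-schnell", controlnet=controlnets, torch_dtype=torch.float16
... ).to("cuda")
>>> image = pipe(
...     prompt,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=[canny_image, depth_image],
...     controlnet_conditioning_scale=[0.7, 0.5],
...     num_inference_steps=28,
... ).images[0]
```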







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxControlNetInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py#L346</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be encoded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders.
- **device** (`torch.device`, *optional*) --
  The torch device on which to perform the encoding.
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxControlNetImg2ImgPipeline[[diffusers.FluxControlNetImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlNetImg2ImgPipeline</name><anchor>diffusers.FluxControlNetImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_image_to_image.py#L172</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel, typing.List[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel], diffusers.models.controlnets.controlnet_flux.FluxMultiControlNetModel]"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux controlnet pipeline for image-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxControlNetImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_image_to_image.py#L634</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_mode", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`.
- **image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`) --
  The image(s) to modify with the pipeline.
- **control_image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`) --
  The ControlNet input condition. Image to control the generation.
- **height** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598).
- **control_mode** (`int` or `List[int]`, *optional*) --
  The mode for the ControlNet. If multiple ControlNets are used, this should be a list.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original transformer.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or more [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to
  make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  Additional keyword arguments to be passed to the joint attention mechanism.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function.
- **max_sequence_length** (`int`, *optional*, defaults to 512) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxControlNetImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetModel
>>> from diffusers.utils import load_image

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> controlnet = FluxControlNetModel.from_pretrained(
...     "InstantX/FLUX.1-dev-Controlnet-Canny-alpha", torch_dtype=torch.bfloat16
... )

>>> pipe = FluxControlNetImg2ImgPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-schnell", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> pipe.text_encoder.to(torch.float16)
>>> pipe.controlnet.to(torch.float16)
>>> pipe.to(device)

>>> control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Canny/resolve/main/canny.jpg")
>>> init_image = load_image(
...     "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
... )

>>> prompt = "A girl in city, 25 years old, cool, futuristic"
>>> image = pipe(
...     prompt,
...     image=init_image,
...     control_image=control_image,
...     control_guidance_start=0.2,
...     control_guidance_end=0.8,
...     controlnet_conditioning_scale=1.0,
...     strength=0.7,
...     num_inference_steps=2,
...     guidance_scale=3.5,
... ).images[0]
>>> image.save("flux_controlnet_img2img.png")
```

</ExampleCodeBlock>
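The `generator` argument makes a run reproducible. A small sketch, reusing `pipe`, `prompt`, `init_image`, and `control_image` from the example above:

```py
>>> # The same seed with the same inputs should reproduce the same image.
>>> generator = torch.Generator(device="cuda").manual_seed(0)
>>> image = pipe(
...     prompt,
...     image=init_image,
...     control_image=control_image,
...     strength=0.7,
...     num_inference_steps=2,
...     guidance_scale=3.5,
...     generator=generator,
... ).images[0]
```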







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxControlNetImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_image_to_image.py#L335</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be encoded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders.
- **device** (`torch.device`, *optional*) --
  The torch device on which to perform the encoding.
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxControlPipeline[[diffusers.FluxControlPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlPipeline</name><anchor>diffusers.FluxControlPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L160</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux pipeline for controllable text-to-image generation with image conditions.

Reference: https://bfl.ai/flux-1-tools





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxControlPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L635</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, --
  `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
  The control image that provides guidance to the `transformer` for generation. If the type is
  specified as `torch.Tensor`, it is passed as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.default_sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  Embedded guidance scale is enabled by setting `guidance_scale > 1`. A higher `guidance_scale` encourages
  the model to generate images that are more closely aligned with the prompt, at the expense of lower
  image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale > 1`. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxControlPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from controlnet_aux import CannyDetector
>>> from diffusers import FluxControlPipeline
>>> from diffusers.utils import load_image

>>> pipe = FluxControlPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
... ).to("cuda")

>>> prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
>>> control_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png"
... )

>>> processor = CannyDetector()
>>> control_image = processor(
...     control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024
... )

>>> image = pipe(
...     prompt=prompt,
...     control_image=control_image,
...     height=1024,
...     width=1024,
...     num_inference_steps=50,
...     guidance_scale=30.0,
... ).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>
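
The `callback_on_step_end` argument described above receives the pipeline, the step index, the timestep, and a dictionary of the tensors listed in `callback_on_step_end_tensor_inputs`, and must return that dictionary (optionally modified). A minimal sketch, reusing the Canny `control_image` prepared in the example above; the logging callback itself is only an illustration:

```py
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
).to("cuda")


def log_latents(pipeline, step, timestep, callback_kwargs):
    # Any tensor listed in `callback_on_step_end_tensor_inputs` is available here
    # and may be modified before being returned.
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latent norm = {latents.float().norm().item():.2f}")
    return callback_kwargs


image = pipe(
    prompt="A robot made of exotic candies and chocolates of different kinds.",
    control_image=control_image,  # Canny edge map prepared as in the example above
    num_inference_steps=50,
    guidance_scale=30.0,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```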







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxControlPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L508</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxControlPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L535</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxControlPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L495</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxControlPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L521</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
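
Both options are toggled directly on the pipeline instance and can be combined. A minimal sketch, assuming the same FLUX.1-Canny-dev checkpoint used in the example above:

```py
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Decode the latents in slices (helps with larger batch sizes) ...
pipe.enable_vae_slicing()
# ... and in tiles (helps with larger resolutions).
pipe.enable_vae_tiling()

# Restore single-pass decoding once memory is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```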


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxControlPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control.py#L325</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>
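
The embeddings returned by `encode_prompt` can be precomputed once and reused across several calls via the `prompt_embeds` and `pooled_prompt_embeds` arguments of `__call__`. A minimal sketch, reusing the Canny `control_image` prepared in the earlier example; unpacking the result into three values (`prompt_embeds`, `pooled_prompt_embeds`, `text_ids`) is an assumption based on the current Flux pipelines and may differ across versions:

```py
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Encode the prompt once (the three-value unpacking is an assumption, see above).
prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
    prompt="A robot made of exotic candies and chocolates of different kinds.",
    prompt_2=None,
    max_sequence_length=512,
)

# Reuse the precomputed embeddings instead of passing `prompt` again.
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    control_image=control_image,  # Canny edge map prepared as in the earlier example
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
```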





</div></div>

## FluxControlImg2ImgPipeline[[diffusers.FluxControlImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlImg2ImgPipeline</name><anchor>diffusers.FluxControlImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_img2img.py#L178</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux Control pipeline for image-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxControlImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_img2img.py#L634</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.6"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The control input condition to provide guidance for generation. If the type is specified as
  `torch.Tensor`, it is passed to the model as is. `PIL.Image.Image` can also be accepted as an image.
  The dimensions of the output image default to `image`'s dimensions. If height and/or width are passed,
  `image` is resized accordingly. If multiple control images are specified, they must be passed as a list
  such that each element of the list can be correctly batched for input to a single control model.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **strength** (`float`, *optional*, defaults to 0.6) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxControlImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from controlnet_aux import CannyDetector
>>> from diffusers import FluxControlImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = FluxControlImg2ImgPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
... ).to("cuda")

>>> prompt = "A robot made of exotic candies and chocolates of different kinds. Abstract background"
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/watercolor-painting.jpg"
... )
>>> control_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png"
... )

>>> processor = CannyDetector()
>>> control_image = processor(
...     control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024
... )

>>> image = pipe(
...     prompt=prompt,
...     image=image,
...     control_image=control_image,
...     strength=0.8,
...     height=1024,
...     width=1024,
...     num_inference_steps=50,
...     guidance_scale=30.0,
... ).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>
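
As the `strength` description above notes, only the tail end of the noise schedule is run: roughly `int(num_inference_steps * strength)` denoising steps are actually executed (the exact rounding is an implementation detail). A small illustration of that arithmetic:

```py
num_inference_steps = 50

for strength in (1.0, 0.8, 0.6, 0.3):
    # Approximate number of denoising steps that actually run; the scheduler's
    # exact rounding may differ slightly.
    effective_steps = int(num_inference_steps * strength)
    print(f"strength={strength}: ~{effective_steps} of {num_inference_steps} steps")
```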







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxControlImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_control_img2img.py#L335</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxPriorReduxPipeline[[diffusers.FluxPriorReduxPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxPriorReduxPipeline</name><anchor>diffusers.FluxPriorReduxPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_prior_redux.py#L84</source><parameters>[{"name": "image_encoder", "val": ": SiglipVisionModel"}, {"name": "feature_extractor", "val": ": SiglipImageProcessor"}, {"name": "image_embedder", "val": ": ReduxImageEncoder"}, {"name": "text_encoder", "val": ": CLIPTextModel = None"}, {"name": "tokenizer", "val": ": CLIPTokenizer = None"}, {"name": "text_encoder_2", "val": ": T5EncoderModel = None"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast = None"}]</parameters><paramsdesc>- **image_encoder** (`SiglipVisionModel`) --
  SIGLIP vision model to encode the input image.
- **feature_extractor** (`SiglipImageProcessor`) --
  Image processor for preprocessing images for the SIGLIP model.
- **image_embedder** (`ReduxImageEncoder`) --
  Redux image encoder to process the SIGLIP embeddings.
- **text_encoder** (`CLIPTextModel`, *optional*) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`, *optional*) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`, *optional*) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`, *optional*) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux Redux pipeline for image-to-image generation.

Reference: https://blackforestlabs.ai/flux-1-tools/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxPriorReduxPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_prior_redux.py#L371</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = 1.0"}, {"name": "pooled_prompt_embeds_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = 1.0"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. **Experimental feature**: to use this feature,
  make sure to explicitly load the text encoders into the pipeline. Prompts will be ignored if the text
  encoders are not loaded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPriorReduxPipelineOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPriorReduxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPriorReduxPipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxPriorReduxPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxPriorReduxPipeline, FluxPipeline
>>> from diffusers.utils import load_image

>>> device = "cuda"
>>> dtype = torch.bfloat16

>>> repo_redux = "black-forest-labs/FLUX.1-Redux-dev"
>>> repo_base = "black-forest-labs/FLUX.1-dev"
>>> pipe_prior_redux = FluxPriorReduxPipeline.from_pretrained(repo_redux, torch_dtype=dtype).to(device)
>>> pipe = FluxPipeline.from_pretrained(
...     repo_base, text_encoder=None, text_encoder_2=None, torch_dtype=torch.bfloat16
... ).to(device)

>>> image = load_image(
...     "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy/img5.png"
... )
>>> pipe_prior_output = pipe_prior_redux(image)
>>> images = pipe(
...     guidance_scale=2.5,
...     num_inference_steps=50,
...     generator=torch.Generator("cpu").manual_seed(0),
...     **pipe_prior_output,
... ).images
>>> images[0].save("flux-redux.png")
```

</ExampleCodeBlock>
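
To use the experimental `prompt` argument described above, the text encoders must be loaded into the Redux pipeline; in the example above they were dropped from the base pipeline entirely. A sketch of sharing them between the two pipelines, assuming the same checkpoints; treat the exact component wiring as an assumption:

```py
import torch
from diffusers import FluxPipeline, FluxPriorReduxPipeline
from diffusers.utils import load_image

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Reuse the base pipeline's text encoders so the Redux pipeline can embed prompts.
pipe_prior_redux = FluxPriorReduxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Redux-dev",
    text_encoder=pipe.text_encoder,
    tokenizer=pipe.tokenizer,
    text_encoder_2=pipe.text_encoder_2,
    tokenizer_2=pipe.tokenizer_2,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image(
    "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy/img5.png"
)
pipe_prior_output = pipe_prior_redux(image, prompt="a cozy cabin in a snowy forest")
images = pipe(guidance_scale=2.5, num_inference_steps=50, **pipe_prior_output).images
```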







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxPriorReduxPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_prior_redux.py#L292</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxFillPipeline[[diffusers.FluxFillPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxFillPipeline</name><anchor>diffusers.FluxFillPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L168</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux Fill pipeline for image inpainting/outpainting.

Reference: https://blackforestlabs.ai/flux-1-tools/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxFillPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L752</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "mask_image", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "masked_image_latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 30.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`,
  `(B, H, W)`, `(1, H, W)`, or `(H, W)`. For a numpy array, the expected shape would be `(B, H, W, 1)`,
  `(B, H, W)`, `(H, W, 1)`, or `(H, W)`.
- **masked_image_latents** (`torch.Tensor`, `List[torch.Tensor]`) --
  `Tensor` representing an image batch to mask `image`, generated by the VAE. If not provided, the mask
  latents tensor will be generated from `mask_image`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 30.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxFillPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxFillPipeline
>>> from diffusers.utils import load_image

>>> image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/cup.png")
>>> mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/cup_mask.png")

>>> pipe = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16)
>>> pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU

>>> image = pipe(
...     prompt="a white paper cup",
...     image=image,
...     mask_image=mask,
...     height=1632,
...     width=1232,
...     guidance_scale=30,
...     num_inference_steps=50,
...     max_sequence_length=512,
...     generator=torch.Generator("cpu").manual_seed(0),
... ).images[0]
>>> image.save("flux_fill.png")
```

</ExampleCodeBlock>
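
The `mask_image` convention described above (white pixels are repainted, black pixels are preserved) makes it easy to build masks programmatically. A minimal sketch that marks the right half of the cup image from the example above for repainting; the rectangle is arbitrary and only for illustration:

```py
from PIL import Image, ImageDraw
from diffusers.utils import load_image

image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/cup.png")

# Single-channel (luminance) mask: 0 keeps the original pixels, 255 marks the
# region the pipeline should repaint.
mask = Image.new("L", image.size, 0)
draw = ImageDraw.Draw(mask)
draw.rectangle([image.width // 2, 0, image.width, image.height], fill=255)

# `mask` can now be passed as `mask_image` to FluxFillPipeline.
```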







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxFillPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L645</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxFillPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L672</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxFillPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L632</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxFillPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L658</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxFillPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_fill.py#L420</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxKontextPipeline[[diffusers.FluxKontextPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxKontextPipeline</name><anchor>diffusers.FluxKontextPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L191</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux Kontext pipeline for image-to-image and text-to-image generation.

Reference: https://bfl.ai/announcements/flux-1-kontext-dev





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxKontextPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L751</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "max_area", "val": ": int = 1048576"}, {"name": "_auto_resize", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **true_cfg_scale** (`float`, *optional*, defaults to 1.0) --
  When greater than 1.0 and a `negative_prompt` is provided, true classifier-free guidance is enabled.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  Embedded guidance scale is enabled by setting `guidance_scale > 1`. A higher `guidance_scale` encourages
  the model to generate images more closely aligned with the prompt, at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale` > 1. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `negative_ip_adapter_image` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 512) --
  Maximum sequence length to use with the `prompt`.
- **max_area** (`int`, defaults to `1024 ** 2`) --
  The maximum area of the generated image in pixels. The height and width will be adjusted to fit this
  area while maintaining the aspect ratio.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.
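The `max_area` argument above caps the number of pixels rather than fixing a resolution: the requested height and width are rescaled to fit the area while keeping their ratio. A minimal sketch of that adjustment (the rounding to a multiple of 16 is an assumption for illustration; the pipeline rounds to its own latent granularity):

```py
import math

def fit_to_max_area(height, width, max_area=1024**2, multiple=16):
    # Rescale (height, width) so their product is close to `max_area`, keeping the
    # aspect ratio, then snap each side down to a multiple of `multiple`
    # (hypothetical granularity; the real pipeline uses its own patch/VAE factor).
    aspect_ratio = width / height
    width = round(math.sqrt(max_area * aspect_ratio)) // multiple * multiple
    height = round(math.sqrt(max_area / aspect_ratio)) // multiple * multiple
    return height, width

print(fit_to_max_area(1536, 1024))  # a 3:2 portrait request -> (1248, 832)
```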



<ExampleCodeBlock anchor="diffusers.FluxKontextPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import FluxKontextPipeline
>>> from diffusers.utils import load_image

>>> pipe = FluxKontextPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
... ).convert("RGB")
>>> prompt = "Make Pikachu hold a sign that says 'Black Forest Labs is awesome', yarn art style, detailed, vibrant colors"
>>> image = pipe(
...     image=image,
...     prompt=prompt,
...     guidance_scale=2.5,
...     generator=torch.Generator().manual_seed(42),
... ).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>
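The `callback_on_step_end` argument can be used to inspect or modify the tracked tensors between denoising steps. A minimal sketch reusing the checkpoint and input image from the example above; logging the latents' standard deviation is purely illustrative:

```py
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

def log_latents(pipeline, step, timestep, callback_kwargs):
    # callback_kwargs holds the tensors requested via callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(f"step {step}, timestep {timestep}: latents std = {latents.std().item():.4f}")
    # Return the (possibly modified) tensors so the pipeline continues with them.
    return callback_kwargs

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
).convert("RGB")

result = pipe(
    image=image,
    prompt="Make Pikachu hold a sign that says 'Black Forest Labs is awesome'",
    guidance_scale=2.5,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```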







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxKontextPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L627</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxKontextPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L656</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxKontextPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L613</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxKontextPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L641</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
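The four VAE memory toggles above are plain method calls on the pipeline and can be switched on or off between generations. A short usage sketch, reusing the checkpoint from the example above:

```py
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Trade a little speed for lower peak memory during VAE decoding.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run the pipeline ...

# Restore single-pass decoding.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```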


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxKontextPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext.py#L359</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>
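As a usage note, `encode_prompt` can be run ahead of time and its outputs passed to the pipeline call through `prompt_embeds` and `pooled_prompt_embeds`, which is the "pre-generated text embeddings" path described in the `__call__` parameters. A minimal sketch; the assumption that it returns `(prompt_embeds, pooled_prompt_embeds, text_ids)` mirrors the other Flux pipelines and should be verified against the source linked above:

```py
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumed return value; check against the source above.
prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
    prompt="turn the sky into a starry night",
    prompt_2=None,
    max_sequence_length=512,
)

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/yarn-art-pikachu.png"
).convert("RGB")

# Reuse the cached embeddings instead of re-encoding the prompt on every call.
edited = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    guidance_scale=2.5,
).images[0]
```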





</div></div>

## FluxKontextInpaintPipeline[[diffusers.FluxKontextInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxKontextInpaintPipeline</name><anchor>diffusers.FluxKontextInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L215</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux Kontext pipeline for text-guided image inpainting.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxKontextInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L940</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "image_reference", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "max_area", "val": ": int = 1048576"}, {"name": "_auto_resize", "val": ": bool = True"}]</parameters><paramsdesc>- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, 
`List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be inpainted (the parts of the image that
  are masked out with `mask_image` are repainted according to `prompt` and `image_reference`). For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if passing latents directly they are not encoded again.
- **image_reference** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point for the
  masked area. For both numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If
  it's a tensor or a list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is
  a numpy array or a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can
  also accept image latents as `image_reference`, but if passing latents directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`,
  `(B, H, W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`,
  or `(H, W)`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **true_cfg_scale** (`float`, *optional*, defaults to 1.0) --
  True classifier-free guidance (guidance scale) is enabled when `true_cfg_scale` > 1 and
  `negative_prompt` is provided.
- **height** (`int`, *optional*) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of the margin in the crop applied to the image and mask. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all of the masked area, and then expand that
  region based on `padding_mask_crop`. The image and mask_image will then be cropped based on the
  expanded region before resizing to the original image size for inpainting. This is useful when the
  masked area is small while the image is large and contains information irrelevant to inpainting, such
  as background (see the sketch after the examples below).
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  Embedded guidance scale is enabled by setting `guidance_scale` > 1. Higher `guidance_scale` encourages
  a model to generate images more aligned with `prompt` at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale` > 1. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional negative image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated negative image embeddings for IP-Adapter. It should be a list whose length equals the
  number of IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If
  not provided, embeddings are computed from the `negative_ip_adapter_image` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 512) --
  Maximum sequence length to use with the `prompt`.
- **max_area** (`int`, defaults to `1024 ** 2`) --
  The maximum area of the generated image in pixels. The height and width will be adjusted to fit this
  area while maintaining the aspect ratio.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



Examples:
Inpainting with text only:
<ExampleCodeBlock anchor="diffusers.FluxKontextInpaintPipeline.__call__.example">

```py
>>> import torch
>>> from diffusers import FluxKontextInpaintPipeline
>>> from diffusers.utils import load_image

>>> prompt = "Change the yellow dinosaur to green one"
>>> img_url = (
...     "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_input.jpeg?raw=true"
... )
>>> mask_url = (
...     "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_mask.png?raw=true"
... )

>>> source = load_image(img_url)
>>> mask = load_image(mask_url)

>>> pipe = FluxKontextInpaintPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = pipe(prompt=prompt, image=source, mask_image=mask, strength=1.0).images[0]
>>> image.save("kontext_inpainting_normal.png")
```

</ExampleCodeBlock>

Inpainting with image conditioning:
<ExampleCodeBlock anchor="diffusers.FluxKontextInpaintPipeline.__call__.example-2">

```py
>>> import torch
>>> from diffusers import FluxKontextInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = FluxKontextInpaintPipeline.from_pretrained(
...     "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> prompt = "Replace this ball"
>>> img_url = "https://images.pexels.com/photos/39362/the-ball-stadion-football-the-pitch-39362.jpeg?auto=compress&cs=tinysrgb&dpr=1&w=500"
>>> mask_url = (
...     "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/ball_mask.png?raw=true"
... )
>>> image_reference_url = (
...     "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTah3x6OL_ECMBaZ5ZlJJhNsyC-OSMLWAI-xw&s"
... )

>>> source = load_image(img_url)
>>> mask = load_image(mask_url)
>>> image_reference = load_image(image_reference_url)

>>> mask = pipe.mask_processor.blur(mask, blur_factor=12)
>>> image = pipe(
...     prompt=prompt, image=source, mask_image=mask, image_reference=image_reference, strength=1.0
... ).images[0]
>>> image.save("kontext_inpainting_ref.png")
```

</ExampleCodeBlock>
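Two of the arguments documented above are easy to miss in the long signature: `true_cfg_scale` (true classifier-free guidance, which also requires a `negative_prompt`) and `padding_mask_crop` (crop to the masked region before inpainting). The sketch below layers them onto the text-only example; the specific values are illustrative rather than recommended defaults:

```py
import torch
from diffusers import FluxKontextInpaintPipeline
from diffusers.utils import load_image

pipe = FluxKontextInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

source = load_image(
    "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_input.jpeg?raw=true"
)
mask = load_image(
    "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_mask.png?raw=true"
)

image = pipe(
    prompt="Change the yellow dinosaur to green one",
    negative_prompt="blurry, low quality",  # required for true classifier-free guidance
    true_cfg_scale=4.0,  # > 1 enables true CFG, per the parameter description above
    padding_mask_crop=32,  # crop around the masked region with a 32px margin before inpainting
    image=source,
    mask_image=mask,
    strength=1.0,
).images[0]
image.save("kontext_inpainting_cfg_crop.png")
```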







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.FluxKontextInpaintPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L701</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.FluxKontextInpaintPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L730</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.FluxKontextInpaintPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L687</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.FluxKontextInpaintPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L715</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxKontextInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_kontext_inpaint.py#L392</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/flux.md" />

### Allegro
https://huggingface.co/docs/diffusers/main/api/pipelines/allegro.md

# Allegro

[Allegro: Open the Black Box of Commercial-Level Video Generation Model](https://huggingface.co/papers/2410.15458) from RhymesAI, by Yuan Zhou, Qiuyue Wang, Yuxuan Cai, Huan Yang.

The abstract from the paper is:

*Significant advancements have been made in the field of video generation, with the open-source community contributing a wealth of research papers and tools for training high-quality models. However, despite these efforts, the available information and resources remain insufficient for achieving commercial-level performance. In this report, we open the black box and introduce Allegro, an advanced video generation model that excels in both quality and temporal consistency. We also highlight the current limitations in the field and present a comprehensive methodology for training high-performance, commercial-level video generation models, addressing key aspects such as data, model architecture, training pipeline, and evaluation. Our user study shows that Allegro surpasses existing open-source models and most commercial models, ranking just behind Hailuo and Kling. Code: https://github.com/rhymes-ai/Allegro , Model: https://huggingface.co/rhymes-ai/Allegro , Gallery: https://rhymes.ai/allegro_gallery .*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [AllegroPipeline](/docs/diffusers/main/en/api/pipelines/allegro#diffusers.AllegroPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, AllegroTransformer3DModel, AllegroPipeline
from diffusers.utils import export_to_video
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "rhymes-ai/Allegro",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = AllegroTransformer3DModel.from_pretrained(
    "rhymes-ai/Allegro",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = AllegroPipeline.from_pretrained(
    "rhymes-ai/Allegro",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = (
    "A seaside harbor with bright sunlight and sparkling seawater, with many boats in the water. From an aerial view, "
    "the boats vary in size and color, some moving and some stationary. Fishing boats in the water suggest that this "
    "location might be a popular spot for docking fishing boats."
)
video = pipeline(prompt, guidance_scale=7.5, max_sequence_length=512).frames[0]
export_to_video(video, "harbor.mp4", fps=15)
```

## AllegroPipeline[[diffusers.AllegroPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AllegroPipeline</name><anchor>diffusers.AllegroPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L144</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLAllegro"}, {"name": "transformer", "val": ": AllegroTransformer3DModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters><paramsdesc>- **vae** (`AutoencoderKLAllegro`) --
  Variational Auto-Encoder (VAE) Model to encode and decode video to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. PixArt-Alpha uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([AllegroTransformer3DModel](/docs/diffusers/main/en/api/models/allegro_transformer3d#diffusers.AllegroTransformer3DModel)) --
  A text conditioned `AllegroTransformer3DModel` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded video latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using Allegro.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AllegroPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L718</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 100"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "num_frames", "val": ": typing.Optional[int] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the video generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 100) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equally spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate videos that are closely
  linked to the text `prompt`, usually at the expense of lower video quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **num_frames** (`int`, *optional*, defaults to 88) --
  The number of frames in the generated video.
- **height** (`int`, *optional*) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*) --
  The width in pixels of the generated video.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an `~pipelines.allegro.pipeline_output.AllegroPipelineOutput` instead of a plain tuple.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **max_sequence_length** (`int`, defaults to `512`) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>[AllegroPipelineOutput](/docs/diffusers/main/en/api/pipelines/allegro#diffusers.pipelines.allegro.pipeline_output.AllegroPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AllegroPipelineOutput](/docs/diffusers/main/en/api/pipelines/allegro#diffusers.pipelines.allegro.pipeline_output.AllegroPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated videos.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AllegroPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import AutoencoderKLAllegro, AllegroPipeline
>>> from diffusers.utils import export_to_video

>>> vae = AutoencoderKLAllegro.from_pretrained("rhymes-ai/Allegro", subfolder="vae", torch_dtype=torch.float32)
>>> pipe = AllegroPipeline.from_pretrained("rhymes-ai/Allegro", vae=vae, torch_dtype=torch.bfloat16).to("cuda")
>>> pipe.enable_vae_tiling()

>>> prompt = (
...     "A seaside harbor with bright sunlight and sparkling seawater, with many boats in the water. From an aerial view, "
...     "the boats vary in size and color, some moving and some stationary. Fishing boats in the water suggest that this "
...     "location might be a popular spot for docking fishing boats."
... )
>>> video = pipe(prompt, guidance_scale=7.5, max_sequence_length=512).frames[0]
>>> export_to_video(video, "output.mp4", fps=15)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.AllegroPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L662</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.AllegroPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L689</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.AllegroPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L649</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.AllegroPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L675</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AllegroPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_allegro.py#L215</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  PixArt-Alpha, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Alpha, it should be the embeddings of the ""
  string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 512) -- Maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## AllegroPipelineOutput[[diffusers.pipelines.allegro.pipeline_output.AllegroPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.allegro.pipeline_output.AllegroPipelineOutput</name><anchor>diffusers.pipelines.allegro.pipeline_output.AllegroPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/allegro/pipeline_output.py#L12</source><parameters>[{"name": "frames", "val": ": typing.Union[torch.Tensor, numpy.ndarray, typing.List[typing.List[PIL.Image.Image]]]"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.

Output class for Allegro pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/allegro.md" />

### Consistency Models
https://huggingface.co/docs/diffusers/main/api/pipelines/consistency_models.md

# Consistency Models

Consistency Models were proposed in [Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.

The abstract from the paper is:

*Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.*

The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models), and additional checkpoints are available at [openai](https://huggingface.co/openai).

The pipeline was contributed by [dg845](https://github.com/dg845) and [ayushtues](https://huggingface.co/ayushtues). ❤️

## Tips

For an additional speed-up, use `torch.compile` to generate multiple images in <1 second:

```diff
  import torch
  from diffusers import ConsistencyModelPipeline

  device = "cuda"
  # Load the cd_bedroom256_lpips checkpoint.
  model_id_or_path = "openai/diffusers-cd_bedroom256_lpips"
  pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
  pipe.to(device)

+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

  # Multistep sampling
  # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo:
  # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83
  for _ in range(10):
      image = pipe(timesteps=[17, 0]).images[0]
      image.show()
```


## ConsistencyModelPipeline[[diffusers.ConsistencyModelPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ConsistencyModelPipeline</name><anchor>diffusers.ConsistencyModelPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py#L67</source><parameters>[{"name": "unet", "val": ": UNet2DModel"}, {"name": "scheduler", "val": ": CMStochasticIterativeScheduler"}]</parameters><paramsdesc>- **unet** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  A `UNet2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Currently only
  compatible with [CMStochasticIterativeScheduler](/docs/diffusers/main/en/api/schedulers/cm_stochastic_iterative#diffusers.CMStochasticIterativeScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for unconditional or class-conditional image generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.ConsistencyModelPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py#L171</source><parameters>[{"name": "batch_size", "val": ": int = 1"}, {"name": "class_labels", "val": ": typing.Union[torch.Tensor, typing.List[int], int, NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 1"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}]</parameters><paramsdesc>- **batch_size** (`int`, *optional*, defaults to 1) --
  The number of images to generate.
- **class_labels** (`torch.Tensor` or `List[int]` or `int`, *optional*) --
  Optional class labels for conditioning class-conditional consistency models. Not used if the model is
  not class-conditional.
- **num_inference_steps** (`int`, *optional*, defaults to 1) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equally spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>



<ExampleCodeBlock anchor="diffusers.ConsistencyModelPipeline.__call__.example">

Examples:
```py
>>> import torch

>>> from diffusers import ConsistencyModelPipeline

>>> device = "cuda"
>>> # Load the cd_imagenet64_l2 checkpoint.
>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2"
>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe.to(device)

>>> # Onestep Sampling
>>> image = pipe(num_inference_steps=1).images[0]
>>> image.save("cd_imagenet64_l2_onestep_sample.png")

>>> # Onestep sampling, class-conditional image generation
>>> # ImageNet-64 class label 145 corresponds to king penguins
>>> image = pipe(num_inference_steps=1, class_labels=145).images[0]
>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png")

>>> # Multistep sampling, class-conditional image generation
>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo:
>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77
>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png")
```

</ExampleCodeBlock>







</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/consistency_models.md" />

### Stable unCLIP
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_unclip.md

# Stable unCLIP

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

Stable unCLIP checkpoints are finetuned from [Stable Diffusion 2.1](./stable_diffusion/stable_diffusion_2) checkpoints to condition on CLIP image embeddings.
Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used
for text-guided image variation. When combined with an unCLIP prior, it can also be used for full text-to-image generation.

The abstract from the paper is:

*Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.*

## Tips

Stable unCLIP takes a `noise_level` as input during inference, which determines how much noise is added to the image embeddings. A higher `noise_level` increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (`noise_level = 0`).
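
For example, a non-zero `noise_level` can be passed directly at call time. The snippet below is only a minimal sketch using the image variation pipeline shown later on this page; the checkpoint and the particular `noise_level` value are illustrative:

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)

# noise_level=0 leaves the image embeddings untouched; larger values (below the noising
# scheduler's number of training timesteps) add noise and increase variation between samples
image = pipe(init_image, noise_level=500).images[0]
image.save("variation_noise_level_500.png")
```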

### Text-to-Image Generation
Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain's open source DALL-E 2 replication [Karlo](https://huggingface.co/kakaobrain/karlo-v1-alpha):

```python
import torch
from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline
from diffusers.models import PriorTransformer
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

prior_model_id = "kakaobrain/karlo-v1-alpha"
data_type = torch.float16
prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type)

prior_text_model_id = "openai/clip-vit-large-patch14"
prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id)
prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type)
prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler")
prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config)

stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small"

pipe = StableUnCLIPPipeline.from_pretrained(
    stable_unclip_model_id,
    torch_dtype=data_type,
    variant="fp16",
    prior_tokenizer=prior_tokenizer,
    prior_text_encoder=prior_text_model,
    prior=prior,
    prior_scheduler=prior_scheduler,
)

pipe = pipe.to("cuda")
wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular"

image = pipe(prompt=wave_prompt).images[0]
image
```
> [!WARNING]
> For text-to-image, we use `stabilityai/stable-diffusion-2-1-unclip-small` since it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. [stabilityai/stable-diffusion-2-1-unclip](https://hf.co/stabilityai/stable-diffusion-2-1-unclip) was trained on OpenCLIP ViT-H, so we don't recommend its use.

### Text-guided Image-to-Image Variation

```python
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image
import torch

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16"
)
pipe = pipe.to("cuda")

url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

images = pipe(init_image).images
images[0].save("variation_image.png")
```

Optionally, you can also pass a prompt to `pipe` such as:

```python
prompt = "A fantasy landscape, trending on artstation"

image = pipe(init_image, prompt=prompt).images[0]
image
```

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
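
As a quick illustration of the scheduler point, a compatible scheduler can be rebuilt from the existing scheduler's config without reloading the rest of the pipeline. This is only a sketch; whether a particular scheduler (here `DPMSolverMultistepScheduler`) is a good fit for a given checkpoint is something to verify yourself:

```python
import torch
from diffusers import DPMSolverMultistepScheduler, StableUnCLIPImg2ImgPipeline

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
)
# construct a different scheduler from the current scheduler's config and swap it in place
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
```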

## StableUnCLIPPipeline[[diffusers.StableUnCLIPPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableUnCLIPPipeline</name><anchor>diffusers.StableUnCLIPPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py#L70</source><parameters>[{"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "prior", "val": ": PriorTransformer"}, {"name": "prior_scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_normalizer", "val": ": StableUnCLIPImageNormalizer"}, {"name": "image_noising_scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "vae", "val": ": AutoencoderKL"}]</parameters><paramsdesc>- **prior_tokenizer** (`CLIPTokenizer`) --
  A `CLIPTokenizer`.
- **prior_text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen `CLIPTextModelWithProjection` text-encoder.
- **prior** ([PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer)) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **prior_scheduler** (`KarrasDiffusionSchedulers`) --
  Scheduler used in the prior denoising process.
- **image_normalizer** (`StableUnCLIPImageNormalizer`) --
  Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image
  embeddings after the noise has been applied.
- **image_noising_scheduler** (`KarrasDiffusionSchedulers`) --
  Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined
  by the `noise_level`.
- **tokenizer** (`CLIPTokenizer`) --
  A `CLIPTokenizer`.
- **text_encoder** (`CLIPTextModel`) --
  Frozen `CLIPTextModel` text-encoder.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) to denoise the encoded image latents.
- **scheduler** (`KarrasDiffusionSchedulers`) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using stable unCLIP.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableUnCLIPPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py#L645</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "guidance_scale", "val": ": float = 10.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "noise_level", "val": ": int = 0"}, {"name": "prior_num_inference_steps", "val": ": int = 25"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "prior_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 10.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **noise_level** (`int`, *optional*, defaults to `0`) --
  The amount of noise to add to the image embeddings. A higher `noise_level` increases the variance in
  the final un-noised images. See [StableUnCLIPPipeline.noise_image_embeddings()](/docs/diffusers/main/en/api/pipelines/stable_unclip#diffusers.StableUnCLIPPipeline.noise_image_embeddings) for more details.
- **prior_num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps in the prior denoising process. More denoising steps usually lead to a
  higher quality image at the expense of slower inference.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **prior_latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  embedding generation in the prior denoising process. Can be used to tweak the same generation with
  different prompts. If not provided, a latents tensor is generated by sampling using the supplied random
  `generator`.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableUnCLIPPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableUnCLIPPipeline

>>> pipe = StableUnCLIPPipeline.from_pretrained(
...     "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16
... )  # TODO update model path
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> images = pipe(prompt).images
>>> images[0].save("astronaut_horse.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableUnCLIPPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableUnCLIPPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableUnCLIPPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableUnCLIPPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
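
A minimal usage sketch (the checkpoint, the input image, and the batch size are only illustrative; the method itself takes no arguments):

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)

# decode the latents one image at a time to lower peak memory when generating several images
pipe.enable_vae_slicing()
images = pipe(init_image, num_images_per_prompt=4).images
```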


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableUnCLIPPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableUnCLIPPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableUnCLIPPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableUnCLIPPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableUnCLIPPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py#L297</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>noise_image_embeddings</name><anchor>diffusers.StableUnCLIPPipeline.noise_image_embeddings</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py#L599</source><parameters>[{"name": "image_embeds", "val": ": Tensor"}, {"name": "noise_level", "val": ": int"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters></docstring>

Add noise to the image embeddings. The amount of noise is controlled by a `noise_level` input. A higher
`noise_level` increases the variance in the final un-noised images.

The noise is applied in two ways:
1. A noise schedule is applied directly to the embeddings.
2. A vector of sinusoidal time embeddings is appended to the output.

In both cases, the amount of noise is controlled by the same `noise_level`.

The embeddings are normalized before the noise is applied and un-normalized after the noise is applied.


</div></div>

## StableUnCLIPImg2ImgPipeline[[diffusers.StableUnCLIPImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableUnCLIPImg2ImgPipeline</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py#L81</source><parameters>[{"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "image_normalizer", "val": ": StableUnCLIPImageNormalizer"}, {"name": "image_noising_scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "vae", "val": ": AutoencoderKL"}]</parameters><paramsdesc>- **feature_extractor** (`CLIPImageProcessor`) --
  Feature extractor for image pre-processing before being encoded.
- **image_encoder** (`CLIPVisionModelWithProjection`) --
  CLIP vision model for encoding images.
- **image_normalizer** (`StableUnCLIPImageNormalizer`) --
  Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image
  embeddings after the noise has been applied.
- **image_noising_scheduler** (`KarrasDiffusionSchedulers`) --
  Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined
  by the `noise_level`.
- **tokenizer** (`CLIPTokenizer`) --
  A `CLIPTokenizer`.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen [CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel) text-encoder.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) to denoise the encoded image latents.
- **scheduler** (`KarrasDiffusionSchedulers`) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image-to-image generation using stable unCLIP.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py#L624</source><parameters>[{"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "guidance_scale", "val": ": float = 10"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "noise_level", "val": ": int = 0"}, {"name": "image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, either `prompt_embeds` will be
  used or prompt is initialized to `""`.
- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  `Image` or tensor representing an image batch. The image is encoded to its CLIP embedding which the
  `unet` is conditioned on. The image is _not_ encoded by the `vae` and then used as the latents in the
  denoising process like it is in the standard Stable Diffusion text-guided image variation process.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 10.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **noise_level** (`int`, *optional*, defaults to `0`) --
  The amount of noise to add to the image embeddings. A higher `noise_level` increases the variance in
  the final un-noised images. See [StableUnCLIPPipeline.noise_image_embeddings()](/docs/diffusers/main/en/api/pipelines/stable_unclip#diffusers.StableUnCLIPPipeline.noise_image_embeddings) for more details.
- **image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated CLIP embeddings to condition the `unet` on. These latents are not used in the denoising
  process. If you want to provide pre-generated latents, pass them to `__call__` as `latents`.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableUnCLIPImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from io import BytesIO

>>> from diffusers import StableUnCLIPImg2ImgPipeline

>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))

>>> prompt = "A fantasy landscape, trending on artstation"

>>> images = pipe(init_image, prompt).images
>>> images[0].save("fantasy_landscape.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableUnCLIPImg2ImgPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableUnCLIPImg2ImgPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py#L259</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
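
A minimal sketch of pre-computing embeddings with this method, assuming it returns a `(prompt_embeds, negative_prompt_embeds)` pair as in the other Stable Diffusion pipelines; the checkpoint and prompts are only illustrative:

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16
).to("cuda")

# encode the text once so the embeddings can be reused across several calls
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A fantasy landscape, trending on artstation",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)
```

The resulting tensors can then be passed back to the pipeline call as `prompt_embeds` and `negative_prompt_embeds` instead of raw text.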




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>noise_image_embeddings</name><anchor>diffusers.StableUnCLIPImg2ImgPipeline.noise_image_embeddings</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py#L578</source><parameters>[{"name": "image_embeds", "val": ": Tensor"}, {"name": "noise_level", "val": ": int"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters></docstring>

Add noise to the image embeddings. The amount of noise is controlled by a `noise_level` input. A higher
`noise_level` increases the variance in the final un-noised images.

The noise is applied in two ways:
1. A noise schedule is applied directly to the embeddings.
2. A vector of sinusoidal time embeddings is appended to the output.

In both cases, the amount of noise is controlled by the same `noise_level`.

The embeddings are normalized before the noise is applied and un-normalized after the noise is applied.
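
A rough sketch of calling the method directly; the random tensor stands in for a real CLIP image embedding (which would normally come from `pipe.image_encoder`), and the `noise_level` value is arbitrary:

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16
).to("cuda")

# stand-in for a CLIP image embedding of the right width
embed_dim = pipe.image_encoder.config.projection_dim
image_embeds = torch.randn(1, embed_dim, device="cuda", dtype=torch.float16)

# higher noise_level -> more noise added before the sinusoidal time embedding is appended
noisy_embeds = pipe.noise_image_embeddings(image_embeds, noise_level=200)
print(noisy_embeds.shape)  # wider than the input because the time embedding is concatenated
```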


</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_unclip.md" />

### ControlNetUnion
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_union.md

# ControlNetUnion

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

ControlNetUnionModel is an implementation of ControlNet for Stable Diffusion XL.

The ControlNet model was introduced in [ControlNetPlus](https://github.com/xinsir6/ControlNetPlus) by xinsir6. It supports multiple conditioning inputs without increasing computation.

*We design a new architecture that can support 10+ control types in condition text-to-image generation and can generate high resolution images visually comparable with midjourney. The network is based on the original ControlNet architecture, we propose two new modules to: 1 Extend the original ControlNet to support different image conditions using the same network parameter. 2 Support multiple conditions input without increasing computation offload, which is especially important for designers who want to edit image in detail, different conditions use the same condition encoder, without adding extra computations or parameters.*
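
A minimal loading sketch is shown below. The checkpoint names and the `control_mode` index are assumptions; consult the ControlNetUnion checkpoint card for the exact mapping of control types to indices:

```python
import torch
from diffusers import ControlNetUnionModel, StableDiffusionXLControlNetUnionPipeline
from diffusers.utils import load_image

controlnet = ControlNetUnionModel.from_pretrained(
    "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetUnionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# each preprocessed conditioning image is paired with a control_mode index
depth_image = load_image("path/to/depth_map.png")  # hypothetical, pre-computed depth map
image = pipe(
    "a futuristic city at dusk",
    control_image=[depth_image],
    control_mode=[1],  # assumed index for depth; check the checkpoint card
).images[0]
```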


## StableDiffusionXLControlNetUnionPipeline[[diffusers.StableDiffusionXLControlNetUnionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetUnionPipeline</name><anchor>diffusers.StableDiffusionXLControlNetUnionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl.py#L178</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel, typing.List[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel], typing.Tuple[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel], diffusers.models.controlnets.multicontrolnet_union.MultiControlNetUnionModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **text_encoder_2** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Second frozen text-encoder
  ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **tokenizer_2** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetUnionModel](/docs/diffusers/main/en/api/models/controlnet_union#diffusers.ControlNetUnionModel)) --
  Provides additional conditioning to the `unet` during the denoising process.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark](https://github.com/ShieldMnt/invisible-watermark/) library to
  watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no
  watermarker is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
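
A minimal sketch of how these loading methods are typically combined with the pipeline. The ControlNet checkpoint matches the example further below; the LoRA repository id is a placeholder, and the IP-Adapter weights are the commonly used `h94/IP-Adapter` SDXL files.

```py
import torch
from diffusers import StableDiffusionXLControlNetUnionPipeline, ControlNetUnionModel

# Load the union ControlNet and the SDXL base pipeline.
controlnet = ControlNetUnionModel.from_pretrained(
    "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetUnionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# StableDiffusionXLLoraLoaderMixin -- "your-username/sdxl-lora" is a placeholder repo id.
pipe.load_lora_weights("your-username/sdxl-lora")

# IPAdapterMixin -- load SDXL IP-Adapter weights and set their influence.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)
```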





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetUnionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl.py#L985</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_mode", "val": ": typing.Union[int, typing.List[int], typing.List[typing.List[int]], NoneType] = None"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": 
"callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **control_image** (`PipelineImageInput` or `List[PipelineImageInput]`, *optional*) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image defaults to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
  and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, pooled text embeddings are generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
  weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_mode** (`int` or `List[int]` or `List[List[int]]`, *optional*) --
  The control condition types for the ControlNet. See the ControlNet's model card for information on the
  available control modes. If multiple ControlNets are specified in `init`, `control_mode` should be a list
  where each ControlNet has its corresponding list of control modes. It should reflect the order of
  conditions in `control_image`.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned containing the output images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetUnionPipeline.__call__.example">

Examples:
```py
>>> # !pip install controlnet_aux
>>> from controlnet_aux import LineartAnimeDetector
>>> from diffusers import StableDiffusionXLControlNetUnionPipeline, ControlNetUnionModel, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import torch

>>> prompt = "A cat"
>>> # download an image
>>> image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
... ).resize((1024, 1024))
>>> # initialize the models and pipeline
>>> controlnet = ControlNetUnionModel.from_pretrained(
...     "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16
... )
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> pipe = StableDiffusionXLControlNetUnionPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     controlnet=controlnet,
...     vae=vae,
...     torch_dtype=torch.float16,
...     variant="fp16",
... )
>>> pipe.enable_model_cpu_offload()
>>> # prepare image
>>> processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
>>> controlnet_img = processor(image, output_type="pil")
>>> # generate image
>>> image = pipe(prompt, control_image=[controlnet_img], control_mode=[3], height=1024, width=1024).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetUnionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl.py#L293</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
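
As a sketch, the embeddings returned by `encode_prompt` can be precomputed once and reused across several calls by passing them through `prompt_embeds` and the related arguments. Here `pipe` and `controlnet_img` are assumed to be set up as in the `__call__` example above.

```py
# Precompute the prompt embeddings once.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="A cat",
    negative_prompt="blurry, low quality",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# Reuse the embeddings instead of passing `prompt`/`negative_prompt` strings.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    control_image=[controlnet_img],
    control_mode=[3],
).images[0]
```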




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLControlNetUnionPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl.py#L924</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
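
A small illustrative sketch of calling this helper directly. In normal use the pipeline computes this embedding internally when the UNet is configured with a `time_cond_proj_dim` (e.g. guidance-distilled UNets); `pipe` is assumed to be an already-loaded pipeline.

```py
import torch

# Embed a batch of guidance scale values into fixed-size vectors.
w = torch.tensor([5.0, 7.5])  # one guidance scale per item in the batch
emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256, dtype=torch.float16)
print(emb.shape)  # torch.Size([2, 256])
```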








</div></div>

## StableDiffusionXLControlNetUnionImg2ImgPipeline[[diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline</name><anchor>diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl_img2img.py#L192</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel, typing.List[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel], typing.Tuple[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel], diffusers.models.controlnets.multicontrolnet_union.MultiControlNetUnionModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **controlnet** ([ControlNetUnionModel](/docs/diffusers/main/en/api/models/controlnet_union#diffusers.ControlNetUnionModel)) --
  Provides additional conditioning to the unet during the denoising process.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the
  config of `stabilityai/stable-diffusion-xl-refiner-1.0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be forced to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
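
A minimal single-pass sketch, simpler than the tiled-upscale example under `__call__` below. The checkpoints match that example; the control mode index `6` is carried over from it and depends on the ControlNet checkpoint, so consult its model card.

```py
import torch
from diffusers import StableDiffusionXLControlNetUnionImg2ImgPipeline, ControlNetUnionModel, AutoencoderKL
from diffusers.utils import load_image

controlnet = ControlNetUnionModel.from_pretrained(
    "brad-twinkl/controlnet-union-sdxl-1.0-promax", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetUnionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
).resize((1024, 1024))

# `strength` controls how much of `init_image` is preserved; roughly
# int(num_inference_steps * strength) denoising steps are actually run.
image = pipe(
    "A cat",
    image=init_image,
    control_image=[init_image],
    control_mode=[6],  # assumed tile/blur mode as in the example below; check the model card
    strength=0.5,
    num_inference_steps=50,
).images[0]
```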





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl_img2img.py#L1078</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 0.8"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_mode", "val": ": typing.Union[int, typing.List[int], typing.List[typing.List[int]], NoneType] = None"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": 
"aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The initial image to be used as the starting point for the image generation process. Can also accept
  image latents as `image`; if passing latents directly, they will not be encoded again.
- **control_image** (`PipelineImageInput` or `List[PipelineImageInput]`, *optional*) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image defaults to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to the size of control_image) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to the size of control_image) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/) (`PIL.Image.Image`) or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 0.8) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  In this mode, the ControlNet encoder tries its best to recognize the content of the input image even if
  you remove all prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_mode** (`int` or `List[int]` or `List[List[int]]`, *optional*) --
  The control condition types for the ControlNet. See the ControlNet's model card for information on the
  available control modes. If multiple ControlNets are specified in `init`, `control_mode` should be a list
  where each ControlNet has its corresponding list of control modes. It should reflect the order of
  conditions in `control_image`.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) if `return_dict` is True, otherwise a `tuple`
containing the output images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline.__call__.example">

Examples:
```py
# !pip install controlnet_aux
from diffusers import (
    StableDiffusionXLControlNetUnionImg2ImgPipeline,
    ControlNetUnionModel,
    AutoencoderKL,
)
from diffusers.utils import load_image
import torch
from PIL import Image
import numpy as np

prompt = "A cat"
# download an image
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)
# initialize the models and pipeline
controlnet = ControlNetUnionModel.from_pretrained(
    "brad-twinkl/controlnet-union-sdxl-1.0-promax", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetUnionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
# `enable_model_cpu_offload` is not recommended here because the pipeline is invoked repeatedly in the loop below
height = image.height
width = image.width
ratio = np.sqrt(1024.0 * 1024.0 / (width * height))
# A 3 * 3 upscale corresponds to a multiple of 16 * 3, a 2 * 2 upscale to a multiple of 16 * 2, and so on.
scale_image_factor = 3
base_factor = 16
factor = scale_image_factor * base_factor
W, H = int(width * ratio) // factor * factor, int(height * ratio) // factor * factor
image = image.resize((W, H))
target_width = W // scale_image_factor
target_height = H // scale_image_factor
images = []
crops_coords_list = [
    (0, 0),
    (0, width // 2),
    (height // 2, 0),
    (width // 2, height // 2),
    0,
    0,
    0,
    0,
    0,
]
for i in range(scale_image_factor):
    for j in range(scale_image_factor):
        left = j * target_width
        top = i * target_height
        right = left + target_width
        bottom = top + target_height
        cropped_image = image.crop((left, top, right, bottom))
        cropped_image = cropped_image.resize((W, H))
        images.append(cropped_image)
# set ControlNetUnion input
result_images = []
for sub_img, crops_coords in zip(images, crops_coords_list):
    new_width, new_height = W, H
    out = pipe(
        prompt=[prompt] * 1,
        image=sub_img,
        control_image=[sub_img],
        control_mode=[6],
        width=new_width,
        height=new_height,
        num_inference_steps=30,
        crops_coords_top_left=(W, H),
        target_size=(W, H),
        original_size=(W * 2, H * 2),
    )
    result_images.append(out.images[0])
new_im = Image.new("RGB", (new_width * scale_image_factor, new_height * scale_image_factor))
new_im.paste(result_images[0], (0, 0))
new_im.paste(result_images[1], (new_width, 0))
new_im.paste(result_images[2], (new_width * 2, 0))
new_im.paste(result_images[3], (0, new_height))
new_im.paste(result_images[4], (new_width, new_height))
new_im.paste(result_images[5], (new_width * 2, new_height))
new_im.paste(result_images[6], (0, new_height * 2))
new_im.paste(result_images[7], (new_width, new_height * 2))
new_im.paste(result_images[8], (new_width * 2, new_height * 2))
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_sd_xl_img2img.py#L313</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionXLControlNetUnionInpaintPipeline[[diffusers.StableDiffusionXLControlNetUnionInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetUnionInpaintPipeline</name><anchor>diffusers.StableDiffusionXLControlNetUnionInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_inpaint_sd_xl.py#L164</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel, typing.List[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel], typing.Tuple[diffusers.models.controlnets.controlnet_union.ControlNetUnionModel], diffusers.models.controlnets.multicontrolnet_union.MultiControlNetUnionModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image inpainting using Stable Diffusion XL with ControlNet Union guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
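
As a rough illustration of these loaders (a minimal sketch: the LoRA repository id is a placeholder, and the IP-Adapter repository, subfolder, and weight names are assumptions based on common SDXL checkpoints rather than requirements of this pipeline), additional weights can be attached to an instantiated pipeline like this:

```py
import torch
from diffusers import ControlNetUnionModel, StableDiffusionXLControlNetUnionInpaintPipeline

controlnet = ControlNetUnionModel.from_pretrained(
    "brad-twinkl/controlnet-union-sdxl-1.0-promax", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetUnionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)

# Attach LoRA weights (placeholder repository id).
pipe.load_lora_weights("your-username/your-sdxl-lora")

# Attach an IP-Adapter; the repository/subfolder/weight names below are illustrative.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.6)
```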





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetUnionInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_inpaint_sd_xl.py#L1157</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.9999"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_mode", "val": ": typing.Union[int, typing.List[int], typing.List[typing.List[int]], NoneType] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, 
int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
  be masked out with `mask_image` and repainted according to `prompt`.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **control_image** (`PipelineImageInput` or `List[PipelineImageInput]`, *optional*) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all masked areas, and then expand that area based
  on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area before
  resizing to the original image size for inpainting. This is useful when the masked area is small while
  the image is large and contains information irrelevant for inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 0.9999) --
  Conceptually, indicates how much to transform the masked portion of the reference `image`. Must be
  between 0 and 1. `image` will be used as a starting point, adding more noise to it the larger the
  `strength`. The number of denoising steps depends on the amount of noise initially added. When
  `strength` is 1, added noise will be maximum and the denoising process will run for the full number of
  iterations specified in `num_inference_steps`. A value of 1, therefore, essentially ignores the masked
  portion of the reference `image`. Note that when `denoising_start` is specified, the value of
  `strength` will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_mode** (`int` or `List[int]` or `List[List[int]]`, *optional*) --
  The control condition types for the ControlNet. See the ControlNet's model card for information on the
  available control modes. If multiple ControlNets are specified in `init`, control_mode should be a list
  where each ControlNet should have its corresponding control mode list. Should reflect the order of
  conditions in control_image.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetUnionInpaintPipeline.__call__.example">

Examples:
```py
from diffusers import StableDiffusionXLControlNetUnionInpaintPipeline, ControlNetUnionModel, AutoencoderKL
from diffusers.utils import load_image
import torch
import numpy as np
from PIL import Image

prompt = "A cat"
# download an image
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/in_paint/overture-creations-5sI6fQgYIuo.png"
).resize((1024, 1024))
mask = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/in_paint/overture-creations-5sI6fQgYIuo_mask.png"
).resize((1024, 1024))
# initialize the models and pipeline
controlnet = ControlNetUnionModel.from_pretrained(
    "brad-twinkl/controlnet-union-sdxl-1.0-promax", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetUnionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()
controlnet_img = image.copy()
controlnet_img_np = np.array(controlnet_img)
mask_np = np.array(mask)
controlnet_img_np[mask_np > 0] = 0
controlnet_img = Image.fromarray(controlnet_img_np)
# generate image
image = pipe(prompt, image=image, mask_image=mask, control_image=[controlnet_img], control_mode=[7]).images[0]
image.save("inpaint.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetUnionInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_union_inpaint_sd_xl.py#L284</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
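
A minimal sketch of calling `encode_prompt` directly, assuming `pipe` is an already instantiated `StableDiffusionXLControlNetUnionInpaintPipeline` on a CUDA device; the prompt, negative prompt, and settings below are only illustrative. The returned embeddings can be passed back to the pipeline call via `prompt_embeds`, `negative_prompt_embeds`, `pooled_prompt_embeds`, and `negative_pooled_prompt_embeds`:

```py
# Minimal sketch; assumes `pipe` is a loaded StableDiffusionXLControlNetUnionInpaintPipeline.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="A cat",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)
```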




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_union.md" />

### UniDiffuser
https://huggingface.co/docs/diffusers/main/api/pipelines/unidiffuser.md

# UniDiffuser

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

The UniDiffuser model was proposed in [One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale](https://huggingface.co/papers/2303.06555) by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu.

The abstract from the paper is:

*This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is -- learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model -- perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation).*

You can find the original codebase at [thu-ml/unidiffuser](https://github.com/thu-ml/unidiffuser) and additional checkpoints at [thu-ml](https://huggingface.co/thu-ml).

> [!WARNING]
> There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become `NaNs`. This issue can be mitigated by switching to PyTorch 2.X.

This pipeline was contributed by [dg845](https://github.com/dg845). ❤️

## Usage Examples

Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks:

### Unconditional Image and Text Generation

Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a [UniDiffuserPipeline](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.UniDiffuserPipeline) will produce an (image, text) pair:

```python
import torch

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Unconditional image and text generation. The generation task is automatically inferred.
sample = pipe(num_inference_steps=20, guidance_scale=8.0)
image = sample.images[0]
text = sample.text[0]
image.save("unidiffuser_joint_sample_image.png")
print(text)
```

This is also called "joint" generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution.

Note that the generation task is inferred from the inputs used when calling the pipeline.
It is also possible to specify the unconditional generation task ("mode") manually with [UniDiffuserPipeline.set_joint_mode()](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.UniDiffuserPipeline.set_joint_mode):

```python
# Equivalent to the above.
pipe.set_joint_mode()
sample = pipe(num_inference_steps=20, guidance_scale=8.0)
```

When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode.
You can reset the mode with [UniDiffuserPipeline.reset_mode()](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.UniDiffuserPipeline.reset_mode), after which the pipeline will once again infer the mode.
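
For example, a minimal sketch reusing the `pipe` created above:

```python
# Force joint (image, text) generation for every subsequent call.
pipe.set_joint_mode()
sample = pipe(num_inference_steps=20, guidance_scale=8.0)

# Remove the manually set mode; the next call infers the task from its inputs again.
pipe.reset_mode()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
```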

You can also generate only an image or only text (which the UniDiffuser paper calls "marginal" generation since we sample from the marginal distribution of images and text, respectively):

```python
# Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance
# Image-only generation
pipe.set_image_mode()
sample_image = pipe(num_inference_steps=20).images[0]
# Text-only generation
pipe.set_text_mode()
sample_text = pipe(num_inference_steps=20).text[0]
```

### Text-to-Image Generation

UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image.
Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation):

```python
import torch

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Text-to-image generation
prompt = "an elephant under the sea"

sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image
```

The `text2img` mode requires that either an input `prompt` or `prompt_embeds` be supplied. You can set the `text2img` mode manually with [UniDiffuserPipeline.set_text_to_image_mode()](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.UniDiffuserPipeline.set_text_to_image_mode).
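
For example, a short sketch continuing from the snippet above:

```python
# Manually select text-conditioned image generation, then call the pipeline with a prompt.
pipe.set_text_to_image_mode()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
```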

### Image-to-Text Generation

Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation):

```python
import torch

from diffusers import UniDiffuserPipeline
from diffusers.utils import load_image

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Image-to-text generation
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
init_image = load_image(image_url).resize((512, 512))

sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)
```

The `img2text` mode requires that an input `image` be supplied. You can set the `img2text` mode manually with [UniDiffuserPipeline.set_image_to_text_mode()](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.UniDiffuserPipeline.set_image_to_text_mode).

### Image Variation

The UniDiffuser authors suggest performing image variation through a "round-trip" generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation.
This produces a new image which is semantically similar to the input image:

```python
import torch

from diffusers import UniDiffuserPipeline
from diffusers.utils import load_image

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Image variation can be performed with an image-to-text generation followed by a text-to-image generation:
# 1. Image-to-text generation
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
init_image = load_image(image_url).resize((512, 512))

sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)

# 2. Text-to-image generation
sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0)
final_image = sample.images[0]
final_image.save("unidiffuser_image_variation_sample.png")
```

### Text Variation

Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by an image-to-text generation:

```python
import torch

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Text variation can be performed with a text-to-image generation followed by an image-to-text generation:
# 1. Text-to-image generation
prompt = "an elephant under the sea"

sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image.save("unidiffuser_text2img_sample_image.png")

# 2. Image-to-text generation
sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0)
final_prompt = sample.text[0]
print(final_prompt)
```

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## UniDiffuserPipeline[[diffusers.UniDiffuserPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UniDiffuserPipeline</name><anchor>diffusers.UniDiffuserPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L65</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "clip_image_processor", "val": ": CLIPImageProcessor"}, {"name": "clip_tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_decoder", "val": ": UniDiffuserTextDecoder"}, {"name": "text_tokenizer", "val": ": GPT2Tokenizer"}, {"name": "unet", "val": ": UniDiffuserModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This
  is part of the UniDiffuser image representation along with the CLIP vision encoding.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **image_encoder** (`CLIPVisionModel`) --
  A [CLIPVisionModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModel) to encode images as part of its image representation along with the VAE
  latent representation.
- **image_processor** (`CLIPImageProcessor`) --
  [CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor) to preprocess an image before CLIP encoding it with `image_encoder`.
- **clip_tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize the prompt before encoding it with `text_encoder`.
- **text_decoder** (`UniDiffuserTextDecoder`) --
  Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser
  embedding.
- **text_tokenizer** (`GPT2Tokenizer`) --
  A [GPT2Tokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2#transformers.GPT2Tokenizer) to decode text for text generation; used along with the `text_decoder`.
- **unet** (`UniDiffuserModel`) --
  A [U-ViT](https://github.com/baofff/U-ViT) model with UNet-style skip connections between transformer
  layers to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image and/or text latents. The
  original UniDiffuser paper uses the [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned
image generation, image-conditioned text generation, and joint image-text generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.UniDiffuserPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L1119</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "data_type", "val": ": typing.Optional[int] = 1"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 8.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "num_prompts_per_image", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "vae_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clip_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
  Required for text-conditioned image generation (`text2img`) mode.
- **image** (`torch.Tensor` or `PIL.Image.Image`, *optional*) --
  `Image` or tensor representing an image batch. Required for image-conditioned text generation
  (`img2text`) mode.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **data_type** (`int`, *optional*, defaults to 1) --
  The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type
  embedding; this is added for compatibility with the
  [UniDiffuser-v1](https://huggingface.co/thu-ml/unidiffuser-v1) checkpoint.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 8.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). Used in
  text-conditioned image generation (`text2img`) mode.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt. Used in `text2img` (text-conditioned image generation) and
  `img` mode. If the mode is joint and both `num_images_per_prompt` and `num_prompts_per_image` are
  supplied, `min(num_images_per_prompt, num_prompts_per_image)` samples are generated.
- **num_prompts_per_image** (`int`, *optional*, defaults to 1) --
  The number of prompts to generate per image. Used in `img2text` (image-conditioned text generation) and
  `text` mode. If the mode is joint and both `num_images_per_prompt` and `num_prompts_per_image` are
  supplied, `min(num_images_per_prompt, num_prompts_per_image)` samples are generated.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint
  image-text generation. Can be used to tweak the same generation with different prompts. If not
  provided, a latents tensor is generated by sampling using the supplied random `generator`. This is
  assumed to be a full set of VAE, CLIP, and text latents and, if supplied, overrides the values of
  `prompt_latents`, `vae_latents`, and `clip_latents`.
- **prompt_latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **vae_latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **clip_latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument. Used in text-conditioned
  image generation (`text2img`) mode.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. Used
  in text-conditioned image generation (`text2img`) mode.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImageTextPipelineOutput](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.ImageTextPipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImageTextPipelineOutput](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.ImageTextPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImageTextPipelineOutput](/docs/diffusers/main/en/api/pipelines/unidiffuser#diffusers.ImageTextPipelineOutput) is returned, otherwise a
`tuple` is returned where the first element is a list with the generated images and the second element
is a list of generated texts.</retdesc></docstring>

The call function to the pipeline for generation.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.UniDiffuserPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L244</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.UniDiffuserPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L273</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.UniDiffuserPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L230</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.UniDiffuserPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L258</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.UniDiffuserPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L421</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reset_mode</name><anchor>diffusers.UniDiffuserPipeline.reset_mode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L307</source><parameters>[]</parameters></docstring>
Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_image_mode</name><anchor>diffusers.UniDiffuserPipeline.set_image_mode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L291</source><parameters>[]</parameters></docstring>
Manually set the generation mode to unconditional ("marginal") image generation.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_image_to_text_mode</name><anchor>diffusers.UniDiffuserPipeline.set_image_to_text_mode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L299</source><parameters>[]</parameters></docstring>
Manually set the generation mode to image-conditioned text generation.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_joint_mode</name><anchor>diffusers.UniDiffuserPipeline.set_joint_mode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L303</source><parameters>[]</parameters></docstring>
Manually set the generation mode to unconditional joint image-text generation.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_text_mode</name><anchor>diffusers.UniDiffuserPipeline.set_text_mode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L287</source><parameters>[]</parameters></docstring>
Manually set the generation mode to unconditional ("marginal") text generation.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_text_to_image_mode</name><anchor>diffusers.UniDiffuserPipeline.set_text_to_image_mode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L295</source><parameters>[]</parameters></docstring>
Manually set the generation mode to text-conditioned image generation.

</div></div>

## ImageTextPipelineOutput[[diffusers.ImageTextPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImageTextPipelineOutput</name><anchor>diffusers.ImageTextPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py#L48</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray, NoneType]"}, {"name": "text", "val": ": typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **text** (`List[str]` or `List[List[str]]`) --
  List of generated text strings of length `batch_size` or a list of lists of strings whose outer list has
  length `batch_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for joint image-text pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/unidiffuser.md" />

### AudioLDM 2
https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm2.md

# AudioLDM 2

AudioLDM 2 was proposed in [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://huggingface.co/papers/2308.05734) by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music.

Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM 2 is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of [CLAP](https://huggingface.co/docs/transformers/main/en/model_doc/clap) and the encoder of [Flan-T5](https://huggingface.co/docs/transformers/main/en/model_doc/flan-t5). These text embeddings are then projected to a shared embedding space by an [AudioLDM2ProjectionModel](https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm2#diffusers.AudioLDM2ProjectionModel). A [GPT2](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2) _language model (LM)_ is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The [UNet](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2UNet2DConditionModel) of AudioLDM 2 is unique in the sense that it takes **two** cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs.

The abstract of the paper is the following:

*Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called "language of audio" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at [this https URL](https://audioldm.github.io/audioldm2).*

This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi) and [Nguyễn Công Tú Anh](https://github.com/tuanh123789). The original codebase can be
found at [haoheliu/audioldm2](https://github.com/haoheliu/audioldm2).

## Tips

### Choosing a checkpoint

AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation, while the third is trained exclusively on text-to-music generation. Two additional text-to-speech checkpoints are also listed in the table below.

All checkpoints share the same model size for the text encoders and VAE, and differ in the size and depth of the UNet.
See the table below for details on each checkpoint:

| Checkpoint                                                                  | Task           | UNet Model Size | Total Model Size | Training Data / h |
|-----------------------------------------------------------------------------|----------------|-----------------|------------------|-------------------|
| [audioldm2](https://huggingface.co/cvssp/audioldm2)                         | Text-to-audio  | 350M            | 1.1B             | 1150k             |
| [audioldm2-large](https://huggingface.co/cvssp/audioldm2-large)             | Text-to-audio  | 750M            | 1.5B             | 1150k             |
| [audioldm2-music](https://huggingface.co/cvssp/audioldm2-music)             | Text-to-music  | 350M            | 1.1B             | 665k              |
| [audioldm2-gigaspeech](https://huggingface.co/anhnct/audioldm2_gigaspeech)  | Text-to-speech | 350M            | 1.1B             | 10k               |
| [audioldm2-ljspeech](https://huggingface.co/anhnct/audioldm2_ljspeech)      | Text-to-speech | 350M            | 1.1B             |                   |
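
Any of these checkpoints can be loaded by passing its repository id to [AudioLDM2Pipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline). The snippet below is a minimal sketch; swap the repository id for the variant that matches your task:

```py
import torch
from diffusers import AudioLDM2Pipeline

# e.g. "cvssp/audioldm2-large" for the larger text-to-audio model, or "cvssp/audioldm2-music" for music
repo_id = "cvssp/audioldm2"
pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```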

### Constructing a prompt

* Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g. "water stream in a forest" instead of "stream").
* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with.
* Using a **negative prompt** can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of "Low quality."

### Controlling inference

* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.

### Evaluating generated waveforms

* The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation.
* Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and the prompt text, and the audio outputs are ranked from best to worst accordingly.

The following example demonstrates how to generate good music and speech using the aforementioned tips: [example](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.example).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## AudioLDM2Pipeline[[diffusers.AudioLDM2Pipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioLDM2Pipeline</name><anchor>diffusers.AudioLDM2Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L150</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": ClapModel"}, {"name": "text_encoder_2", "val": ": typing.Union[transformers.models.t5.modeling_t5.T5EncoderModel, transformers.models.vits.modeling_vits.VitsModel]"}, {"name": "projection_model", "val": ": AudioLDM2ProjectionModel"}, {"name": "language_model", "val": ": GPT2LMHeadModel"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.roberta.tokenization_roberta.RobertaTokenizer, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast]"}, {"name": "tokenizer_2", "val": ": typing.Union[transformers.models.t5.tokenization_t5.T5Tokenizer, transformers.models.t5.tokenization_t5_fast.T5TokenizerFast, transformers.models.vits.tokenization_vits.VitsTokenizer]"}, {"name": "feature_extractor", "val": ": ClapFeatureExtractor"}, {"name": "unet", "val": ": AudioLDM2UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "vocoder", "val": ": SpeechT5HifiGan"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([ClapModel](https://huggingface.co/docs/transformers/main/en/model_doc/clap#transformers.ClapModel)) --
  First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model
  [CLAP](https://huggingface.co/docs/transformers/model_doc/clap#transformers.CLAPTextModelWithProjection),
  specifically the [laion/clap-htsat-unfused](https://huggingface.co/laion/clap-htsat-unfused) variant. The
  text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to
  rank generated waveforms against the text prompt by computing similarity scores.
- **text_encoder_2** ([`~transformers.T5EncoderModel`, `~transformers.VitsModel`]) --
  Second frozen text-encoder. AudioLDM2 uses the encoder of
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) variant. For text-to-speech, AudioLDM2
  instead uses the encoder of
  [Vits](https://huggingface.co/docs/transformers/model_doc/vits#transformers.VitsModel).
- **projection_model** ([AudioLDM2ProjectionModel](/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2ProjectionModel)) --
  A trained model used to linearly project the hidden-states from the first and second text encoder models
  and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are
  concatenated to give the input to the language model. A learned position embedding is applied to the Vits
  hidden-states.
- **language_model** ([GPT2Model](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2#transformers.GPT2Model)) --
  An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected
  outputs from the two text encoders.
- **tokenizer** ([RobertaTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/roberta#transformers.RobertaTokenizer)) --
  Tokenizer to tokenize text for the first frozen text-encoder.
- **tokenizer_2** ([`~transformers.T5Tokenizer`, `~transformers.VitsTokenizer`]) --
  Tokenizer to tokenize text for the second frozen text-encoder.
- **feature_extractor** ([ClapFeatureExtractor](https://huggingface.co/docs/transformers/main/en/model_doc/clap#transformers.ClapFeatureExtractor)) --
  Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded audio latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded audio latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **vocoder** ([SpeechT5HifiGan](https://huggingface.co/docs/transformers/main/en/model_doc/speecht5#transformers.SpeechT5HifiGan)) --
  Vocoder of class `SpeechT5HifiGan` to convert the mel-spectrogram latents to the final audio waveform.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-audio generation using AudioLDM2.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AudioLDM2Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L861</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "transcription", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "audio_length_in_s", "val": ": typing.Optional[float] = None"}, {"name": "num_inference_steps", "val": ": int = 200"}, {"name": "guidance_scale", "val": ": float = 3.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_waveforms_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "generated_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_generated_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide audio generation. If not defined, you need to pass `prompt_embeds`.
- **transcription** (`str` or `List[str]`, *optional*) --
  The transcript for text-to-speech.
- **audio_length_in_s** (`float`, *optional*, defaults to 10.24) --
  The length of the generated audio sample in seconds.
- **num_inference_steps** (`int`, *optional*, defaults to 200) --
  The number of denoising steps. More denoising steps usually lead to a higher quality audio at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) --
  A higher guidance scale value encourages the model to generate audio that is closely linked to the text
  `prompt` at the expense of lower sound quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in audio generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_waveforms_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of waveforms to generate per prompt. If `num_waveforms_per_prompt > 1`, then automatic
  scoring is performed between the generated outputs and the text prompt. This scoring ranks the
  generated waveforms based on their cosine similarity with the text input in the joint text-audio
  embedding space.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **generated_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs,
  *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input
  argument.
- **negative_generated_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text
  inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be computed from
  `negative_prompt` input argument.
- **attention_mask** (`torch.LongTensor`, *optional*) --
  Pre-computed attention mask to be applied to the `prompt_embeds`. If not provided, attention mask will
  be computed from `prompt` input argument.
- **negative_attention_mask** (`torch.LongTensor`, *optional*) --
  Pre-computed attention mask to be applied to the `negative_prompt_embeds`. If not provided, attention
  mask will be computed from `negative_prompt` input argument.
- **max_new_tokens** (`int`, *optional*, defaults to None) --
  Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will
  be taken from the config of the model.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated audio. Choose between `"np"` to return a NumPy `np.ndarray` or
  `"pt"` to return a PyTorch `torch.Tensor` object. Set to `"latent"` to return the latent diffusion
  model (LDM) output.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated audio.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AudioLDM2Pipeline.__call__.example">

Examples:
```py
>>> import scipy
>>> import torch
>>> from diffusers import AudioLDM2Pipeline

>>> repo_id = "cvssp/audioldm2"
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # define the prompts
>>> prompt = "The sound of a hammer hitting a wooden surface."
>>> negative_prompt = "Low quality."

>>> # set the seed for generator
>>> generator = torch.Generator("cuda").manual_seed(0)

>>> # run the generation
>>> audio = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     num_inference_steps=200,
...     audio_length_in_s=10.0,
...     num_waveforms_per_prompt=3,
...     generator=generator,
... ).audios

>>> # save the best audio sample (index 0) as a .wav file
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0])
```

</ExampleCodeBlock>
<ExampleCodeBlock anchor="diffusers.AudioLDM2Pipeline.__call__.example-2">

```py
>>> # Using AudioLDM2 for text-to-speech
>>> import scipy
>>> import torch
>>> from diffusers import AudioLDM2Pipeline

>>> repo_id = "anhnct/audioldm2_gigaspeech"
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # define the prompts
>>> prompt = "A female reporter is speaking"
>>> transcript = "wish you have a good day"

>>> # set the seed for generator
>>> generator = torch.Generator("cuda").manual_seed(0)

>>> # run the generation
>>> audio = pipe(
...     prompt,
...     transcription=transcript,
...     num_inference_steps=200,
...     audio_length_in_s=10.0,
...     num_waveforms_per_prompt=2,
...     generator=generator,
...     max_new_tokens=512,  # must set max_new_tokens equal to 512 for TTS
... ).audios

>>> # save the best audio sample (index 0) as a .wav file
>>> scipy.io.wavfile.write("tts.wav", rate=16000, data=audio[0])
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.AudioLDM2Pipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L241</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.AudioLDM2Pipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L254</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = 'cuda'"}]</parameters></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
`enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
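
A minimal usage sketch (the checkpoint id and prompt are only illustrative):

```py
import torch
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
# sub-models stay on the CPU and are moved to the GPU only when needed,
# so there is no need to call pipe.to("cuda") first
pipe.enable_model_cpu_offload()

audio = pipe("The sound of a hammer hitting a wooden surface.", num_inference_steps=200).audios[0]
```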


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.AudioLDM2Pipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L227</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor into slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
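
A minimal sketch of toggling sliced decoding (the checkpoint id and prompt are only illustrative):

```py
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2")
pipe.enable_vae_slicing()    # decode latents slice-by-slice to reduce peak memory
audio = pipe("Gentle rain falling on leaves").audios[0]
pipe.disable_vae_slicing()   # restore decoding in a single step
```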


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AudioLDM2Pipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L356</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_waveforms_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "transcription", "val": " = None"}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "generated_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_generated_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "max_new_tokens", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **transcription** (`str` or `List[str]`) --
  transcription of text to speech
- **device** (`torch.device`) --
  torch device
- **num_waveforms_per_prompt** (`int`) --
  number of waveforms that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the audio generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, *e.g.*
  prompt weighting. If not provided, text embeddings will be computed from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs,
  *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be computed from
  `negative_prompt` input argument.
- **generated_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs,
  *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input
  argument.
- **negative_generated_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text
  inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be computed from
  `negative_prompt` input argument.
- **attention_mask** (`torch.LongTensor`, *optional*) --
  Pre-computed attention mask to be applied to the `prompt_embeds`. If not provided, attention mask will
  be computed from `prompt` input argument.
- **negative_attention_mask** (`torch.LongTensor`, *optional*) --
  Pre-computed attention mask to be applied to the `negative_prompt_embeds`. If not provided, attention
  mask will be computed from `negative_prompt` input argument.
- **max_new_tokens** (`int`, *optional*, defaults to None) --
  The number of new tokens to generate with the GPT2 language model.</paramsdesc><paramgroups>0</paramgroups><rettype>prompt_embeds (`torch.Tensor`)</rettype><retdesc>Text embeddings from the Flan T5 model.
attention_mask (`torch.LongTensor`):
Attention mask to be applied to the `prompt_embeds`.
generated_prompt_embeds (`torch.Tensor`):
Text embeddings generated from the GPT2 language model.</retdesc></docstring>

Encodes the prompt into text encoder hidden states.







<ExampleCodeBlock anchor="diffusers.AudioLDM2Pipeline.encode_prompt.example">

Example:

```python
>>> import scipy
>>> import torch
>>> from diffusers import AudioLDM2Pipeline

>>> repo_id = "cvssp/audioldm2"
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # Get text embedding vectors
>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt(
...     prompt="Techno music with a strong, upbeat tempo and high melodic riffs",
...     device="cuda",
...     do_classifier_free_guidance=True,
... )

>>> # Pass text embeddings to pipeline for text-conditional audio generation
>>> audio = pipe(
...     prompt_embeds=prompt_embeds,
...     attention_mask=attention_mask,
...     generated_prompt_embeds=generated_prompt_embeds,
...     num_inference_steps=200,
...     audio_length_in_s=10.0,
... ).audios[0]

>>> # save generated audio sample
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
```

</ExampleCodeBlock>

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_language_model</name><anchor>diffusers.AudioLDM2Pipeline.generate_language_model</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py#L304</source><parameters>[{"name": "inputs_embeds", "val": ": Tensor = None"}, {"name": "max_new_tokens", "val": ": int = 8"}, {"name": "**model_kwargs", "val": ""}]</parameters><paramsdesc>- **inputs_embeds** (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`) --
  The sequence used as a prompt for the generation.
- **max_new_tokens** (`int`) --
  Number of new tokens to generate.
- **model_kwargs** (`Dict[str, Any]`, *optional*) --
  Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the `forward`
  function of the model.</paramsdesc><paramgroups>0</paramgroups><rettype>inputs_embeds (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`)</rettype><retdesc>The sequence of generated hidden-states.</retdesc></docstring>


Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs.








</div></div>

## AudioLDM2ProjectionModel[[diffusers.AudioLDM2ProjectionModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioLDM2ProjectionModel</name><anchor>diffusers.AudioLDM2ProjectionModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/modeling_audioldm2.py#L81</source><parameters>[{"name": "text_encoder_dim", "val": ""}, {"name": "text_encoder_1_dim", "val": ""}, {"name": "langauge_model_dim", "val": ""}, {"name": "use_learned_position_embedding", "val": " = None"}, {"name": "max_seq_length", "val": " = None"}]</parameters><paramsdesc>- **text_encoder_dim** (`int`) --
  Dimensionality of the text embeddings from the first text encoder (CLAP).
- **text_encoder_1_dim** (`int`) --
  Dimensionality of the text embeddings from the second text encoder (T5 or VITS).
- **langauge_model_dim** (`int`) --
  Dimensionality of the text embeddings from the language model (GPT2).</paramsdesc><paramgroups>0</paramgroups></docstring>

A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned
embedding vectors at the start and end of each text embedding sequence. Variables with a `_1` suffix refer to the
second text encoder; otherwise, they refer to the first.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AudioLDM2ProjectionModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/modeling_audioldm2.py#L125</source><parameters>[{"name": "hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "hidden_states_1", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "attention_mask_1", "val": ": typing.Optional[torch.LongTensor] = None"}]</parameters></docstring>


</div></div>

## AudioLDM2UNet2DConditionModel[[diffusers.AudioLDM2UNet2DConditionModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioLDM2UNet2DConditionModel</name><anchor>diffusers.AudioLDM2UNet2DConditionModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/modeling_audioldm2.py#L166</source><parameters>[{"name": "sample_size", "val": ": typing.Optional[int] = None"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": int = 4"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "down_block_types", "val": ": typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D')"}, {"name": "mid_block_type", "val": ": typing.Optional[str] = 'UNetMidBlock2DCrossAttn'"}, {"name": "up_block_types", "val": ": typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D')"}, {"name": "only_cross_attention", "val": ": typing.Union[bool, typing.Tuple[bool]] = False"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": typing.Union[int, typing.Tuple[int]] = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": typing.Union[int, typing.Tuple[int]] = 1280"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int]] = 1"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int]] = 8"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = None"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "class_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "num_class_embeds", "val": ": typing.Optional[int] = None"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "time_embedding_type", "val": ": str = 'positional'"}, {"name": "time_embedding_dim", "val": ": typing.Optional[int] = None"}, {"name": "time_embedding_act_fn", "val": ": typing.Optional[str] = None"}, {"name": "timestep_post_act", "val": ": typing.Optional[str] = None"}, {"name": "time_cond_proj_dim", "val": ": typing.Optional[int] = None"}, {"name": "conv_in_kernel", "val": ": int = 3"}, {"name": "conv_out_kernel", "val": ": int = 3"}, {"name": "projection_class_embeddings_input_dim", "val": ": typing.Optional[int] = None"}, {"name": "class_embeddings_concat", "val": ": bool = False"}]</parameters><paramsdesc>- **sample_size** (`int` or `Tuple[int, int]`, *optional*, defaults to `None`) --
  Height and width of input/output sample.
- **in_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the input sample.
- **out_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the output.
- **flip_sin_to_cos** (`bool`, *optional*, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, *optional*, defaults to 0) -- The frequency shift to apply to the time embedding.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`) --
  The tuple of downsample blocks to use.
- **mid_block_type** (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`) --
  Block type for the middle of the UNet; it can only be `UNetMidBlock2DCrossAttn` for AudioLDM2.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`) --
  The tuple of upsample blocks to use.
- **only_cross_attention** (`bool` or `Tuple[bool]`, *optional*, default to `False`) --
  Whether to include self-attention in the basic transformer blocks, see
  `BasicTransformerBlock`.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`) --
  The tuple of output channels for each block.
- **layers_per_block** (`int`, *optional*, defaults to 2) -- The number of layers per block.
- **downsample_padding** (`int`, *optional*, defaults to 1) -- The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, *optional*, defaults to 1.0) -- The scale factor to use for the mid block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) -- The number of groups to use for the normalization.
  If `None`, normalization and activation layers are skipped in post-processing.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) -- The epsilon to use for the normalization.
- **cross_attention_dim** (`int` or `Tuple[int]`, *optional*, defaults to 1280) --
  The dimension of the cross attention features.
- **transformer_layers_per_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) --
  The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for
  `~models.unet_2d_blocks.CrossAttnDownBlock2D`, `~models.unet_2d_blocks.CrossAttnUpBlock2D`,
  `~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`.
- **attention_head_dim** (`int`, *optional*, defaults to 8) -- The dimension of the attention heads.
- **num_attention_heads** (`int`, *optional*) --
  The number of attention heads. If not defined, defaults to `attention_head_dim`.
- **resnet_time_scale_shift** (`str`, *optional*, defaults to `"default"`) -- Time scale shift config
  for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **class_embed_type** (`str`, *optional*, defaults to `None`) --
  The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`,
  `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- **num_class_embeds** (`int`, *optional*, defaults to `None`) --
  Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing
  class conditioning with `class_embed_type` equal to `None`.
- **time_embedding_type** (`str`, *optional*, defaults to `positional`) --
  The type of position embedding to use for timesteps. Choose from `positional` or `fourier`.
- **time_embedding_dim** (`int`, *optional*, defaults to `None`) --
  An optional override for the dimension of the projected time embedding.
- **time_embedding_act_fn** (`str`, *optional*, defaults to `None`) --
  Optional activation function to use only once on the time embeddings before they are passed to the rest of
  the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`.
- **timestep_post_act** (`str`, *optional*, defaults to `None`) --
  The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`.
- **time_cond_proj_dim** (`int`, *optional*, defaults to `None`) --
  The dimension of `cond_proj` layer in the timestep embedding.
- **conv_in_kernel** (`int`, *optional*, defaults to `3`) -- The kernel size of the `conv_in` layer.
- **conv_out_kernel** (`int`, *optional*, defaults to `3`) -- The kernel size of the `conv_out` layer.
- **projection_class_embeddings_input_dim** (`int`, *optional*) -- The dimension of the `class_labels` input when
  `class_embed_type="projection"`. Required when `class_embed_type="projection"`.
- **class_embeddings_concat** (`bool`, *optional*, defaults to `False`) -- Whether to concatenate the time
  embeddings with the class embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
shaped output. Compared to the vanilla [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel), this variant optionally includes an additional
self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up
to two cross-attention embeddings, `encoder_hidden_states` and `encoder_hidden_states_1`.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AudioLDM2UNet2DConditionModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audioldm2/modeling_audioldm2.py#L675</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "class_labels", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "encoder_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "encoder_hidden_states_1", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_attention_mask_1", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor with the following shape `(batch, channel, height, width)`.
- **timestep** (`torch.Tensor` or `float` or `int`) -- The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
- **encoder_attention_mask** (`torch.Tensor`) --
  A cross-attention mask of shape `(batch, sequence_length)` is applied to `encoder_hidden_states`. If
  `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias,
  which adds large negative values to the attention scores corresponding to "discard" tokens.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [UNet2DConditionOutput](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput) instead of a plain
  tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttnProcessor`.
- **encoder_hidden_states_1** (`torch.Tensor`, *optional*) --
  A second set of encoder hidden states with shape `(batch, sequence_length_2, feature_dim_2)`. Can be
  used to condition the model on a different set of embeddings to `encoder_hidden_states`.
- **encoder_attention_mask_1** (`torch.Tensor`, *optional*) --
  A cross-attention mask of shape `(batch, sequence_length_2)` is applied to `encoder_hidden_states_1`.
  If `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias,
  which adds large negative values to the attention scores corresponding to "discard" tokens.</paramsdesc><paramgroups>0</paramgroups><rettype>[UNet2DConditionOutput](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput) or `tuple`</rettype><retdesc>If `return_dict` is True, an [UNet2DConditionOutput](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput) is returned,
otherwise a `tuple` is returned where the first element is the sample tensor.</retdesc></docstring>

The [AudioLDM2UNet2DConditionModel](/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2UNet2DConditionModel) forward method.








</div></div>

## AudioPipelineOutput[[diffusers.AudioPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AudioPipelineOutput</name><anchor>diffusers.AudioPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L132</source><parameters>[{"name": "audios", "val": ": ndarray"}]</parameters><paramsdesc>- **audios** (`np.ndarray`) --
  Denoised audio samples as a NumPy array of shape `(batch_size, num_channels, sample_rate)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for audio pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/audioldm2.md" />

### Cogview3
https://huggingface.co/docs/diffusers/main/api/pipelines/cogview3.md


# CogView3Plus

[CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion](https://huggingface.co/papers/2403.05121) from Tsinghua University & ZhipuAI, by Wendi Zheng, Jiayan Teng, Zhuoyi Yang, Weihan Wang, Jidong Chen, Xiaotao Gu, Yuxiao Dong, Ming Ding, Jie Tang.

The abstract from the paper is:

*Recent advancements in text-to-image generative systems have been largely driven by diffusion models. However, single-stage text-to-image diffusion models still face challenges, in terms of computational efficiency and the refinement of image details. To tackle the issue, we propose CogView3, an innovative cascaded framework that enhances the performance of text-to-image diffusion. CogView3 is the first model implementing relay diffusion in the realm of text-to-image generation, executing the task by first creating low-resolution images and subsequently applying relay-based super-resolution. This methodology not only results in competitive text-to-image outputs but also greatly reduces both training and inference costs. Our experimental results demonstrate that CogView3 outperforms SDXL, the current state-of-the-art open-source text-to-image diffusion model, by 77.0% in human evaluations, all while requiring only about 1/2 of the inference time. The distilled variant of CogView3 achieves comparable performance while only utilizing 1/10 of the inference time by SDXL.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

This pipeline was contributed by [zRzRzRzRzRzRzR](https://github.com/zRzRzRzRzRzRzR). The original codebase can be found [here](https://huggingface.co/THUDM). The original weights can be found under [hf.co/THUDM](https://huggingface.co/THUDM).

## CogView3PlusPipeline[[diffusers.CogView3PlusPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogView3PlusPipeline</name><anchor>diffusers.CogView3PlusPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview3/pipeline_cogview3plus.py#L118</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": CogView3PlusTransformer2DModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim_cogvideox.CogVideoXDDIMScheduler, diffusers.schedulers.scheduling_dpm_cogvideox.CogVideoXDPMScheduler]"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. CogView3Plus uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([CogView3PlusTransformer2DModel](/docs/diffusers/main/en/api/models/cogview3plus_transformer2d#diffusers.CogView3PlusTransformer2DModel)) --
  A text conditioned `CogView3PlusTransformer2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using CogView3Plus.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.CogView3PlusPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview3/pipeline_cogview3plus.py#L407</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 224"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. If not provided, it is set to 1024.
- **width** (`int`, *optional*, defaults to self.transformer.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. If not provided, it is set to 1024.
- **num_inference_steps** (`int`, *optional*, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to `1`) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [CogView3PipelineOutput](/docs/diffusers/main/en/api/pipelines/cogview3#diffusers.pipelines.cogview3.pipeline_output.CogView3PipelineOutput) instead
  of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `224`) --
  Maximum sequence length in encoded prompt. Can be set to other values but may lead to poorer results.</paramsdesc><paramgroups>0</paramgroups><rettype>[CogView3PipelineOutput](/docs/diffusers/main/en/api/pipelines/cogview3#diffusers.pipelines.cogview3.pipeline_output.CogView3PipelineOutput) or `tuple`</rettype><retdesc>[CogView3PipelineOutput](/docs/diffusers/main/en/api/pipelines/cogview3#diffusers.pipelines.cogview3.pipeline_output.CogView3PipelineOutput) if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.CogView3PlusPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import CogView3PlusPipeline

>>> pipe = CogView3PlusPipeline.from_pretrained("THUDM/CogView3-Plus-3B", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>
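
As a rough sketch of the `callback_on_step_end` mechanism described above (the callback name and the printed statistic are illustrative, not part of the pipeline API), a step-end callback receives the pipeline, the step index, the current timestep, and the requested tensors, and returns the (possibly modified) `callback_kwargs` dict:

```python
import torch
from diffusers import CogView3PlusPipeline


def log_latents(pipe, step, timestep, callback_kwargs):
    # `callback_kwargs` contains the tensors requested via `callback_on_step_end_tensor_inputs`.
    latents = callback_kwargs["latents"]
    print(f"step {step:03d} | timestep {timestep} | latents std {latents.std().item():.4f}")
    # Returning the dict (optionally with modified tensors) hands control back to the pipeline.
    return callback_kwargs


pipe = CogView3PlusPipeline.from_pretrained("THUDM/CogView3-Plus-3B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    "A photo of an astronaut riding a horse on mars",
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```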







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.CogView3PlusPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview3/pipeline_cogview3plus.py#L210</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 224"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt to be encoded.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of images that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **max_sequence_length** (`int`, defaults to `224`) --
  Maximum sequence length in encoded prompt. Can be set to other values but may lead to poorer results.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## CogView3PipelineOutput[[diffusers.pipelines.cogview3.pipeline_output.CogView3PipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.cogview3.pipeline_output.CogView3PipelineOutput</name><anchor>diffusers.pipelines.cogview3.pipeline_output.CogView3PipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/cogview3/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for CogView3 pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/cogview3.md" />

### ControlNet-XS with Stable Diffusion XL
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnetxs_sdxl.md

# ControlNet-XS with Stable Diffusion XL

ControlNet-XS was introduced in [ControlNet-XS](https://vislearn.github.io/ControlNet-XS/) by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the [original ControlNet](https://huggingface.co/papers/2302.05543) can be made much smaller and still produce good results.

Like the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

ControlNet-XS generates images with comparable quality to a regular ControlNet, but it is 20-25% faster ([see benchmark](https://github.com/UmerHA/controlnet-xs-benchmark/blob/main/Speed%20Benchmark.ipynb)) and uses ~45% less memory.

Here's the overview from the [project page](https://vislearn.github.io/ControlNet-XS/):

*With increasing computing capabilities, current model architectures appear to follow the trend of simply upscaling all components without validating the necessity for doing so. In this project we investigate the size and architectural design of ControlNet [Zhang et al., 2023] for controlling the image generation process with stable diffusion-based models. We show that a new architecture with as little as 1% of the parameters of the base model achieves state-of-the art results, considerably better than ControlNet in terms of FID score. Hence we call it ControlNet-XS. We provide the code for controlling StableDiffusion-XL [Podell et al., 2023] (Model B, 48M Parameters) and StableDiffusion 2.1 [Rombach et al. 2022] (Model B, 14M Parameters), all under openrail license.*

This model was contributed by [UmerHA](https://twitter.com/UmerHAdil). ❤️

> [!WARNING]
> 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose) and leave us feedback on how we can improve!

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
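
For instance, a minimal sketch of swapping schedulers on this pipeline (the choice of `EulerDiscreteScheduler` here is only illustrative, not a recommendation from this page):

```python
import torch
from diffusers import (
    ControlNetXSAdapter,
    EulerDiscreteScheduler,
    StableDiffusionXLControlNetXSPipeline,
)

controlnet = ControlNetXSAdapter.from_pretrained(
    "UmerHA/Testing-ConrolNetXS-SDXL-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)

# Rebuild a different scheduler from the current scheduler's config to trade quality for speed.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
```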

## StableDiffusionXLControlNetXSPipeline[[diffusers.StableDiffusionXLControlNetXSPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLControlNetXSPipeline</name><anchor>diffusers.StableDiffusionXLControlNetXSPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs_sd_xl.py#L116</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.controlnets.controlnet_xs.UNetControlNetXSModel]"}, {"name": "controlnet", "val": ": ControlNetXSAdapter"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **text_encoder_2** ([CLIPTextModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModelWithProjection)) --
  Second frozen text-encoder
  ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **tokenizer_2** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetControlNetXSModel to denoise the encoded image latents.
- **controlnet** (`ControlNetXSAdapter`) --
  A `ControlNetXSAdapter` to be used in combination with `unet` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings should always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark](https://github.com/ShieldMnt/invisible-watermark/) library to
  watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no
  watermarker is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet-XS guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights (a minimal sketch follows this list)
- [loaders.FromSingleFileMixin.from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
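
A minimal sketch of using one of these inherited loaders; the LoRA repository name below (`your-username/sdxl-style-lora`) is a placeholder, not a real checkpoint:

```python
import torch
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline

controlnet = ControlNetXSAdapter.from_pretrained(
    "UmerHA/Testing-ConrolNetXS-SDXL-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)

# Inherited from StableDiffusionXLLoraLoaderMixin; the repository name is hypothetical.
pipe.load_lora_weights("your-username/sdxl-style-lora")
```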





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLControlNetXSPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs_sd_xl.py#L729</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_guidance_start", "val": ": float = 0.0"}, {"name": "control_guidance_end", "val": ": float = 1.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
  and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, pooled text embeddings are generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
  weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`.
- **control_guidance_start** (`float`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. For most cases, it
  should be the same as the `target_size`. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` is
returned, otherwise a `tuple` is returned containing the output images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLControlNetXSPipeline.__call__.example">

Examples:
```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionXLControlNetXSPipeline, ControlNetXSAdapter, AutoencoderKL
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
>>> negative_prompt = "low quality, bad quality, sketches"

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
... )

>>> # initialize the models and pipeline
>>> controlnet_conditioning_scale = 0.5
>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
>>> controlnet = ControlNetXSAdapter.from_pretrained(
...     "UmerHA/Testing-ConrolNetXS-SDXL-canny", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # get canny image
>>> image = np.array(image)
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)

>>> # generate image
>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLControlNetXSPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_xs/pipeline_controlnet_xs_sd_xl.py#L226</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt to be encoded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`, *optional*) --
  The torch device.
- **num_images_per_prompt** (`int`) --
  The number of images that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`) --
  Whether to use classifier-free guidance or not.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnetxs_sdxl.md" />

### Framepack
https://huggingface.co/docs/diffusers/main/api/pipelines/framepack.md


# Framepack

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[Packing Input Frame Context in Next-Frame Prediction Models for Video Generation](https://huggingface.co/papers/2504.12626) by Lvmin Zhang and Maneesh Agrawala.

*We present a neural network structure, FramePack, to train next-frame (or next-frame-section) prediction models for video generation. The FramePack compresses input frames to make the transformer context length a fixed number regardless of the video length. As a result, we are able to process a large number of frames using video diffusion with computation bottleneck similar to image diffusion. This also makes the training video batch sizes significantly higher (batch sizes become comparable to image diffusion training). We also propose an anti-drifting sampling method that generates frames in inverted temporal order with early-established endpoints to avoid exposure bias (error accumulation over iterations). Finally, we show that existing video diffusion models can be finetuned with FramePack, and their visual quality may be improved because the next-frame prediction supports more balanced diffusion schedulers with less extreme flow shift timesteps.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Available models

| Model name | Description |
|:---|:---|
| [`lllyasviel/FramePackI2V_HY`](https://huggingface.co/lllyasviel/FramePackI2V_HY) | Trained with the "inverted anti-drifting" strategy as described in the paper. Inference requires setting `sampling_type="inverted_anti_drifting"` when running the pipeline. |
| [`lllyasviel/FramePack_F1_I2V_HY_20250503`](https://huggingface.co/lllyasviel/FramePack_F1_I2V_HY_20250503) | Trained with a novel anti-drifting strategy, but inference is performed with the "vanilla" strategy as described in the paper. Inference requires setting `sampling_type="vanilla"` when running the pipeline. |

## Usage

Refer to the pipeline documentation for basic usage examples. The following section contains examples of offloading, different sampling methods, quantization, and more.

### First and last frame to video

The following example shows how to use Framepack with start and end image controls, using the inverted anti-drifting sampling model.

```python
import torch
from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel
from diffusers.utils import export_to_video, load_image
from transformers import SiglipImageProcessor, SiglipVisionModel

transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained(
    "lllyasviel/FramePackI2V_HY", torch_dtype=torch.bfloat16
)
feature_extractor = SiglipImageProcessor.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="feature_extractor"
)
image_encoder = SiglipVisionModel.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16
)
pipe = HunyuanVideoFramepackPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    transformer=transformer,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
)

# Enable memory optimizations
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."
first_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png"
)
last_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png"
)
output = pipe(
    image=first_image,
    last_image=last_image,
    prompt=prompt,
    height=512,
    width=512,
    num_frames=91,
    num_inference_steps=30,
    guidance_scale=9.0,
    generator=torch.Generator().manual_seed(0),
    sampling_type="inverted_anti_drifting",
).frames[0]
export_to_video(output, "output.mp4", fps=30)
```

### Vanilla sampling

The following example shows how to use Framepack with the F1 model, which was trained with vanilla sampling and a new anti-drifting regulation approach.

```python
import torch
from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel
from diffusers.utils import export_to_video, load_image
from transformers import SiglipImageProcessor, SiglipVisionModel

transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained(
    "lllyasviel/FramePack_F1_I2V_HY_20250503", torch_dtype=torch.bfloat16
)
feature_extractor = SiglipImageProcessor.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="feature_extractor"
)
image_encoder = SiglipVisionModel.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16
)
pipe = HunyuanVideoFramepackPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    transformer=transformer,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
)

# Enable memory optimizations
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
)
output = pipe(
    image=image,
    prompt="A penguin dancing in the snow",
    height=832,
    width=480,
    num_frames=91,
    num_inference_steps=30,
    guidance_scale=9.0,
    generator=torch.Generator().manual_seed(0),
    sampling_type="vanilla",
).frames[0]
export_to_video(output, "output.mp4", fps=30)
```

### Group offloading

Group offloading ([apply_group_offloading()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_group_offloading)) provides aggressive memory optimizations for offloading internal parts of any model to the CPU, with possibly no additional overhead to generation time. If you have very low VRAM available, this approach may be suitable for you depending on the amount of CPU RAM available.

```python
import torch
from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import SiglipImageProcessor, SiglipVisionModel

transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained(
    "lllyasviel/FramePack_F1_I2V_HY_20250503", torch_dtype=torch.bfloat16
)
feature_extractor = SiglipImageProcessor.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="feature_extractor"
)
image_encoder = SiglipVisionModel.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16
)
pipe = HunyuanVideoFramepackPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    transformer=transformer,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
)

# Enable group offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
list(map(
    lambda x: apply_group_offloading(x, onload_device, offload_device, offload_type="leaf_level", use_stream=True, low_cpu_mem_usage=True),
    [pipe.text_encoder, pipe.text_encoder_2, pipe.transformer]
))
pipe.image_encoder.to(onload_device)
pipe.vae.to(onload_device)
pipe.vae.enable_tiling()

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
)
output = pipe(
    image=image,
    prompt="A penguin dancing in the snow",
    height=832,
    width=480,
    num_frames=91,
    num_inference_steps=30,
    guidance_scale=9.0,
    generator=torch.Generator().manual_seed(0),
    sampling_type="vanilla",
).frames[0]
print(f"Max memory: {torch.cuda.max_memory_allocated() / 1024**3:.3f} GB")
export_to_video(output, "output.mp4", fps=30)
```
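
### Quantization

The introduction above also mentions quantization. A minimal sketch with the bitsandbytes backend, assuming a recent Diffusers release with bitsandbytes support (`pip install bitsandbytes`) and that 4-bit NF4 weights are an acceptable quality trade-off for the transformer:

```python
import torch
from diffusers import BitsAndBytesConfig, HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel
from diffusers.utils import export_to_video, load_image
from transformers import SiglipImageProcessor, SiglipVisionModel

# Quantize only the transformer (the largest component) to 4-bit NF4 weights.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16
)
transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained(
    "lllyasviel/FramePack_F1_I2V_HY_20250503",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
feature_extractor = SiglipImageProcessor.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="feature_extractor"
)
image_encoder = SiglipVisionModel.from_pretrained(
    "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16
)
pipe = HunyuanVideoFramepackPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    transformer=transformer,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
)

# Enable memory optimizations
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
)
output = pipe(
    image=image,
    prompt="A penguin dancing in the snow",
    height=832,
    width=480,
    num_frames=91,
    num_inference_steps=30,
    guidance_scale=9.0,
    generator=torch.Generator().manual_seed(0),
    sampling_type="vanilla",
).frames[0]
export_to_video(output, "output.mp4", fps=30)
```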

## HunyuanVideoFramepackPipeline[[diffusers.HunyuanVideoFramepackPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanVideoFramepackPipeline</name><anchor>diffusers.HunyuanVideoFramepackPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py#L243</source><parameters>[{"name": "text_encoder", "val": ": LlamaModel"}, {"name": "tokenizer", "val": ": LlamaTokenizerFast"}, {"name": "transformer", "val": ": HunyuanVideoFramepackTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLHunyuanVideo"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "text_encoder_2", "val": ": CLIPTextModel"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "image_encoder", "val": ": SiglipVisionModel"}, {"name": "feature_extractor", "val": ": SiglipImageProcessor"}]</parameters><paramsdesc>- **text_encoder** (`LlamaModel`) --
  [Llava Llama3-8B](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers).
- **tokenizer** (`LlamaTokenizer`) --
  Tokenizer from [Llava Llama3-8B](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers).
- **transformer** ([HunyuanVideoTransformer3DModel](/docs/diffusers/main/en/api/models/hunyuan_video_transformer_3d#diffusers.HunyuanVideoTransformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLHunyuanVideo](/docs/diffusers/main/en/api/models/autoencoder_kl_hunyuan_video#diffusers.AutoencoderKLHunyuanVideo)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder_2** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **tokenizer_2** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-video generation using HunyuanVideo Framepack.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.HunyuanVideoFramepackPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py#L641</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "last_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 720"}, {"name": "width", "val": ": int = 1280"}, {"name": "num_frames", "val": ": int = 129"}, {"name": "latent_window_size", "val": ": int = 9"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "guidance_scale", "val": ": float = 6.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "image_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "last_image_latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "prompt_template", "val": ": typing.Dict[str, typing.Any] = {'template': '<|start_header_id|>system<|end_header_id|>\\n\\nDescribe the video by detailing the following aspects: 1. The main content and theme of the video.2. The color, shape, size, texture, quantity, text, and spatial relationships of the objects.3. Actions, events, behaviors temporal relationships, physical movement changes of the objects.4. background environment, light, style and atmosphere.5. 
camera angles, movements, and transitions used in the video:<|eot_id|><|start_header_id|>user<|end_header_id|>\\n\\n{}<|eot_id|>', 'crop_start': 95}"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "sampling_type", "val": ": FramepackSamplingType = <FramepackSamplingType.INVERTED_ANTI_DRIFTING: 'inverted_anti_drifting'>"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`) --
  The image to be used as the starting point for the video generation.
- **last_image** (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`, *optional*) --
  The optional last image to be used as the ending point for the video generation. This is useful for
  generating transitions between two images.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **height** (`int`, defaults to `720`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `1280`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `129`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **true_cfg_scale** (`float`, *optional*, defaults to 1.0) --
  When greater than 1.0 and a `negative_prompt` is provided, true classifier-free guidance is enabled.
- **guidance_scale** (`float`, defaults to `6.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality. Note that the only available
  HunyuanVideo model is CFG-distilled, which means that traditional guidance between unconditional and
  conditional latent is not applied.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **image_latents** (`torch.Tensor`, *optional*) --
  Pre-encoded image latents. If not provided, the image will be encoded using the VAE.
- **last_image_latents** (`torch.Tensor`, *optional*) --
  Pre-encoded last image latents. If not provided, the last image will be encoded using the VAE.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `HunyuanVideoFramepackPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~HunyuanVideoFramepackPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `HunyuanVideoFramepackPipelineOutput` is returned, otherwise a `tuple` is
returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



Examples:
##### Image-to-Video

<ExampleCodeBlock anchor="diffusers.HunyuanVideoFramepackPipeline.__call__.example">

```python
>>> import torch
>>> from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel
>>> from diffusers.utils import export_to_video, load_image
>>> from transformers import SiglipImageProcessor, SiglipVisionModel

>>> transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained(
...     "lllyasviel/FramePackI2V_HY", torch_dtype=torch.bfloat16
... )
>>> feature_extractor = SiglipImageProcessor.from_pretrained(
...     "lllyasviel/flux_redux_bfl", subfolder="feature_extractor"
... )
>>> image_encoder = SiglipVisionModel.from_pretrained(
...     "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16
... )
>>> pipe = HunyuanVideoFramepackPipeline.from_pretrained(
...     "hunyuanvideo-community/HunyuanVideo",
...     transformer=transformer,
...     feature_extractor=feature_extractor,
...     image_encoder=image_encoder,
...     torch_dtype=torch.float16,
... )
>>> pipe.vae.enable_tiling()
>>> pipe.to("cuda")

>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png"
... )
>>> output = pipe(
...     image=image,
...     prompt="A penguin dancing in the snow",
...     height=832,
...     width=480,
...     num_frames=91,
...     num_inference_steps=30,
...     guidance_scale=9.0,
...     generator=torch.Generator().manual_seed(0),
...     sampling_type="inverted_anti_drifting",
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=30)
```

</ExampleCodeBlock>

##### First and Last Image-to-Video

<ExampleCodeBlock anchor="diffusers.HunyuanVideoFramepackPipeline.__call__.example-2">

```python
>>> import torch
>>> from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel
>>> from diffusers.utils import export_to_video, load_image
>>> from transformers import SiglipImageProcessor, SiglipVisionModel

>>> transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained(
...     "lllyasviel/FramePackI2V_HY", torch_dtype=torch.bfloat16
... )
>>> feature_extractor = SiglipImageProcessor.from_pretrained(
...     "lllyasviel/flux_redux_bfl", subfolder="feature_extractor"
... )
>>> image_encoder = SiglipVisionModel.from_pretrained(
...     "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16
... )
>>> pipe = HunyuanVideoFramepackPipeline.from_pretrained(
...     "hunyuanvideo-community/HunyuanVideo",
...     transformer=transformer,
...     feature_extractor=feature_extractor,
...     image_encoder=image_encoder,
...     torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")

>>> prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."
>>> first_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png"
... )
>>> last_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png"
... )
>>> output = pipe(
...     image=first_image,
...     last_image=last_image,
...     prompt=prompt,
...     height=512,
...     width=512,
...     num_frames=91,
...     num_inference_steps=30,
...     guidance_scale=9.0,
...     generator=torch.Generator().manual_seed(0),
...     sampling_type="inverted_anti_drifting",
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=30)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.HunyuanVideoFramepackPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py#L581</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.HunyuanVideoFramepackPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py#L608</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.HunyuanVideoFramepackPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py#L568</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.HunyuanVideoFramepackPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py#L594</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for
processing larger images.


</div></div>

## HunyuanVideoPipelineOutput[[diffusers.pipelines.hunyuan_video.pipeline_output.HunyuanVideoPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.hunyuan_video.pipeline_output.HunyuanVideoPipelineOutput</name><anchor>diffusers.pipelines.hunyuan_video.pipeline_output.HunyuanVideoPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuan_video/pipeline_output.py#L12</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for HunyuanVideo pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/framepack.md" />

### Text-to-Video Generation with AnimateDiff
https://huggingface.co/docs/diffusers/main/api/pipelines/animatediff.md

# Text-to-Video Generation with AnimateDiff

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

## Overview

[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://huggingface.co/papers/2307.04725) by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai.

The abstract of the paper is the following:

*With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at [this https URL](https://animatediff.github.io/).*

## Available Pipelines

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [AnimateDiffPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff.py) | *Text-to-Video Generation with AnimateDiff* |
| [AnimateDiffControlNetPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_controlnet.py) | *Controlled Video-to-Video Generation with AnimateDiff using ControlNet* |
| [AnimateDiffSparseControlNetPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sparsectrl.py) | *Controlled Video-to-Video Generation with AnimateDiff using SparseCtrl* |
| [AnimateDiffSDXLPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sdxl.py) | *Text-to-Video Generation with AnimateDiff using SDXL* |
| [AnimateDiffVideoToVideoPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py) | *Video-to-Video Generation with AnimateDiff* |
| [AnimateDiffVideoToVideoControlNetPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video_controlnet.py) | *Video-to-Video Generation with AnimateDiff using ControlNet* |

## Available checkpoints

Motion Adapter checkpoints can be found under [guoyww](https://huggingface.co/guoyww/). These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5.

## Usage example

### AnimateDiffPipeline

AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet.

The following example demonstrates how to use a *MotionAdapter* checkpoint with Diffusers for inference based on Stable Diffusion 1.4/1.5.

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
# load SD 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe.scheduler = scheduler

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

Here are some sample outputs:

<table>
    <tr>
        <td><center>
        masterpiece, bestquality, sunset.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-realistic-doc.gif"
            alt="masterpiece, bestquality, sunset"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

> [!TIP]
> AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting `clip_sample=False` in the scheduler, as sample clipping can have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to `linear`.

### AnimateDiffControlNetPipeline

AnimateDiff can also be used with ControlNets. ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide depth maps, the ControlNet model generates a video that preserves the spatial information from the depth maps. It is a more flexible and accurate way to control the video generation process.

```python
import torch
from diffusers import AnimateDiffControlNetPipeline, AutoencoderKL, ControlNetModel, MotionAdapter, LCMScheduler
from diffusers.utils import export_to_gif, load_video

# Additionally, you will need to preprocess videos before they can be used with the ControlNet
# HF maintains just the right package for it: `pip install controlnet_aux`
from controlnet_aux.processor import ZoeDetector

# Download controlnets from https://huggingface.co/lllyasviel/ControlNet-v1-1 to use .from_single_file
# Download Diffusers-format controlnets, such as https://huggingface.co/lllyasviel/sd-controlnet-depth, to use .from_pretrained()
controlnet = ControlNetModel.from_single_file("control_v11f1p_sd15_depth.pth", torch_dtype=torch.float16)

# We use AnimateLCM for this example but one can use the original motion adapters as well (for example, https://huggingface.co/guoyww/animatediff-motion-adapter-v1-5-3)
motion_adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe: AnimateDiffControlNetPipeline = AnimateDiffControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
).to(device="cuda", dtype=torch.float16)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm-lora")
pipe.set_adapters(["lcm-lora"], [0.8])

depth_detector = ZoeDetector.from_pretrained("lllyasviel/Annotators").to("cuda")
video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif")
conditioning_frames = []

with pipe.progress_bar(total=len(video)) as progress_bar:
    for frame in video:
        conditioning_frames.append(depth_detector(frame))
        progress_bar.update()

prompt = "a panda, playing a guitar, sitting in a pink boat, in the ocean, mountains in background, realistic, high quality"
negative_prompt = "bad quality, worst quality"

video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=len(video),
    num_inference_steps=10,
    guidance_scale=2.0,
    conditioning_frames=conditioning_frames,
    generator=torch.Generator().manual_seed(42),
).frames[0]

export_to_gif(video, "animatediff_controlnet.gif", fps=8)
```

Here are some sample outputs:

<table align="center">
    <tr>
      <th align="center">Source Video</th>
      <th align="center">Output Video</th>
    </tr>
    <tr>
        <td align="center">
          raccoon playing a guitar
          <br />
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif" alt="racoon playing a guitar" />
        </td>
        <td align="center">
          a panda, playing a guitar, sitting in a pink boat, in the ocean, mountains in background, realistic, high quality
          <br/>
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-controlnet-output.gif" alt="a panda, playing a guitar, sitting in a pink boat, in the ocean, mountains in background, realistic, high quality" />
        </td>
    </tr>
</table>

### AnimateDiffSparseControlNetPipeline

[SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models](https://huggingface.co/papers/2311.16933) by Yuwei Guo, Ceyuan Yang, Anyi Rao, Maneesh Agrawala, Dahua Lin, and Bo Dai introduces a method for achieving controlled generation in text-to-video diffusion models.

The abstract from the paper is:

*The development of text-to-video (T2V), i.e., generating videos with a given text prompt, has been significantly advanced in recent years. However, relying solely on text prompts often results in ambiguous frame composition due to spatial uncertainty. The research community thus leverages the dense structure signals, e.g., per-frame depth/edge sequences, to enhance controllability, whose collection accordingly increases the burden of inference. In this work, we present SparseCtrl to enable flexible structure control with temporally sparse signals, requiring only one or a few inputs, as shown in Figure 1. It incorporates an additional condition encoder to process these sparse signals while leaving the pre-trained T2V model untouched. The proposed approach is compatible with various modalities, including sketches, depth maps, and RGB images, providing more practical control for video generation and promoting applications such as storyboarding, depth rendering, keyframe animation, and interpolation. Extensive experiments demonstrate the generalization of SparseCtrl on both original and personalized T2V generators. Codes and models will be publicly available at [this https URL](https://guoyww.github.io/projects/SparseCtrl).*

SparseCtrl introduces the following checkpoints for controlled text-to-video generation:

- [SparseCtrl Scribble](https://huggingface.co/guoyww/animatediff-sparsectrl-scribble)
- [SparseCtrl RGB](https://huggingface.co/guoyww/animatediff-sparsectrl-rgb)

#### Using SparseCtrl Scribble

```python
import torch

from diffusers import AnimateDiffSparseControlNetPipeline
from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
from diffusers.schedulers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_gif, load_image


model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-3"
controlnet_id = "guoyww/animatediff-sparsectrl-scribble"
lora_adapter_id = "guoyww/animatediff-motion-lora-v1-5-3"
vae_id = "stabilityai/sd-vae-ft-mse"
device = "cuda"

motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id, torch_dtype=torch.float16).to(device)
controlnet = SparseControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.float16).to(device)
vae = AutoencoderKL.from_pretrained(vae_id, torch_dtype=torch.float16).to(device)
scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    algorithm_type="dpmsolver++",
    use_karras_sigmas=True,
)
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    model_id,
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to(device)
pipe.load_lora_weights(lora_adapter_id, adapter_name="motion_lora")
pipe.fuse_lora(lora_scale=1.0)

prompt = "an aerial view of a cyberpunk city, night time, neon lights, masterpiece, high quality"
negative_prompt = "low quality, worst quality, letterboxed"

image_files = [
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-1.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-2.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-3.png"
]
condition_frame_indices = [0, 8, 15]
conditioning_frames = [load_image(img_file) for img_file in image_files]

video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    conditioning_frames=conditioning_frames,
    controlnet_conditioning_scale=1.0,
    controlnet_frame_indices=condition_frame_indices,
    generator=torch.Generator().manual_seed(1337),
).frames[0]
export_to_gif(video, "output.gif")
```

Here are some sample outputs:

<table align="center">
    <tr>
        <center>
          <b>an aerial view of a cyberpunk city, night time, neon lights, masterpiece, high quality</b>
        </center>
    </tr>
    <tr>
        <td>
          <center>
            <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-1.png" alt="scribble-1" />
          </center>
        </td>
        <td>
          <center>
            <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-2.png" alt="scribble-2" />
          </center>
        </td>
        <td>
          <center>
            <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-3.png" alt="scribble-3" />
          </center>
        </td>
    </tr>
    <tr>
        <td colspan=3>
          <center>
            <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-sparsectrl-scribble-results.gif" alt="an aerial view of a cyberpunk city, night time, neon lights, masterpiece, high quality" />
          </center>
        </td>
    </tr>
</table>

#### Using SparseCtrl RGB

```python
import torch

from diffusers import AnimateDiffSparseControlNetPipeline
from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
from diffusers.schedulers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_gif, load_image


model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-3"
controlnet_id = "guoyww/animatediff-sparsectrl-rgb"
lora_adapter_id = "guoyww/animatediff-motion-lora-v1-5-3"
vae_id = "stabilityai/sd-vae-ft-mse"
device = "cuda"

motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id, torch_dtype=torch.float16).to(device)
controlnet = SparseControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.float16).to(device)
vae = AutoencoderKL.from_pretrained(vae_id, torch_dtype=torch.float16).to(device)
scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    algorithm_type="dpmsolver++",
    use_karras_sigmas=True,
)
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    model_id,
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to(device)
pipe.load_lora_weights(lora_adapter_id, adapter_name="motion_lora")

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-firework.png")

video = pipe(
    prompt="closeup face photo of man in black clothes, night city street, bokeh, fireworks in background",
    negative_prompt="low quality, worst quality",
    num_inference_steps=25,
    conditioning_frames=image,
    controlnet_frame_indices=[0],
    controlnet_conditioning_scale=1.0,
    generator=torch.Generator().manual_seed(42),
).frames[0]
export_to_gif(video, "output.gif")
```

Here are some sample outputs:

<table align="center">
    <tr>
        <center>
          <b>closeup face photo of man in black clothes, night city street, bokeh, fireworks in background</b>
        </center>
    </tr>
    <tr>
        <td>
          <center>
            <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-firework.png" alt="closeup face photo of man in black clothes, night city street, bokeh, fireworks in background" />
          </center>
        </td>
        <td>
          <center>
            <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-sparsectrl-rgb-result.gif" alt="closeup face photo of man in black clothes, night city street, bokeh, fireworks in background" />
          </center>
        </td>
    </tr>
</table>

### AnimateDiffSDXLPipeline

AnimateDiff can also be used with SDXL models. This is currently an experimental feature as only a beta release of the motion adapter checkpoint is available.

```python
import torch
from diffusers.models import MotionAdapter
from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16)

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    model_id,
    motion_adapter=adapter,
    scheduler=scheduler,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

output = pipe(
    prompt="a panda surfing in the ocean, realistic, high quality",
    negative_prompt="low quality, worst quality",
    num_inference_steps=20,
    guidance_scale=8,
    width=1024,
    height=1024,
    num_frames=16,
)

frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

### AnimateDiffVideoToVideoPipeline

AnimateDiff can also be used to generate a new video from an initial one, whether to produce visually similar variations or to edit the style, characters, background, or other attributes, allowing you to seamlessly explore creative possibilities.

```python
import imageio
import requests
import torch
from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif
from io import BytesIO
from PIL import Image

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
# load SD 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe.scheduler = scheduler

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# helper function to load videos
def load_video(file_path: str):
    images = []

    if file_path.startswith(('http://', 'https://')):
        # If the file_path is a URL
        response = requests.get(file_path)
        response.raise_for_status()
        content = BytesIO(response.content)
        vid = imageio.get_reader(content)
    else:
        # Assuming it's a local file path
        vid = imageio.get_reader(file_path)

    for frame in vid:
        pil_image = Image.fromarray(frame)
        images.append(pil_image)

    return images

video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif")

output = pipe(
    video=video,
    prompt="panda playing a guitar, on a boat, in the ocean, high quality",
    negative_prompt="bad quality, worse quality",
    guidance_scale=7.5,
    num_inference_steps=25,
    strength=0.5,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

Here are some sample outputs:

<table>
    <tr>
      <th align=center>Source Video</th>
      <th align=center>Output Video</th>
    </tr>
    <tr>
        <td align=center>
          raccoon playing a guitar
          <br />
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif"
              alt="racoon playing a guitar"
              style="width: 300px;" />
        </td>
        <td align=center>
          panda playing a guitar
          <br/>
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-output-1.gif"
              alt="panda playing a guitar"
              style="width: 300px;" />
        </td>
    </tr>
    <tr>
        <td align=center>
          closeup of margot robbie, fireworks in the background, high quality
          <br />
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-2.gif"
              alt="closeup of margot robbie, fireworks in the background, high quality"
              style="width: 300px;" />
        </td>
        <td align=center>
          closeup of tony stark, robert downey jr, fireworks
          <br/>
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-output-2.gif"
              alt="closeup of tony stark, robert downey jr, fireworks"
              style="width: 300px;" />
        </td>
    </tr>
</table>



### AnimateDiffVideoToVideoControlNetPipeline

AnimateDiff can be used together with ControlNets to enhance video-to-video generation by allowing for precise control over the output. ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala, and allows you to condition Stable Diffusion with an additional control image to ensure that the spatial information is preserved throughout the video. 

This pipeline allows you to condition your generation both on the original video and on a sequence of control images.

```python
import torch
from PIL import Image
from tqdm.auto import tqdm

from controlnet_aux.processor import OpenposeDetector
from diffusers import AnimateDiffVideoToVideoControlNetPipeline
from diffusers.utils import export_to_gif, load_video
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter, LCMScheduler

# Load the ControlNet
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
# Load the motion adapter
motion_adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")
# Load SD 1.5 based finetuned model
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = AnimateDiffVideoToVideoControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
).to(device="cuda", dtype=torch.float16)

# Enable LCM to speed up inference
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm-lora")
pipe.set_adapters(["lcm-lora"], [0.8])

video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/dance.gif")
video = [frame.convert("RGB") for frame in video]

prompt = "astronaut in space, dancing"
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"

# Create controlnet preprocessor
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators").to("cuda")

# Preprocess controlnet images
conditioning_frames = []
for frame in tqdm(video):
    conditioning_frames.append(open_pose(frame))

strength = 0.8
with torch.inference_mode():
    video = pipe(
        video=video,
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=10,
        guidance_scale=2.0,
        controlnet_conditioning_scale=0.75,
        conditioning_frames=conditioning_frames,
        strength=strength,
        generator=torch.Generator().manual_seed(42),
    ).frames[0]

video = [frame.resize(conditioning_frames[0].size) for frame in video]
export_to_gif(video, f"animatediff_vid2vid_controlnet.gif", fps=8)
```

Here are some sample outputs:

<table align="center">
    <tr>
      <th align="center">Source Video</th>
      <th align="center">Output Video</th>
    </tr>
    <tr>
        <td align="center">
          anime girl, dancing
          <br />
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/dance.gif" alt="anime girl, dancing" />
        </td>
        <td align="center">
          astronaut in space, dancing
          <br/>
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff_vid2vid_controlnet.gif" alt="astronaut in space, dancing" />
        </td>
    </tr>
</table>

**The lights and composition were transferred from the Source Video.**

## Using Motion LoRAs

Motion LoRAs are a collection of LoRAs that work with the `guoyww/animatediff-motion-adapter-v1-5-2` checkpoint. These LoRAs are responsible for adding specific types of motion to the animations.

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
# load SD 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)
pipe.load_lora_weights(
    "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out"
)

scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    beta_schedule="linear",
    timestep_spacing="linspace",
    steps_offset=1,
)
pipe.scheduler = scheduler

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

<table>
    <tr>
        <td><center>
        masterpiece, bestquality, sunset.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-zoom-out-lora.gif"
            alt="masterpiece, bestquality, sunset"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

## Using Motion LoRAs with PEFT

You can also leverage the [PEFT](https://github.com/huggingface/peft) backend to combine Motion LoRAs and create more complex animations.

First install PEFT with

```shell
pip install peft
```

Then you can use the following code to combine Motion LoRAs.

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
# load SD 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16)

pipe.load_lora_weights(
    "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out",
)
pipe.load_lora_weights(
    "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left",
)
pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0])

scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipe.scheduler = scheduler

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

<table>
    <tr>
        <td><center>
        masterpiece, bestquality, sunset.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-zoom-out-pan-left-lora.gif"
            alt="masterpiece, bestquality, sunset"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

## Using FreeInit

[FreeInit: Bridging Initialization Gap in Video Diffusion Models](https://huggingface.co/papers/2312.07537) by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu.

FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to AnimateDiff, ModelScope, VideoCrafter, and various other video generation models seamlessly at inference time, and works by iteratively refining the latent initialization noise. More details can be found in the paper.

The following example demonstrates the usage of FreeInit.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1
)

# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# enable FreeInit
# Refer to the enable_free_init documentation for a full list of configurable parameters
pipe.enable_free_init(method="butterworth", use_fast_sampling=True)

# run inference
output = pipe(
    prompt="a panda playing a guitar, on a boat, in the ocean, high quality",
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=20,
    generator=torch.Generator("cpu").manual_seed(666),
)

# disable FreeInit
pipe.disable_free_init()

frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

> [!WARNING]
> FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the `num_iters` parameter that is set when enabling it. Setting `use_fast_sampling=True` can improve overall runtime, at the cost of lower quality compared to `use_fast_sampling=False`, while still producing better results than vanilla video generation.
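
For example, a rough way to trade quality against compute is to change the number of FreeInit refinement iterations (the values below are illustrative; see the `enable_free_init` docstring for the full argument list):

```python
# Fewer refinement iterations means less extra sampling but a smaller quality gain.
pipe.enable_free_init(num_iters=2, use_fast_sampling=True)

# ... run the pipeline as usual, then turn FreeInit off again:
pipe.disable_free_init()
```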

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
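
As a minimal sketch of the component reuse mentioned in the tip above (assuming the target pipeline accepts the same set of components), the pipeline loaded for FreeInit can be turned into a video-to-video pipeline without reloading any weights:

```python
from diffusers import AnimateDiffVideoToVideoPipeline

# Reuse the already-instantiated components of `pipe` instead of downloading them again.
video_pipe = AnimateDiffVideoToVideoPipeline(**pipe.components)
```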

<table>
    <tr>
      <th align=center>Without FreeInit enabled</th>
      <th align=center>With FreeInit enabled</th>
    </tr>
    <tr>
        <td align=center>
          panda playing a guitar
          <br />
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-no-freeinit.gif"
              alt="panda playing a guitar"
              style="width: 300px;" />
        </td>
        <td align=center>
          panda playing a guitar
          <br/>
          <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-freeinit.gif"
              alt="panda playing a guitar"
              style="width: 300px;" />
        </td>
    </tr>
</table>

## Using AnimateLCM

[AnimateLCM](https://animatelcm.github.io/) is a motion module checkpoint and an [LCM LoRA](https://huggingface.co/docs/diffusers/using-diffusers/inference_with_lcm_lora) that have been created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora")

pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution",
    negative_prompt="bad quality, worse quality, low resolution",
    num_frames=16,
    guidance_scale=1.5,
    num_inference_steps=6,
    generator=torch.Generator("cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "animatelcm.gif")
```

<table>
    <tr>
        <td><center>
        A space rocket, 4K.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatelcm-output.gif"
            alt="A space rocket, 4K"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

AnimateLCM is also compatible with existing [Motion LoRAs](https://huggingface.co/collections/dn6/animatediff-motion-loras-654cb8ad732b9e3cf4d3c17e).

```python
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora")
pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up")

pipe.set_adapters(["lcm-lora", "tilt-up"], [1.0, 0.8])
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution",
    negative_prompt="bad quality, worse quality, low resolution",
    num_frames=16,
    guidance_scale=1.5,
    num_inference_steps=6,
    generator=torch.Generator("cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "animatelcm-motion-lora.gif")
```

<table>
    <tr>
        <td><center>
        A space rocket, 4K.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatelcm-motion-lora.gif"
            alt="A space rocket, 4K"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

## Using FreeNoise

[FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling](https://huggingface.co/papers/2310.15169) by Haonan Qiu, Menghan Xia, Yong Zhang, Yingqing He, Xintao Wang, Ying Shan, Ziwei Liu.

FreeNoise is a sampling mechanism that can generate longer videos with short-video generation models by employing noise rescheduling, temporal attention over sliding windows, and weighted averaging of latent frames. It can also be used with multiple prompts to allow for interpolated video generation. More details are available in the paper.

The currently supported AnimateDiff pipelines that can be used with FreeNoise are:
- [AnimateDiffPipeline](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.AnimateDiffPipeline)
- [AnimateDiffControlNetPipeline](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.AnimateDiffControlNetPipeline)
- [AnimateDiffVideoToVideoPipeline](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.AnimateDiffVideoToVideoPipeline)
- [AnimateDiffVideoToVideoControlNetPipeline](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.AnimateDiffVideoToVideoControlNetPipeline)

In order to use FreeNoise, a single line needs to be added to the inference code after loading your pipelines.

```diff
+ pipe.enable_free_noise()
```

After this, either a single prompt can be used, or multiple prompts can be passed as a dictionary of integer-string pairs. The integer keys of the dictionary correspond to the frame index at which the influence of that prompt is maximal, and each frame index should map to a single string prompt. Prompts for intermediate frame indices that are not passed in the dictionary are created by interpolating between the frame prompts that are passed. By default, simple linear interpolation is used, but you can customize this behaviour by passing a callback to the `prompt_interpolation_callback` parameter when enabling FreeNoise.

Full example:

```python
import torch
from diffusers import AutoencoderKL, AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_video, load_image

# Load pipeline
dtype = torch.float16
motion_adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=dtype)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=dtype)

pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=motion_adapter, vae=vae, torch_dtype=dtype)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

pipe.load_lora_weights(
    "wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm_lora"
)
pipe.set_adapters(["lcm_lora"], [0.8])

# Enable FreeNoise for long prompt generation
pipe.enable_free_noise(context_length=16, context_stride=4)
pipe.to("cuda")

# Can be a single prompt, or a dictionary with frame timesteps
prompt = {
    0: "A caterpillar on a leaf, high quality, photorealistic",
    40: "A caterpillar transforming into a cocoon, on a leaf, near flowers, photorealistic",
    80: "A cocoon on a leaf, flowers in the background, photorealistic",
    120: "A cocoon maturing and a butterfly being born, flowers and leaves visible in the background, photorealistic",
    160: "A beautiful butterfly, vibrant colors, sitting on a leaf, flowers in the background, photorealistic",
    200: "A beautiful butterfly, flying away in a forest, photorealistic",
    240: "A cyberpunk butterfly, neon lights, glowing",
}
negative_prompt = "bad quality, worst quality, jpeg artifacts"

# Run inference
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=256,
    guidance_scale=2.5,
    num_inference_steps=10,
    generator=torch.Generator("cpu").manual_seed(0),
)

# Save video
frames = output.frames[0]
export_to_video(frames, "output.mp4", fps=16)
```

### FreeNoise memory savings

Since FreeNoise processes multiple frames together, there are parts of the model where the memory required exceeds that available on typical consumer GPUs. The main memory bottlenecks that we identified are the spatial and temporal attention blocks, upsampling and downsampling blocks, resnet blocks, and feed-forward layers. Since most of these blocks operate effectively only on the channel/embedding dimension, one can perform chunked inference across the batch dimensions. The batch dimension in AnimateDiff is either spatial (`[B x F, H x W, C]`) or temporal (`[B x H x W, F, C]`) in nature (this may seem counter-intuitive, but these batch dimensions are correct, because spatial blocks process across the `B x F` dimension while the temporal blocks process across the `B x H x W` dimension). We introduce a `SplitInferenceModule` that makes it easier to chunk across any dimension and perform inference. This saves a lot of memory but comes at the cost of slower inference.

```diff
# Load pipeline and adapters
# ...
+ pipe.enable_free_noise_split_inference()
+ pipe.unet.enable_forward_chunking(16)
```

The `pipe.enable_free_noise_split_inference` method accepts two parameters: `spatial_split_size` (defaults to `256`) and `temporal_split_size` (defaults to `16`). These can be configured based on how much VRAM you have available. A lower split size results in lower memory usage but slower inference, whereas a larger split size results in faster inference at the cost of more memory.
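
For example (the split sizes below are illustrative values for a GPU with limited VRAM; tune them for your hardware):

```python
# Smaller split sizes reduce peak memory at the cost of slower inference.
pipe.enable_free_noise_split_inference(spatial_split_size=128, temporal_split_size=8)
pipe.unet.enable_forward_chunking(8)
```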

## Using `from_single_file` with the MotionAdapter

`diffusers>=0.30.0` supports loading AnimateDiff checkpoints into the `MotionAdapter` in their original format via `from_single_file`.

```python
import torch

from diffusers import AnimateDiffPipeline, MotionAdapter

ckpt_path = "https://huggingface.co/Lightricks/LongAnimateDiff/blob/main/lt_long_mm_32_frames.ckpt"

adapter = MotionAdapter.from_single_file(ckpt_path, torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter)
```

## AnimateDiffPipeline[[diffusers.AnimateDiffPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffPipeline</name><anchor>diffusers.AnimateDiffPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff.py#L78</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetMotionModel to denoise the encoded video latents.
- **motion_adapter** (`MotionAdapter`) --
  A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff.py#L573</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = 16"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "decode_chunk_size", "val": ": int = 16"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **decode_chunk_size** (`int`, defaults to `16`) --
  The number of frames to decode at a time when calling `decode_latents` method.</paramsdesc><paramgroups>0</paramgroups><rettype>[AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AnimateDiffPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
>>> from diffusers.utils import export_to_gif

>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter)
>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False)
>>> output = pipe(prompt="A corgi walking in the park")
>>> frames = output.frames[0]
>>> export_to_gif(frames, "animation.gif")
```

</ExampleCodeBlock>
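
The `callback_on_step_end` and `callback_on_step_end_tensor_inputs` arguments can be used to inspect or tweak tensors between denoising steps. A minimal sketch, reusing the `pipe` object from the example above (the callback name and the printed values are illustrative, not part of the API):

```py
>>> def log_latents(pipeline, step, timestep, callback_kwargs):
...     # "latents" is available here because it is listed in `callback_on_step_end_tensor_inputs`.
...     latents = callback_kwargs["latents"]
...     print(f"step {step}, timestep {timestep}, latents shape {tuple(latents.shape)}")
...     # The (optionally modified) dict must be returned so the pipeline can pick up any changes.
...     return callback_kwargs

>>> output = pipe(
...     prompt="A corgi walking in the park",
...     callback_on_step_end=log_latents,
...     callback_on_step_end_tensor_inputs=["latents"],
... )
```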







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff.py#L156</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
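
As a rough usage sketch (the checkpoints and argument values mirror the example above and are illustrative, not required by this method), the returned embeddings can be fed back to the pipeline through `prompt_embeds` and `negative_prompt_embeds`:

```py
>>> from diffusers import AnimateDiffPipeline, MotionAdapter

>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter)

>>> # Encode once, then reuse the embeddings for several generations.
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt="A corgi walking in the park",
...     device="cpu",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="low quality, worst quality",
... )
>>> output = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds)
```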




</div></div>

## AnimateDiffControlNetPipeline[[diffusers.AnimateDiffControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffControlNetPipeline</name><anchor>diffusers.AnimateDiffControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_controlnet.py#L120</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetMotionModel to denoise the encoded video latents.
- **motion_adapter** (`MotionAdapter`) --
  A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_controlnet.py#L721</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_frames", "val": ": typing.Optional[int] = 16"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "conditioning_frames", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "decode_chunk_size", "val": ": int = 16"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **conditioning_frames** (`List[PipelineImageInput]`, *optional*) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If multiple
  ControlNets are specified, images must be passed as a list such that each element of the list can be
  correctly batched for input to a single ControlNet.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



Examples:
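A minimal sketch of calling this pipeline, assuming a Canny ControlNet checkpoint (`lllyasviel/sd-controlnet-canny`), the motion adapter and base model used elsewhere on this page, and pre-computed Canny edge maps passed as `conditioning_frames` (one per generated frame; the local file names are placeholders):

```python
>>> import torch
>>> from diffusers import AnimateDiffControlNetPipeline, ControlNetModel, DDIMScheduler, MotionAdapter
>>> from diffusers.utils import export_to_gif, load_image

>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
>>> pipe = AnimateDiffControlNetPipeline.from_pretrained(
...     "frankjoshua/toonyou_beta6",
...     motion_adapter=adapter,
...     controlnet=controlnet,
...     torch_dtype=torch.float16,
... ).to("cuda")
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear", clip_sample=False)

>>> # 16 pre-computed Canny edge maps, one per output frame (placeholder file names).
>>> conditioning_frames = [load_image(f"canny_frame_{i}.png") for i in range(16)]

>>> video = pipe(
...     prompt="a corgi walking in the park, high quality",
...     negative_prompt="low quality, worst quality",
...     num_frames=16,
...     conditioning_frames=conditioning_frames,
...     num_inference_steps=25,
... ).frames[0]
>>> export_to_gif(video, "animation.gif")
```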






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_controlnet.py#L199</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## AnimateDiffSparseControlNetPipeline[[diffusers.AnimateDiffSparseControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffSparseControlNetPipeline</name><anchor>diffusers.AnimateDiffSparseControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sparsectrl.py#L132</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "controlnet", "val": ": SparseControlNetModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetMotionModel to denoise the encoded video latents.
- **motion_adapter** (`MotionAdapter`) --
  A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for controlled text-to-video generation using the method described in [SparseCtrl: Adding Sparse Controls
to Text-to-Video Diffusion Models](https://huggingface.co/papers/2311.16933).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffSparseControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sparsectrl.py#L712</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_frames", "val": ": int = 16"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "conditioning_frames", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "output_type", "val": ": str = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "controlnet_frame_indices", "val": ": typing.List[int] = [0]"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **conditioning_frames** (`List[PipelineImageInput]`, *optional*) --
  The SparseControlNet input to provide guidance to the `unet` for generation.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **controlnet_frame_indices** (`List[int]`) --
  The indices where the conditioning frames must be applied for generation. Multiple frames can be
  provided to guide the model to generate similar structure outputs, where the `unet` can
  "fill-in-the-gaps" for interpolation videos, or a single frame could be provided for general expected
  structure. Must have the same length as `conditioning_frames`.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AnimateDiffSparseControlNetPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import AnimateDiffSparseControlNetPipeline
>>> from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
>>> from diffusers.schedulers import DPMSolverMultistepScheduler
>>> from diffusers.utils import export_to_gif, load_image

>>> model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
>>> motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-3"
>>> controlnet_id = "guoyww/animatediff-sparsectrl-scribble"
>>> lora_adapter_id = "guoyww/animatediff-motion-lora-v1-5-3"
>>> vae_id = "stabilityai/sd-vae-ft-mse"
>>> device = "cuda"

>>> motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id, torch_dtype=torch.float16).to(device)
>>> controlnet = SparseControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.float16).to(device)
>>> vae = AutoencoderKL.from_pretrained(vae_id, torch_dtype=torch.float16).to(device)
>>> scheduler = DPMSolverMultistepScheduler.from_pretrained(
...     model_id,
...     subfolder="scheduler",
...     beta_schedule="linear",
...     algorithm_type="dpmsolver++",
...     use_karras_sigmas=True,
... )
>>> pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
...     model_id,
...     motion_adapter=motion_adapter,
...     controlnet=controlnet,
...     vae=vae,
...     scheduler=scheduler,
...     torch_dtype=torch.float16,
... ).to(device)
>>> pipe.load_lora_weights(lora_adapter_id, adapter_name="motion_lora")
>>> pipe.fuse_lora(lora_scale=1.0)

>>> prompt = "an aerial view of a cyberpunk city, night time, neon lights, masterpiece, high quality"
>>> negative_prompt = "low quality, worst quality, letterboxed"

>>> image_files = [
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-1.png",
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-2.png",
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-3.png",
... ]
>>> condition_frame_indices = [0, 8, 15]
>>> conditioning_frames = [load_image(img_file) for img_file in image_files]

>>> video = pipe(
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     num_inference_steps=25,
...     conditioning_frames=conditioning_frames,
...     controlnet_conditioning_scale=1.0,
...     controlnet_frame_indices=condition_frame_indices,
...     generator=torch.Generator().manual_seed(1337),
... ).frames[0]
>>> export_to_gif(video, "output.gif")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffSparseControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sparsectrl.py#L208</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## AnimateDiffSDXLPipeline[[diffusers.AnimateDiffSDXLPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffSDXLPipeline</name><anchor>diffusers.AnimateDiffSDXLPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sdxl.py#L210</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1.0`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffSDXLPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sdxl.py#L869</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_frames", "val": ": int = 16"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **num_frames** (`int`, *optional*, defaults to 16) --
  The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
  amounts to 2 seconds of video.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality video at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images closely linked to
  the text `prompt`, usually at the expense of lower video quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the video generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the video generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. If not provided, embeddings are computed from the
  `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). `guidance_rescale` is defined as `φ` in equation 16 of
  [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as `target_size` in most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.AnimateDiffSDXLPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers.models import MotionAdapter
>>> from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler
>>> from diffusers.utils import export_to_gif

>>> adapter = MotionAdapter.from_pretrained(
...     "a-r-r-o-w/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
... )

>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0"
>>> scheduler = DDIMScheduler.from_pretrained(
...     model_id,
...     subfolder="scheduler",
...     clip_sample=False,
...     timestep_spacing="linspace",
...     beta_schedule="linear",
...     steps_offset=1,
... )
>>> pipe = AnimateDiffSDXLPipeline.from_pretrained(
...     model_id,
...     motion_adapter=adapter,
...     scheduler=scheduler,
...     torch_dtype=torch.float16,
...     variant="fp16",
... ).to("cuda")

>>> # enable memory savings
>>> pipe.enable_vae_slicing()
>>> pipe.enable_vae_tiling()

>>> output = pipe(
...     prompt="a panda surfing in the ocean, realistic, high quality",
...     negative_prompt="low quality, worst quality",
...     num_inference_steps=20,
...     guidance_scale=8,
...     width=1024,
...     height=1024,
...     num_frames=16,
... )

>>> frames = output.frames[0]
>>> export_to_gif(frames, "animation.gif")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffSDXLPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sdxl.py#L329</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_videos_per_prompt** (`int`) --
  number of videos that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
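
As an illustrative sketch (not an official example), the method can be used to precompute SDXL-style embeddings once and reuse them across calls, for instance for prompt weighting. It assumes `pipe` is an already-loaded `AnimateDiffSDXLPipeline` and that the method returns the four embedding tensors in the same order as the other SDXL pipelines:

```py
>>> # Hypothetical usage: precompute prompt embeddings and pass them to the pipeline call.
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="a panda surfing in the ocean, realistic, high quality",
...     negative_prompt="low quality, worst quality",
...     device="cuda",
...     num_videos_per_prompt=1,
...     do_classifier_free_guidance=True,
... )

>>> output = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     num_inference_steps=20,
...     num_frames=16,
... )
```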




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.AnimateDiffSDXLPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_sdxl.py#L804</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
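
As a quick illustrative sketch (assuming `pipe` is a loaded `AnimateDiffSDXLPipeline`), the method maps a batch of guidance-scale values to embedding vectors of shape `(len(w), embedding_dim)`:

```py
>>> import torch

>>> # Embed two guidance-scale values; the result has one row per value.
>>> w = torch.tensor([7.5, 8.0])
>>> emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256, dtype=torch.float16)
>>> emb.shape
torch.Size([2, 256])
```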








</div></div>

## AnimateDiffVideoToVideoPipeline[[diffusers.AnimateDiffVideoToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffVideoToVideoPipeline</name><anchor>diffusers.AnimateDiffVideoToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py#L181</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetMotionModel to denoise the encoded video latents.
- **motion_adapter** (`MotionAdapter`) --
  A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for video-to-video generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
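
As an illustrative sketch of these loaders (the repository names below are examples, not requirements), a motion LoRA and an IP-Adapter could be attached to an already-loaded `AnimateDiffVideoToVideoPipeline`:

```py
>>> # Illustrative only: attach a motion LoRA and an IP-Adapter; substitute any compatible checkpoints.
>>> pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out")
>>> pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
>>> pipe.set_ip_adapter_scale(0.6)
```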





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffVideoToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py#L746</source><parameters>[{"name": "video", "val": ": typing.List[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "enforce_inference_steps", "val": ": bool = False"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "decode_chunk_size", "val": ": int = 16"}]</parameters><paramsdesc>- **video** (`List[PipelineImageInput]`) --
  The input video to condition the generation on. Must be a list of images/frames of the video.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Higher strength leads to more differences between the original and the generated video.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with length equal to the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an `AnimateDiffPipelineOutput` instead of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **decode_chunk_size** (`int`, defaults to `16`) --
  The number of frames to decode at a time when calling `decode_latents` method.</paramsdesc><paramgroups>0</paramgroups><rettype>[pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



Examples:
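
A minimal video-to-video sketch, not an official example: it assumes the motion adapter `guoyww/animatediff-motion-adapter-v1-5-2`, the base checkpoint `SG161222/Realistic_Vision_V5.1_noVAE` (any compatible checkpoints work), and a short local clip `input.mp4`:

```py
>>> import imageio
>>> import torch
>>> from PIL import Image

>>> from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter
>>> from diffusers.utils import export_to_gif


>>> def load_video(file_path):
...     # Read a clip into a list of PIL frames, the format expected by the `video` argument.
...     return [Image.fromarray(frame) for frame in imageio.get_reader(file_path)]


>>> adapter = MotionAdapter.from_pretrained(
...     "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
... )
>>> pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
...     "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
... ).to("cuda")
>>> pipe.scheduler = DDIMScheduler.from_config(
...     pipe.scheduler.config, clip_sample=False, timestep_spacing="linspace", beta_schedule="linear"
... )

>>> video = load_video("input.mp4")
>>> output = pipe(
...     video=video,
...     prompt="panda playing a guitar, on a boat, in the ocean, high quality",
...     negative_prompt="bad quality, worse quality",
...     strength=0.5,
...     guidance_scale=7.5,
...     num_inference_steps=25,
...     generator=torch.Generator("cpu").manual_seed(42),
... )
>>> export_to_gif(output.frames[0], "animation.gif")
```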






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffVideoToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py#L258</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## AnimateDiffVideoToVideoControlNetPipeline[[diffusers.AnimateDiffVideoToVideoControlNetPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AnimateDiffVideoToVideoControlNetPipeline</name><anchor>diffusers.AnimateDiffVideoToVideoControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video_controlnet.py#L199</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": typing.Union[diffusers.models.unets.unet_2d_condition.UNet2DConditionModel, diffusers.models.unets.unet_motion_model.UNetMotionModel]"}, {"name": "motion_adapter", "val": ": MotionAdapter"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler]"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** (`CLIPTokenizer`) --
  A [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer) to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used to create a UNetMotionModel to denoise the encoded video latents.
- **motion_adapter** (`MotionAdapter`) --
  A `MotionAdapter` to be used in combination with `unet` to denoise the encoded video latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]` or `Tuple[ControlNetModel]` or `MultiControlNetModel`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for video-to-video generation with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.AnimateDiffVideoToVideoControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video_controlnet.py#L911</source><parameters>[{"name": "video", "val": ": typing.List[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "enforce_inference_steps", "val": ": bool = False"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "conditioning_frames", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "decode_chunk_size", "val": ": int = 16"}]</parameters><paramsdesc>- **video** (`List[PipelineImageInput]`) --
  The input video to condition the generation on. Must be a list of images/frames of the video.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated video.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated video.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to higher quality videos at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Higher strength leads to more differences between the original and the generated video.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`. Latents should be of shape
  `(batch_size, num_channel, num_frames, height, width)`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with length equal to the number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **conditioning_frames** (`List[PipelineImageInput]`, *optional*) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If multiple
  ControlNets are specified, images must be passed as a list such that each element of the list can be
  correctly batched for input to a single ControlNet.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between `torch.Tensor`, `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an `AnimateDiffPipelineOutput` instead of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **decode_chunk_size** (`int`, defaults to `16`) --
  The number of frames to decode at a time when calling `decode_latents` method.</paramsdesc><paramgroups>0</paramgroups><rettype>[pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.pipelines.animatediff.AnimateDiffPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



Examples:
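
A hedged sketch of ControlNet-guided video-to-video, not an official example: it assumes the same motion adapter and base checkpoint as above, the openpose ControlNet `lllyasviel/sd-controlnet-openpose`, and that `video` (the source clip as PIL frames) and `conditioning_frames` (e.g. openpose renderings, one per input frame) have already been prepared:

```py
>>> import torch
>>> from diffusers import (
...     AnimateDiffVideoToVideoControlNetPipeline,
...     ControlNetModel,
...     DDIMScheduler,
...     MotionAdapter,
... )
>>> from diffusers.utils import export_to_gif

>>> adapter = MotionAdapter.from_pretrained(
...     "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
... )
>>> controlnet = ControlNetModel.from_pretrained(
...     "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
... )
>>> pipe = AnimateDiffVideoToVideoControlNetPipeline.from_pretrained(
...     "SG161222/Realistic_Vision_V5.1_noVAE",
...     motion_adapter=adapter,
...     controlnet=controlnet,
...     torch_dtype=torch.float16,
... ).to("cuda")
>>> pipe.scheduler = DDIMScheduler.from_config(
...     pipe.scheduler.config, clip_sample=False, timestep_spacing="linspace", beta_schedule="linear"
... )

>>> # `video` and `conditioning_frames` are assumed to be prepared lists of PIL frames of equal length.
>>> output = pipe(
...     video=video,
...     conditioning_frames=conditioning_frames,
...     prompt="astronaut dancing on the moon, high quality",
...     negative_prompt="bad quality, worse quality",
...     strength=0.6,
...     guidance_scale=7.5,
...     num_inference_steps=25,
...     controlnet_conditioning_scale=0.8,
... )
>>> export_to_gif(output.frames[0], "animation.gif")
```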






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.AnimateDiffVideoToVideoControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video_controlnet.py#L289</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## AnimateDiffPipelineOutput[[diffusers.pipelines.animatediff.AnimateDiffPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput</name><anchor>diffusers.pipelines.animatediff.AnimateDiffPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_output.py#L12</source><parameters>[{"name": "frames", "val": ": typing.Union[torch.Tensor, numpy.ndarray, typing.List[typing.List[PIL.Image.Image]]]"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or `List[List[PIL.Image.Image]]`) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of
  shape `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for AnimateDiff pipelines.


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/animatediff.md" />

### Paint by Example
https://huggingface.co/docs/diffusers/main/api/pipelines/paint_by_example.md

# Paint by Example

[Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://huggingface.co/papers/2211.13227) is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.

The abstract from the paper is:

*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*

The original codebase can be found at [Fantasy-Studio/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example), and you can try it out in a [demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example).

## Tips

Paint by Example is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint is warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to inpaint partly masked images conditioned on example and reference images.

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## PaintByExamplePipeline[[diffusers.PaintByExamplePipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PaintByExamplePipeline</name><anchor>diffusers.PaintByExamplePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py#L158</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "image_encoder", "val": ": PaintByExampleImageEncoder"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler]"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = False"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.PaintByExamplePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py#L397</source><parameters>[{"name": "example_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image]"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image]"}, {"name": "mask_image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}]</parameters><paramsdesc>- **example_image** (`torch.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`) --
  An example image to guide image generation.
- **image** (`torch.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`) --
  `Image` or tensor representing an image batch to be inpainted (parts of the image are masked out with
  `mask_image` and repainted according to `prompt`).
- **mask_image** (`torch.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]`) --
  `Image` or tensor representing an image batch to mask `image`. White pixels in the mask are repainted,
  while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a single channel
  (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the
  expected shape would be `(B, H, W, 1)`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.PaintByExamplePipeline.__call__.example">

Example:

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO
>>> from diffusers import PaintByExamplePipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = (
...     "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
... )
>>> mask_url = (
...     "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
... )
>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"

>>> init_image = download_image(img_url).resize((512, 512))
>>> mask_image = download_image(mask_url).resize((512, 512))
>>> example_image = download_image(example_url).resize((512, 512))

>>> pipe = PaintByExamplePipeline.from_pretrained(
...     "Fantasy-Studio/Paint-by-Example",
...     torch_dtype=torch.float16,
... )
>>> pipe = pipe.to("cuda")

>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
>>> image
```

</ExampleCodeBlock>






</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/paint_by_example.md" />

### ControlNet with Flux.1
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_flux.md

# ControlNet with Flux.1

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

FluxControlNetPipeline is an implementation of ControlNet for Flux.1.

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This controlnet code is implemented by [The InstantX Team](https://huggingface.co/InstantX). You can find pre-trained checkpoints for Flux-ControlNet in the table below:


| ControlNet type | Developer | Link |
| -------- | ---------- | ---- |
| Canny | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny) |
| Depth | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth) |
| Union | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union) |

XLabs ControlNets, contributed by the [XLabs team](https://huggingface.co/XLabs-AI), are also supported.

| ControlNet type | Developer | Link |
| -------- | ---------- | ---- |
| Canny | [The XLabs Team](https://huggingface.co/XLabs-AI) | [Link](https://huggingface.co/XLabs-AI/flux-controlnet-canny-diffusers) |
| Depth | [The XLabs Team](https://huggingface.co/XLabs-AI) | [Link](https://huggingface.co/XLabs-AI/flux-controlnet-depth-diffusers) |
| HED | [The XLabs Team](https://huggingface.co/XLabs-AI) | [Link](https://huggingface.co/XLabs-AI/flux-controlnet-hed-diffusers) |


> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
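
The Union checkpoint handles several condition types with a single model, and the active condition is selected through the pipeline's `control_mode` argument. The following is a rough sketch assuming the InstantX Union checkpoint from the table above; the integer-to-condition mapping is checkpoint-specific, so consult the model card for the correct value.

```py
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Union", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

control_image = load_image("canny_control.png")  # any condition image supported by the checkpoint
image = pipe(
    "A girl in city, 25 years old, cool, futuristic",
    control_image=control_image,
    control_mode=0,  # integer selecting the condition type; check the checkpoint's model card for the mapping
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_union.png")
```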

## FluxControlNetPipeline[[diffusers.FluxControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlNetPipeline</name><anchor>diffusers.FluxControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py#L177</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel, typing.List[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet_flux.FluxControlNetModel], diffusers.models.controlnets.controlnet_flux.FluxMultiControlNetModel]"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Flux pipeline for text-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.FluxControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py#L677</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "true_cfg_scale", "val": ": float = 1.0"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_mode", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "negative_ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **control_mode** (`int` or `List[int]`, *optional*, defaults to None) --
  The control mode when applying ControlNet-Union.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **negative_ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **negative_ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. If not
  provided, embeddings are computed from the `negative_ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.FluxControlNetPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers.utils import load_image
>>> from diffusers import FluxControlNetPipeline
>>> from diffusers import FluxControlNetModel

>>> base_model = "black-forest-labs/FLUX.1-dev"
>>> controlnet_model = "InstantX/FLUX.1-dev-controlnet-canny"
>>> controlnet = FluxControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
>>> pipe = FluxControlNetPipeline.from_pretrained(
...     base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")
>>> control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Canny/resolve/main/canny.jpg")
>>> prompt = "A girl in city, 25 years old, cool, futuristic"
>>> image = pipe(
...     prompt,
...     control_image=control_image,
...     control_guidance_start=0.2,
...     control_guidance_end=0.8,
...     controlnet_conditioning_scale=1.0,
...     num_inference_steps=28,
...     guidance_scale=3.5,
... ).images[0]
>>> image.save("flux.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.FluxControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py#L341</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

## FluxPipelineOutput[[diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput</name><anchor>diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_output.py#L12</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `torch.Tensor` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array or torch tensor of shape `(batch_size,
  height, width, num_channels)`. PIL images or NumPy arrays represent the denoised images of the diffusion
  pipeline. Torch tensors can represent either the denoised images or the intermediate latents ready to be
  passed to the decoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Flux image generation pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_flux.md" />

### Semantic Guidance
https://huggingface.co/docs/diffusers/main/api/pipelines/semantic_stable_diffusion.md

# Semantic Guidance

Semantic Guidance for Diffusion Models was proposed in [SEGA: Instructing Text-to-Image Models using Semantic Guidance](https://huggingface.co/papers/2301.12247) and provides strong semantic control over image generation.
Small changes to the text prompt usually result in entirely different output images. With SEGA, however, a variety of changes to the image can be made and controlled easily and intuitively, while staying true to the original image composition.
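
As a minimal sketch of this idea (a fuller example with several concepts appears in the `__call__` reference further down), a single concept can be steered through `editing_prompt` while the base prompt stays fixed, with `reverse_editing_direction` controlling whether the concept is added or removed.

```py
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The base prompt stays fixed; a single concept is steered semantically
out = pipe(
    prompt="a photo of the face of a woman",
    editing_prompt=["glasses, wearing glasses"],
    reverse_editing_direction=[False],  # False adds the concept, True removes it
    edit_guidance_scale=[5],
    edit_warmup_steps=[10],
    edit_threshold=[0.95],
)
out.images[0].save("sega_glasses.png")
```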

The abstract from the paper is:

*Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## SemanticStableDiffusionPipeline[[diffusers.SemanticStableDiffusionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SemanticStableDiffusionPipeline</name><anchor>diffusers.SemanticStableDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py#L28</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SemanticStableDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py#L223</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "editing_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "editing_prompt_embeddings", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "reverse_editing_direction", "val": ": typing.Union[bool, typing.List[bool], NoneType] = False"}, {"name": "edit_guidance_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = 5"}, {"name": "edit_warmup_steps", "val": ": typing.Union[int, typing.List[int], NoneType] = 10"}, {"name": "edit_cooldown_steps", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "edit_threshold", "val": ": typing.Union[float, typing.List[float], NoneType] = 0.9"}, {"name": "edit_momentum_scale", "val": ": typing.Optional[float] = 0.1"}, {"name": "edit_mom_beta", "val": ": typing.Optional[float] = 0.4"}, {"name": "edit_weights", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "sem_guidance", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide image generation.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **editing_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting
  `editing_prompt = None`. Guidance direction of prompt should be specified via
  `reverse_editing_direction`.
- **editing_prompt_embeddings** (`torch.Tensor`, *optional*) --
  Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be
  specified via `reverse_editing_direction`.
- **reverse_editing_direction** (`bool` or `List[bool]`, *optional*, defaults to `False`) --
  Whether the corresponding prompt in `editing_prompt` should be increased or decreased.
- **edit_guidance_scale** (`float` or `List[float]`, *optional*, defaults to 5) --
  Guidance scale for semantic guidance. If provided as a list, values should correspond to
  `editing_prompt`.
- **edit_warmup_steps** (`float` or `List[float]`, *optional*, defaults to 10) --
  Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is
  calculated for those steps and applied once all warmup periods are over.
- **edit_cooldown_steps** (`float` or `List[float]`, *optional*, defaults to `None`) --
  Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied.
- **edit_threshold** (`float` or `List[float]`, *optional*, defaults to 0.9) --
  Threshold of semantic guidance.
- **edit_momentum_scale** (`float`, *optional*, defaults to 0.1) --
  Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0,
  momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than
  `edit_warmup_steps`). Momentum is only added to the latent guidance once all warmup periods are finished.
- **edit_mom_beta** (`float`, *optional*, defaults to 0.4) --
  Defines how semantic guidance momentum builds up. `edit_mom_beta` indicates how much of the previous
  momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than
  `edit_warmup_steps`).
- **edit_weights** (`List[float]`, *optional*, defaults to `None`) --
  Indicates how much each individual concept should influence the overall guidance. If no weights are
  provided all concepts are applied equally.
- **sem_guidance** (`List[torch.Tensor]`, *optional*) --
  List of pre-generated guidance vectors to be applied at generation. Length of the list has to
  correspond to `num_inference_steps`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`,
`~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput` is returned, otherwise a
`tuple` is returned where the first element is a list with the generated images and the second element
is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work"
(nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SemanticStableDiffusionPipeline.__call__.example">

Examples:

```py
>>> import torch
>>> from diffusers import SemanticStableDiffusionPipeline

>>> pipe = SemanticStableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> out = pipe(
...     prompt="a photo of the face of a woman",
...     num_images_per_prompt=1,
...     guidance_scale=7,
...     editing_prompt=[
...         "smiling, smile",  # Concepts to apply
...         "glasses, wearing glasses",
...         "curls, wavy hair, curly hair",
...         "beard, full beard, mustache",
...     ],
...     reverse_editing_direction=[
...         False,
...         False,
...         False,
...         False,
...     ],  # Direction of guidance i.e. increase all concepts
...     edit_warmup_steps=[10, 10, 10, 10],  # Warmup period for each concept
...     edit_guidance_scale=[4, 5, 5, 5.4],  # Guidance scale for each concept
...     edit_threshold=[
...         0.99,
...         0.975,
...         0.925,
...         0.96,
...     ],  # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
...     edit_momentum_scale=0.3,  # Momentum scale that will be added to the latent guidance
...     edit_mom_beta=0.6,  # Momentum beta
...     edit_weights=[1, 1, 1, 1, 1],  # Weights of the individual concepts against each other
... )
>>> image = out.images[0]
```

</ExampleCodeBlock>






</div></div>

## SemanticStableDiffusionPipelineOutput[[diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/semantic_stable_diffusion.md" />

### Kandinsky 3
https://huggingface.co/docs/diffusers/main/api/pipelines/kandinsky3.md

# Kandinsky 3

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

Kandinsky 3 was created by [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Anastasia Maltseva](https://github.com/NastyaMittseva), [Igor Pavlov](https://github.com/boomb0om), [Andrei Filatov](https://github.com/anvilarth), [Arseniy Shakhmatov](https://github.com/cene555), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), [Denis Dimitrov](https://github.com/denndimitrov), and [Zein Shaheen](https://github.com/zeinsh).

The description from its GitHub page:

*Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.*

Its architecture includes three main components:
1. [FLAN-UL2](https://huggingface.co/google/flan-ul2), an encoder-decoder model based on the T5 architecture.
2. A new U-Net architecture featuring BigGAN-deep blocks, which doubles the depth while maintaining the same number of parameters.
3. Sber-MoVQGAN, a decoder proven to deliver superior results in image restoration.



The original codebase can be found at [ai-forever/Kandinsky-3](https://github.com/ai-forever/Kandinsky-3).
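
As an illustrative sketch (not an official snippet), you can load the pipeline and check which classes back the three components described above, using the `kandinsky-community/kandinsky-3` checkpoint from the examples below.

```py
import torch
from diffusers import Kandinsky3Pipeline

pipe = Kandinsky3Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)

# The three main components described above
print(type(pipe.text_encoder).__name__)  # T5EncoderModel (FLAN-UL2 weights)
print(type(pipe.unet).__name__)          # Kandinsky3UNet
print(type(pipe.movq).__name__)          # VQModel (Sber-MoVQGAN decoder)
```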

> [!TIP]
> Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.

> [!TIP]
> Make sure to check out the schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Kandinsky3Pipeline[[diffusers.Kandinsky3Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Kandinsky3Pipeline</name><anchor>diffusers.Kandinsky3Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky3/pipeline_kandinsky3.py#L59</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": Kandinsky3UNet"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.Kandinsky3Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky3/pipeline_kandinsky3.py#L334</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "guidance_scale", "val": ": float = 3.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": typing.Optional[int] = 1024"}, {"name": "width", "val": ": typing.Optional[int] = 1024"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "latents", "val": " = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
  instead.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 3.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask. Must provide if passing `prompt_embeds` directly.
- **negative_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated negative attention mask. Must provide if passing `negative_prompt_embeds` directly.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.Kandinsky3Pipeline.__call__.example">

Examples:
```py
>>> from diffusers import AutoPipelineForText2Image
>>> import torch

>>> pipe = AutoPipelineForText2Image.from_pretrained(
...     "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background."

>>> generator = torch.Generator(device="cpu").manual_seed(0)
>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

</ExampleCodeBlock>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.Kandinsky3Pipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky3/pipeline_kandinsky3.py#L91</source><parameters>[{"name": "prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": " = True"}, {"name": "num_images_per_prompt", "val": " = 1"}, {"name": "device", "val": " = None"}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "_cut_context", "val": " = False"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, `negative_prompt_embeds` must
  be passed instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask. Must provide if passing `prompt_embeds` directly.
- **negative_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated negative attention mask. Must provide if passing `negative_prompt_embeds` directly.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## Kandinsky3Img2ImgPipeline[[diffusers.Kandinsky3Img2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Kandinsky3Img2ImgPipeline</name><anchor>diffusers.Kandinsky3Img2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky3/pipeline_kandinsky3_img2img.py#L56</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "unet", "val": ": Kandinsky3UNet"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "movq", "val": ": VQModel"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.Kandinsky3Img2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky3/pipeline_kandinsky3_img2img.py#L400</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[torch.Tensor], typing.List[PIL.Image.Image]] = None"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_inference_steps", "val": ": int = 25"}, {"name": "guidance_scale", "val": ": float = 3.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, `prompt_embeds` must be passed
  instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, or tensor representing an image batch, that will be used as the starting point for the
  process.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 25) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 3.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` in equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask. Must provide if passing `prompt_embeds` directly.
- **negative_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated negative attention mask. Must provide if passing `negative_prompt_embeds` directly.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.Kandinsky3Img2ImgPipeline.__call__.example">

Examples:
```py
>>> from diffusers import AutoPipelineForImage2Image
>>> from diffusers.utils import load_image
>>> import torch

>>> pipe = AutoPipelineForImage2Image.from_pretrained(
...     "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A painting of the inside of a subway train with tiny raccoons."
>>> image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png"
... )

>>> generator = torch.Generator(device="cpu").manual_seed(0)
>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.Kandinsky3Img2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/kandinsky3/pipeline_kandinsky3_img2img.py#L106</source><parameters>[{"name": "prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": " = True"}, {"name": "num_images_per_prompt", "val": " = 1"}, {"name": "device", "val": " = None"}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "_cut_context", "val": " = False"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.



- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, `negative_prompt_embeds` must
  be passed instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask. Must provide if passing `prompt_embeds` directly.
- **negative_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated negative attention mask. Must provide if passing `negative_prompt_embeds` directly.


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/kandinsky3.md" />

### Ltx Video
https://huggingface.co/docs/diffusers/main/api/pipelines/ltx_video.md


<div style="float: right;">
  <div class="flex flex-wrap space-x-1">
    <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference" target="_blank" rel="noopener">
      <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
    </a>
    <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
  </div>
</div>

# LTX-Video

[LTX-Video](https://huggingface.co/Lightricks/LTX-Video) is a diffusion transformer designed for fast, real-time generation of high-resolution videos from text and images. Its main feature is the Video-VAE, which has a higher pixel-to-latent compression ratio (1:192), enabling more efficient video data processing and faster generation. To prevent finer details from being lost during generation, the Video-VAE decoder performs the latent-to-pixel conversion *and* the last denoising step.

You can find all the original LTX-Video checkpoints under the [Lightricks](https://huggingface.co/Lightricks) organization.

> [!TIP]
> Click on the LTX-Video models in the right sidebar for more examples of other video generation tasks.

The example below demonstrates how to generate a video optimized for memory or inference speed.

<hfoptions id="usage">
<hfoption id="memory">

Refer to the [Reduce memory usage](../../optimization/memory) guide for more details about the various memory saving techniques.

The LTX-Video model below requires ~10GB of VRAM.

```py
import torch
from diffusers import LTXPipeline, AutoModel
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

# fp8 layerwise weight-casting
transformer = AutoModel.from_pretrained(
    "Lightricks/LTX-Video",
    subfolder="transformer",
    torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)

pipeline = LTXPipeline.from_pretrained("Lightricks/LTX-Video", transformer=transformer, torch_dtype=torch.bfloat16)

# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2)
apply_group_offloading(pipeline.vae, onload_device=onload_device, offload_type="leaf_level")

prompt = """
A woman with long brown hair and light skin smiles at another woman with long blonde hair.
The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek.
The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and 
natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage
"""
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=768,
    height=512,
    num_frames=161,
    decode_timestep=0.03,
    decode_noise_scale=0.025,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```

</hfoption>
<hfoption id="inference speed">

[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster. [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.

```py
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipeline = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)

# torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)

prompt = """
A woman with long brown hair and light skin smiles at another woman with long blonde hair.
The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek.
The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and 
natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage
"""
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=768,
    height=512,
    num_frames=161,
    decode_timestep=0.03,
    decode_noise_scale=0.025,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```

</hfoption>
</hfoptions>

## Notes

- Refer to the following recommended settings for generation from the [LTX-Video](https://github.com/Lightricks/LTX-Video) repository.

  - The recommended dtype for the transformer, VAE, and text encoder is `torch.bfloat16`. The VAE and text encoder can also be `torch.float32` or `torch.float16`.
  - For guidance-distilled variants of LTX-Video, set `guidance_scale` to `1.0`. The `guidance_scale` for any other model should be set higher, like `5.0`, for good generation quality.
  - For timestep-aware VAE variants (LTX-Video 0.9.1 and above), set `decode_timestep` to `0.05` and `image_cond_noise_scale` to `0.025`.
  - For variants that support interpolation between multiple conditioning images and videos (LTX-Video 0.9.5 and above), use similar images and videos for the best results. Divergence from the conditioning inputs may lead to abrupt transitions in the generated video. A short sketch applying these recommended settings is shown below.
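
  A minimal sketch applying the recommended settings above to the base checkpoint (the dtype, `guidance_scale`, `decode_timestep`, and `decode_noise_scale` values come from this list and are starting points rather than hard requirements):

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers import LTXPipeline
  from diffusers.utils import export_to_video

  # torch.bfloat16 is the recommended dtype for the transformer, VAE, and text encoder
  pipeline = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
  pipeline.to("cuda")

  prompt = "A serene lake surrounded by snow-covered mountains at sunrise"
  negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

  video = pipeline(
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=768,
      height=512,
      num_frames=161,
      guidance_scale=5.0,        # non-distilled checkpoints; use 1.0 for guidance-distilled variants
      decode_timestep=0.05,      # timestep-aware VAE variants (0.9.1 and above)
      decode_noise_scale=0.025,
      num_inference_steps=50,
  ).frames[0]
  export_to_video(video, "output.mp4", fps=24)
  ```

  </details>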

- LTX-Video 0.9.7 includes a spatial latent upscaler and a 13B parameter transformer. During inference, a low resolution video is quickly generated first and then upscaled and refined.

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
  from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
  from diffusers.utils import export_to_video, load_video

  pipeline = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-dev", torch_dtype=torch.bfloat16)
  pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipeline.vae, torch_dtype=torch.bfloat16)
  pipeline.to("cuda")
  pipe_upsample.to("cuda")
  pipeline.vae.enable_tiling()

  def round_to_nearest_resolution_acceptable_by_vae(height, width):
      # round spatial dimensions down to a multiple of the VAE's spatial compression ratio
      height = height - (height % pipeline.vae_spatial_compression_ratio)
      width = width - (width % pipeline.vae_spatial_compression_ratio)
      return height, width

  video = load_video(
      "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
  )[:21]  # only use the first 21 frames as conditioning
  condition1 = LTXVideoCondition(video=video, frame_index=0)

  prompt = """
  The video depicts a winding mountain road covered in snow, with a single vehicle 
  traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. 
  The landscape is characterized by rugged terrain and a river visible in the distance. 
  The scene captures the solitude and beauty of a winter drive through a mountainous region.
  """
  negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
  expected_height, expected_width = 768, 1152
  downscale_factor = 2 / 3
  num_frames = 161

  # 1. Generate video at smaller resolution
  # Text-only conditioning is also supported without the need to pass `conditions`
  downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
  downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
  latents = pipeline(
      conditions=[condition1],
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=downscaled_width,
      height=downscaled_height,
      num_frames=num_frames,
      num_inference_steps=30,
      decode_timestep=0.05,
      decode_noise_scale=0.025,
      image_cond_noise_scale=0.0,
      guidance_scale=5.0,
      guidance_rescale=0.7,
      generator=torch.Generator().manual_seed(0),
      output_type="latent",
  ).frames

  # 2. Upscale generated video using latent upsampler with fewer inference steps
  # The available latent upsampler upscales the height/width by 2x
  upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
  upscaled_latents = pipe_upsample(
      latents=latents,
      output_type="latent"
  ).frames

  # 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
  video = pipeline(
      conditions=[condition1],
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=upscaled_width,
      height=upscaled_height,
      num_frames=num_frames,
      denoise_strength=0.4,  # Effectively, 4 inference steps out of 10
      num_inference_steps=10,
      latents=upscaled_latents,
      decode_timestep=0.05,
      decode_noise_scale=0.025,
      image_cond_noise_scale=0.0,
      guidance_scale=5.0,
      guidance_rescale=0.7,
      generator=torch.Generator().manual_seed(0),
      output_type="pil",
  ).frames[0]

  # 4. Downscale the video to the expected resolution
  video = [frame.resize((expected_width, expected_height)) for frame in video]

  export_to_video(video, "output.mp4", fps=24)
  ```

  </details>

- The LTX-Video 0.9.7 distilled model is guidance- and timestep-distilled to speed up generation. It requires `guidance_scale` to be set to `1.0` and `num_inference_steps` between `4` and `10` for good generation quality. You should also use the following custom timesteps for the best results.

  - Base model inference to prepare for upscaling: `[1000, 993, 987, 981, 975, 909, 725, 0.03]`.
  - Upscaling: `[1000, 909, 725, 421, 0]`.

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
  from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
  from diffusers.utils import export_to_video, load_video

  pipeline = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.7-distilled", torch_dtype=torch.bfloat16)
  pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained("Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipeline.vae, torch_dtype=torch.bfloat16)
  pipeline.to("cuda")
  pipe_upsample.to("cuda")
  pipeline.vae.enable_tiling()

  def round_to_nearest_resolution_acceptable_by_vae(height, width):
      height = height - (height % pipeline.vae_spatial_compression_ratio)
      width = width - (width % pipeline.vae_spatial_compression_ratio)
      return height, width

  prompt = """
  artistic anatomical 3d render, utlra quality, human half full male body with transparent 
  skin revealing structure instead of organs, muscular, intricate creative patterns, 
  monochromatic with backlighting, lightning mesh, scientific concept art, blending biology 
  with botany, surreal and ethereal quality, unreal engine 5, ray tracing, ultra realistic, 
  16K UHD, rich details. camera zooms out in a rotating fashion
  """
  negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
  expected_height, expected_width = 768, 1152
  downscale_factor = 2 / 3
  num_frames = 161

  # 1. Generate video at smaller resolution
  downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
  downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
  latents = pipeline(
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=downscaled_width,
      height=downscaled_height,
      num_frames=num_frames,
      timesteps=[1000, 993, 987, 981, 975, 909, 725, 0.03],
      decode_timestep=0.05,
      decode_noise_scale=0.025,
      image_cond_noise_scale=0.0,
      guidance_scale=1.0,
      guidance_rescale=0.7,
      generator=torch.Generator().manual_seed(0),
      output_type="latent",
  ).frames

  # 2. Upscale generated video using latent upsampler with fewer inference steps
  # The available latent upsampler upscales the height/width by 2x
  upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
  upscaled_latents = pipe_upsample(
      latents=latents,
      adain_factor=1.0,
      output_type="latent"
  ).frames

  # 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
  video = pipeline(
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=upscaled_width,
      height=upscaled_height,
      num_frames=num_frames,
      denoise_strength=0.999,  # Effectively, 4 inference steps out of 5
      timesteps=[1000, 909, 725, 421, 0],
      latents=upscaled_latents,
      decode_timestep=0.05,
      decode_noise_scale=0.025,
      image_cond_noise_scale=0.0,
      guidance_scale=1.0,
      guidance_rescale=0.7,
      generator=torch.Generator().manual_seed(0),
      output_type="pil",
  ).frames[0]

  # 4. Downscale the video to the expected resolution
  video = [frame.resize((expected_width, expected_height)) for frame in video]

  export_to_video(video, "output.mp4", fps=24)
  ```

  </details>

- The LTX-Video 0.9.8 distilled model is similar to the 0.9.7 variant: it is guidance- and timestep-distilled, and the same style of inference code can be used. This version adds support for generating very long videos, and for tone mapping via the `tone_map_compression_ratio` parameter to improve the quality of the generated video; the default value of `0.6` is recommended.

  <details>
  <summary>Show example code</summary>
  
  ```python
  import torch
  from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
  from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
  from diffusers.pipelines.ltx.modeling_latent_upsampler import LTXLatentUpsamplerModel
  from diffusers.utils import export_to_video, load_video

  pipeline = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.8-13B-distilled", torch_dtype=torch.bfloat16)
  # TODO: Update the checkpoint here once updated in LTX org
  upsampler = LTXLatentUpsamplerModel.from_pretrained("a-r-r-o-w/LTX-0.9.8-Latent-Upsampler", torch_dtype=torch.bfloat16)
  pipe_upsample = LTXLatentUpsamplePipeline(vae=pipeline.vae, latent_upsampler=upsampler).to(torch.bfloat16)
  pipeline.to("cuda")
  pipe_upsample.to("cuda")
  pipeline.vae.enable_tiling()

  def round_to_nearest_resolution_acceptable_by_vae(height, width):
      height = height - (height % pipeline.vae_spatial_compression_ratio)
      width = width - (width % pipeline.vae_spatial_compression_ratio)
      return height, width

  prompt = """The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys.The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature."""
  # prompt = """A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage."""
  negative_prompt = "bright colors, symbols, graffiti, watermarks, worst quality, inconsistent motion, blurry, jittery, distorted"
  expected_height, expected_width = 480, 832
  downscale_factor = 2 / 3
  # num_frames = 161
  num_frames = 361

  # 1. Generate video at smaller resolution
  downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
  downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
  latents = pipeline(
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=downscaled_width,
      height=downscaled_height,
      num_frames=num_frames,
      timesteps=[1000, 993, 987, 981, 975, 909, 725, 0.03],
      decode_timestep=0.05,
      decode_noise_scale=0.025,
      image_cond_noise_scale=0.0,
      guidance_scale=1.0,
      guidance_rescale=0.7,
      generator=torch.Generator().manual_seed(0),
      output_type="latent",
  ).frames

  # 2. Upscale generated video using latent upsampler with fewer inference steps
  # The available latent upsampler upscales the height/width by 2x
  upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
  upscaled_latents = pipe_upsample(
      latents=latents,
      adain_factor=1.0,
      tone_map_compression_ratio=0.6,
      output_type="latent"
  ).frames

  # 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
  video = pipeline(
      prompt=prompt,
      negative_prompt=negative_prompt,
      width=upscaled_width,
      height=upscaled_height,
      num_frames=num_frames,
      denoise_strength=0.999,  # Effectively, 4 inference steps out of 5
      timesteps=[1000, 909, 725, 421, 0],
      latents=upscaled_latents,
      decode_timestep=0.05,
      decode_noise_scale=0.025,
      image_cond_noise_scale=0.0,
      guidance_scale=1.0,
      guidance_rescale=0.7,
      generator=torch.Generator().manual_seed(0),
      output_type="pil",
  ).frames[0]

  # 4. Downscale the video to the expected resolution
  video = [frame.resize((expected_width, expected_height)) for frame in video]

  export_to_video(video, "output.mp4", fps=24)
  ```

  </details>

- LTX-Video supports LoRAs with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.LTXVideoLoraLoaderMixin.load_lora_weights).

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers import LTXConditionPipeline
  from diffusers.utils import export_to_video, load_image

  pipeline = LTXConditionPipeline.from_pretrained(
      "Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16
  )

  pipeline.load_lora_weights("Lightricks/LTX-Video-Cakeify-LoRA", adapter_name="cakeify")
  pipeline.set_adapters("cakeify")

  # use "CAKEIFY" to trigger the LoRA
  prompt = "CAKEIFY a person using a knife to cut a cake shaped like a Pikachu plushie"
  image = load_image("https://huggingface.co/Lightricks/LTX-Video-Cakeify-LoRA/resolve/main/assets/images/pikachu.png")

  video = pipeline(
      prompt=prompt,
      image=image,
      width=576,
      height=576,
      num_frames=161,
      decode_timestep=0.03,
      decode_noise_scale=0.025,
      num_inference_steps=50,
  ).frames[0]
  export_to_video(video, "output.mp4", fps=26)
  ```

  </details>

- LTX-Video supports loading from single files, such as [GGUF checkpoints](../../quantization/gguf), with [loaders.FromOriginalModelMixin.from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromOriginalModelMixin.from_single_file) or [loaders.FromSingleFileMixin.from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file).

  <details>
  <summary>Show example code</summary>

  ```py
  import torch
  from diffusers.utils import export_to_video
  from diffusers import LTXPipeline, AutoModel, GGUFQuantizationConfig

  transformer = AutoModel.from_single_file(
      "https://huggingface.co/city96/LTX-Video-gguf/blob/main/ltx-video-2b-v0.9-Q3_K_S.gguf"
      quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
      torch_dtype=torch.bfloat16
  )
  pipeline = LTXPipeline.from_pretrained(
      "Lightricks/LTX-Video",
      transformer=transformer,
      torch_dtype=torch.bfloat16
  )
  ```

  </details>

## LTXPipeline[[diffusers.LTXPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LTXPipeline</name><anchor>diffusers.LTXPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx.py#L170</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLLTXVideo"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": LTXVideoTransformer3DModel"}]</parameters><paramsdesc>- **transformer** ([LTXVideoTransformer3DModel](/docs/diffusers/main/en/api/models/ltx_video_transformer3d#diffusers.LTXVideoTransformer3DModel)) --
  Conditional Transformer architecture to denoise the encoded video latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLLTXVideo](/docs/diffusers/main/en/api/models/autoencoderkl_ltx_video#diffusers.AutoencoderKLLTXVideo)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation.

Reference: https://github.com/Lightricks/LTX-Video





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LTXPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx.py#L535</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 704"}, {"name": "num_frames", "val": ": int = 161"}, {"name": "frame_rate", "val": ": int = 25"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 3"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decode_timestep", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "decode_noise_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 128"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
  instead.
- **height** (`int`, defaults to `512`) --
  The height in pixels of the generated video.
- **width** (`int`, defaults to `704`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, defaults to `161`) --
  The number of video frames to generate
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, defaults to `3`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://arxiv.org/pdf/2305.08891.pdf) `guidance_scale` is defined as `φ` in equation 16. of
  [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
  Guidance rescale factor should fix overexposure when using zero terminal SNR.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.FloatTensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **decode_timestep** (`float`, defaults to `0.0`) --
  The timestep at which generated video is decoded.
- **decode_noise_scale** (`float`, defaults to `None`) --
  The interpolation factor between random noise and denoised latents at the decode timestep.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.ltx.LTXPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `128`) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.ltx.LTXPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `~pipelines.ltx.LTXPipelineOutput` is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LTXPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import LTXPipeline
>>> from diffusers.utils import export_to_video

>>> pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage"
>>> negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

>>> video = pipe(
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     width=704,
...     height=480,
...     num_frames=161,
...     num_inference_steps=50,
... ).frames[0]
>>> export_to_video(video, "output.mp4", fps=24)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LTXPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx.py#L283</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 128"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
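
A minimal sketch of pre-computing embeddings with `encode_prompt` and reusing them when calling the pipeline. It assumes the method returns the positive and negative embeddings together with their attention masks, in that order; check the linked source if in doubt.

```py
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A woman with long brown hair smiles at another woman with long blonde hair"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

# Assumed return order: positive embeddings and mask, then negative embeddings and mask
prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = pipe.encode_prompt(
    prompt=prompt,
    negative_prompt=negative_prompt,
    do_classifier_free_guidance=True,
    num_videos_per_prompt=1,
    max_sequence_length=128,
    device=pipe.device,
)

# Pass the precomputed embeddings instead of the raw prompt strings
video = pipe(
    prompt_embeds=prompt_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_embeds=negative_prompt_embeds,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```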




</div></div>

## LTXImageToVideoPipeline[[diffusers.LTXImageToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LTXImageToVideoPipeline</name><anchor>diffusers.LTXImageToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_image2video.py#L189</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLLTXVideo"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": LTXVideoTransformer3DModel"}]</parameters><paramsdesc>- **transformer** ([LTXVideoTransformer3DModel](/docs/diffusers/main/en/api/models/ltx_video_transformer3d#diffusers.LTXVideoTransformer3DModel)) --
  Conditional Transformer architecture to denoise the encoded video latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLLTXVideo](/docs/diffusers/main/en/api/models/autoencoderkl_ltx_video#diffusers.AutoencoderKLLTXVideo)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-video generation.

Reference: https://github.com/Lightricks/LTX-Video





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LTXImageToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_image2video.py#L596</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 704"}, {"name": "num_frames", "val": ": int = 161"}, {"name": "frame_rate", "val": ": int = 25"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 3"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decode_timestep", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "decode_noise_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 128"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to condition the generation on. Must be an image, a list of images or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
  instead.
- **height** (`int`, defaults to `512`) --
  The height in pixels of the generated video.
- **width** (`int`, defaults to `704`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, defaults to `161`) --
  The number of video frames to generate
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, defaults to `3`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://arxiv.org/pdf/2305.08891.pdf) `guidance_scale` is defined as `φ` in equation 16. of
  [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
  Guidance rescale factor should fix overexposure when using zero terminal SNR.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.FloatTensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **decode_timestep** (`float`, defaults to `0.0`) --
  The timestep at which generated video is decoded.
- **decode_noise_scale** (`float`, defaults to `None`) --
  The interpolation factor between random noise and denoised latents at the decode timestep.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.ltx.LTXPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `128`) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.ltx.LTXPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `~pipelines.ltx.LTXPipelineOutput` is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LTXImageToVideoPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import LTXImageToVideoPipeline
>>> from diffusers.utils import export_to_video, load_image

>>> pipe = LTXImageToVideoPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> image = load_image(
...     "https://huggingface.co/datasets/a-r-r-o-w/tiny-meme-dataset-captioned/resolve/main/images/8.png"
... )
>>> prompt = "A young girl stands calmly in the foreground, looking directly at the camera, as a house fire rages in the background. Flames engulf the structure, with smoke billowing into the air. Firefighters in protective gear rush to the scene, a fire truck labeled '38' visible behind them. The girl's neutral expression contrasts sharply with the chaos of the fire, creating a poignant and emotionally charged scene."
>>> negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

>>> video = pipe(
...     image=image,
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     width=704,
...     height=480,
...     num_frames=161,
...     num_inference_steps=50,
... ).frames[0]
>>> export_to_video(video, "output.mp4", fps=24)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LTXImageToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_image2video.py#L306</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 128"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## LTXConditionPipeline[[diffusers.LTXConditionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LTXConditionPipeline</name><anchor>diffusers.LTXConditionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_condition.py#L252</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLLTXVideo"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": LTXVideoTransformer3DModel"}]</parameters><paramsdesc>- **transformer** ([LTXVideoTransformer3DModel](/docs/diffusers/main/en/api/models/ltx_video_transformer3d#diffusers.LTXVideoTransformer3DModel)) --
  Conditional Transformer architecture to denoise the encoded video latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLLTXVideo](/docs/diffusers/main/en/api/models/autoencoderkl_ltx_video#diffusers.AutoencoderKLLTXVideo)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text/image/video-to-video generation.

Reference: https://github.com/Lightricks/LTX-Video
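
As a quick orientation before the full parameter reference below, here is a minimal sketch of image-conditioned generation using the documented `image` argument (the checkpoint and parameter values follow the examples in the Notes section above and are illustrative, not the only valid choices):

```py
import torch
from diffusers import LTXConditionPipeline
from diffusers.utils import export_to_video, load_image

pipeline = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16)
pipeline.to("cuda")

# Condition the generation on a single image (frame 0 by default) via the `image` argument
image = load_image("https://huggingface.co/datasets/a-r-r-o-w/tiny-meme-dataset-captioned/resolve/main/images/8.png")
prompt = "A young girl stands calmly in the foreground while a house fire rages in the background"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipeline(
    image=image,
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
    decode_timestep=0.05,
    decode_noise_scale=0.025,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```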





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LTXConditionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_condition.py#L848</source><parameters>[{"name": "conditions", "val": ": typing.Union[diffusers.pipelines.ltx.pipeline_ltx_condition.LTXVideoCondition, typing.List[diffusers.pipelines.ltx.pipeline_ltx_condition.LTXVideoCondition]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "video", "val": ": typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]] = None"}, {"name": "frame_index", "val": ": typing.Union[int, typing.List[int]] = 0"}, {"name": "strength", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "denoise_strength", "val": ": float = 1.0"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 704"}, {"name": "num_frames", "val": ": int = 161"}, {"name": "frame_rate", "val": ": int = 25"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 3"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "image_cond_noise_scale", "val": ": float = 0.15"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decode_timestep", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "decode_noise_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **conditions** (`List[LTXVideoCondition], *optional*`) --
  The list of frame-conditioning items for the video generation. If not provided, conditions will be
  created using `image`, `video`, `frame_index` and `strength`.
- **image** (`PipelineImageInput` or `List[PipelineImageInput]`, *optional*) --
  The image or images to condition the video generation. If not provided, one has to pass `video` or
  `conditions`.
- **video** (`List[PipelineImageInput]`, *optional*) --
  The video to condition the video generation. If not provided, one has to pass `image` or `conditions`.
- **frame_index** (`int` or `List[int]`, *optional*) --
  The frame index or frame indices at which the image or video will conditionally affect the video
  generation. If not provided, one has to pass `conditions`.
- **strength** (`float` or `List[float]`, *optional*) --
  The strength or strengths of the conditioning effect. If not provided, one has to pass `conditions`.
- **denoise_strength** (`float`, defaults to `1.0`) --
  The strength of the noise added to the latents for editing. Higher values add more noise, leading to
  larger differences between the original and generated videos. This is useful for video-to-video
  editing.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
  instead.
- **height** (`int`, defaults to `512`) --
  The height in pixels of the generated video.
- **width** (`int`, defaults to `704`) --
  The width in pixels of the generated video.
- **num_frames** (`int`, defaults to `161`) --
  The number of video frames to generate
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, defaults to `3`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16. of
  [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
  Guidance rescale factor should fix overexposure when using zero terminal SNR.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.FloatTensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **decode_timestep** (`float`, defaults to `0.0`) --
  The timestep at which generated video is decoded.
- **decode_noise_scale** (`float`, defaults to `None`) --
  The interpolation factor between random noise and denoised latents at the decode timestep.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.ltx.LTXPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `256`) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.ltx.LTXPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `~pipelines.ltx.LTXPipelineOutput` is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.LTXConditionPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXConditionPipeline, LTXVideoCondition
>>> from diffusers.utils import export_to_video, load_video, load_image

>>> pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> # Load input image and video
>>> video = load_video(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
... )
>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input.jpg"
... )

>>> # Create conditioning objects
>>> condition1 = LTXVideoCondition(
...     image=image,
...     frame_index=0,
... )
>>> condition2 = LTXVideoCondition(
...     video=video,
...     frame_index=80,
... )

>>> prompt = "The video depicts a long, straight highway stretching into the distance, flanked by metal guardrails. The road is divided into multiple lanes, with a few vehicles visible in the far distance. The surrounding landscape features dry, grassy fields on one side and rolling hills on the other. The sky is mostly clear with a few scattered clouds, suggesting a bright, sunny day. And then the camera switch to a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
>>> negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

>>> # Generate video
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> # Text-only conditioning is also supported without the need to pass `conditions`
>>> video = pipe(
...     conditions=[condition1, condition2],
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     width=768,
...     height=512,
...     num_frames=161,
...     num_inference_steps=40,
...     generator=generator,
... ).frames[0]

>>> export_to_video(video, "output.mp4", fps=24)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_noise_to_image_conditioning_latents</name><anchor>diffusers.LTXConditionPipeline.add_noise_to_image_conditioning_latents</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_condition.py#L646</source><parameters>[{"name": "t", "val": ": float"}, {"name": "init_latents", "val": ": Tensor"}, {"name": "latents", "val": ": Tensor"}, {"name": "noise_scale", "val": ": float"}, {"name": "conditioning_mask", "val": ": Tensor"}, {"name": "generator", "val": ""}, {"name": "eps", "val": " = 1e-06"}]</parameters></docstring>

Add timestep-dependent noise to the hard-conditioning latents. This helps with motion continuity, especially
when conditioned on a single frame.
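
For intuition, here is a rough, hypothetical sketch of the idea (not necessarily the pipeline's exact implementation): positions marked as hard conditioning in `conditioning_mask` receive fresh noise whose magnitude grows with the timestep `t` and `noise_scale`.

```py
import torch

# Hypothetical sketch of timestep-dependent conditioning noise; names mirror the
# method signature above, but the exact scaling used by the pipeline may differ.
def add_conditioning_noise(t, init_latents, latents, noise_scale, conditioning_mask, generator=None, eps=1e-6):
    noise = torch.randn(
        init_latents.shape, generator=generator, device=init_latents.device, dtype=init_latents.dtype
    )
    # Only fully conditioned positions (mask ~= 1) are re-noised.
    need_to_noise = (conditioning_mask > 1.0 - eps).unsqueeze(-1)
    noised_init = init_latents + noise_scale * noise * (t**2)
    return torch.where(need_to_noise, noised_init, latents)
```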


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.LTXConditionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_condition.py#L369</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** -- (`torch.device`, *optional*):
  torch device
- **dtype** -- (`torch.dtype`, *optional*):
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
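
As a hedged usage sketch (not an official example), the embeddings returned here can be precomputed once and passed directly to `__call__`. It assumes `pipe` is an already loaded `LTXConditionPipeline` on a CUDA device, and that the return order follows the same convention shown for Mochi later in this document.

```py
# Minimal sketch: precompute prompt embeddings and reuse them across calls.
prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = pipe.encode_prompt(
    prompt="A drone shot over a foggy coastline at sunrise",
    negative_prompt="worst quality, inconsistent motion, blurry",
    do_classifier_free_guidance=True,
    device="cuda",
)

video = pipe(
    prompt_embeds=prompt_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_embeds=negative_prompt_embeds,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_frames=161,
).frames[0]
```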




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>trim_conditioning_sequence</name><anchor>diffusers.LTXConditionPipeline.trim_conditioning_sequence</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_condition.py#L629</source><parameters>[{"name": "start_frame", "val": ": int"}, {"name": "sequence_num_frames", "val": ": int"}, {"name": "target_num_frames", "val": ": int"}]</parameters><paramsdesc>- **start_frame** (int) -- The target frame number of the first frame in the sequence.
- **sequence_num_frames** (int) -- The number of frames in the sequence.
- **target_num_frames** (int) -- The target number of frames in the generated video.</paramsdesc><paramgroups>0</paramgroups><rettype>int</rettype><retdesc>updated sequence length</retdesc></docstring>

Trim a conditioning sequence to the allowed number of frames.








</div></div>

## LTXLatentUpsamplePipeline[[diffusers.LTXLatentUpsamplePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LTXLatentUpsamplePipeline</name><anchor>diffusers.LTXLatentUpsamplePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L46</source><parameters>[{"name": "vae", "val": ": AutoencoderKLLTXVideo"}, {"name": "latent_upsampler", "val": ": LTXLatentUpsamplerModel"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.LTXLatentUpsamplePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L243</source><parameters>[{"name": "video", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 704"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "decode_timestep", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "decode_noise_scale", "val": ": typing.Union[float, typing.List[float], NoneType] = None"}, {"name": "adain_factor", "val": ": float = 0.0"}, {"name": "tone_map_compression_ratio", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters></docstring>
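
The upstream docstring provides no parameter descriptions or example for this call; the following is a minimal, hedged sketch of upscaling latents produced by an LTX base pipeline. The upsampler checkpoint id and the pattern of reusing the base pipeline's VAE are assumptions and may differ for your release.

```py
import torch
from diffusers import LTXLatentUpsamplePipeline
from diffusers.utils import export_to_video

# Sketch only: `pipe` is assumed to be an already loaded LTX base pipeline
# (e.g. LTXConditionPipeline) on CUDA; the checkpoint id below is an assumption.
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained(
    "Lightricks/ltxv-spatial-upscaler-0.9.7", vae=pipe.vae, torch_dtype=torch.bfloat16
)
pipe_upsample.to("cuda")

# Generate low-resolution latents with the base pipeline, then upsample and decode.
latents = pipe(prompt="A calm ocean at dusk", width=704, height=448, output_type="latent").frames
upscaled_frames = pipe_upsample(latents=latents, output_type="pil").frames[0]
export_to_video(upscaled_frames, "upscaled.mp4", fps=24)
```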


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>adain_filter_latent</name><anchor>diffusers.LTXLatentUpsamplePipeline.adain_filter_latent</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L96</source><parameters>[{"name": "latents", "val": ": Tensor"}, {"name": "reference_latents", "val": ": Tensor"}, {"name": "factor", "val": ": float = 1.0"}]</parameters><paramsdesc>- **latents** (`torch.Tensor`) --
  Input latents to normalize.
- **reference_latents** (`torch.Tensor`) --
  The reference latents providing style statistics.
- **factor** (`float`) --
  Blending factor between original and transformed latent. Range: -10.0 to 10.0, Default: 1.0</paramsdesc><paramgroups>0</paramgroups><rettype>torch.Tensor</rettype><retdesc>The transformed latent tensor</retdesc></docstring>

Applies Adaptive Instance Normalization (AdaIN) to a latent tensor based on statistics from a reference latent
tensor.
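
For intuition, here is a self-contained sketch of AdaIN over latents (matching per-channel statistics to the reference, then blending by `factor`); it illustrates the technique and is not necessarily the pipeline's exact code.

```py
import torch

def adain_sketch(latents: torch.Tensor, reference_latents: torch.Tensor, factor: float = 1.0) -> torch.Tensor:
    # Reduce over every dimension except batch and channel.
    dims = tuple(range(2, latents.ndim))
    mean = latents.mean(dim=dims, keepdim=True)
    std = latents.std(dim=dims, keepdim=True)
    ref_mean = reference_latents.mean(dim=dims, keepdim=True)
    ref_std = reference_latents.std(dim=dims, keepdim=True)
    normalized = (latents - mean) / (std + 1e-6) * ref_std + ref_mean
    # factor=0 returns the input unchanged; factor=1 fully adopts the reference statistics.
    return latents + factor * (normalized - latents)
```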








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.LTXLatentUpsamplePipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L191</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.LTXLatentUpsamplePipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L218</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.LTXLatentUpsamplePipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L178</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.LTXLatentUpsamplePipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L204</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tone_map_latents</name><anchor>diffusers.LTXLatentUpsamplePipeline.tone_map_latents</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_ltx_latent_upsample.py#L124</source><parameters>[{"name": "latents", "val": ": Tensor"}, {"name": "compression", "val": ": float"}]</parameters><paramsdesc>- **latents**  -- torch.Tensor
  Input latent tensor with arbitrary shape. Expected to be roughly in [-1, 1] or [0, 1] range.
- **compression**  -- float
  Compression strength in the range [0, 1].
  - 0.0: No tone-mapping (identity transform)
  - 1.0: Full compression effect</paramsdesc><paramgroups>0</paramgroups><retdesc>torch.Tensor
The tone-mapped latent tensor of the same shape as input.</retdesc></docstring>

Applies a non-linear tone-mapping function to latent values to reduce their dynamic range in a perceptually
smooth way using a sigmoid-based compression.

This is useful for regularizing high-variance latents or for conditioning outputs during generation, especially
when controlling dynamic behavior with a `compression` factor.
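
As an illustration only (the exact curve used by the pipeline is not reproduced here), a sigmoid-style soft clip blended in by `compression` could look like this:

```py
import torch

def tone_map_sketch(latents: torch.Tensor, compression: float) -> torch.Tensor:
    # Illustrative soft clip: compression=0.0 is the identity, compression=1.0
    # fully applies a tanh-based squashing of extreme latent values.
    if compression <= 0.0:
        return latents
    scale = 1.0 + 4.0 * compression  # steeper curve for stronger compression (arbitrary illustrative choice)
    compressed = torch.tanh(latents * scale) / scale
    return (1.0 - compression) * latents + compression * compressed
```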






</div></div>

## LTXPipelineOutput[[diffusers.pipelines.ltx.pipeline_output.LTXPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.ltx.pipeline_output.LTXPipelineOutput</name><anchor>diffusers.pipelines.ltx.pipeline_output.LTXPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ltx/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for LTX pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/ltx_video.md" />

### Mochi
https://huggingface.co/docs/diffusers/main/api/pipelines/mochi.md


# Mochi 1 Preview

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

> [!TIP]
> Only a research preview of the model weights is available at the moment.

[Mochi 1](https://huggingface.co/genmo/mochi-1-preview) is a video generation model by Genmo with a strong focus on prompt adherence and motion quality. The model features a 10B parameter Asymmetric Diffusion Transformer (AsymmDiT) architecture and uses non-square QKV and output projection layers to reduce inference memory requirements. A single T5-XXL model is used to encode prompts.

*Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems. The model is released under a permissive Apache 2.0 license.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [MochiPipeline](/docs/diffusers/main/en/api/pipelines/mochi#diffusers.MochiPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, MochiTransformer3DModel, MochiPipeline
from diffusers.utils import export_to_video
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "genmo/mochi-1-preview",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = MochiTransformer3DModel.from_pretrained(
    "genmo/mochi-1-preview",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

video = pipeline(
  "Close-up of a cats eye, with the galaxy reflected in the cats eye. Ultra high resolution 4k.",
  num_inference_steps=28,
  guidance_scale=3.5
).frames[0]
export_to_video(video, "cat.mp4")
```

## Generating videos with Mochi-1 Preview

The following example will download the full precision `mochi-1-preview` weights and produce the highest quality results but will require at least 42GB VRAM to run.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")

# Enable memory savings
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."

with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
    frames = pipe(prompt, num_frames=85).frames[0]

export_to_video(frames, "mochi.mp4", fps=30)
```

## Using a lower precision variant to save memory

The following example will use the `bfloat16` variant of the model and requires 22GB VRAM to run. There is a slight drop in the quality of the generated video as a result.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", variant="bf16", torch_dtype=torch.bfloat16)

# Enable memory savings
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
frames = pipe(prompt, num_frames=85).frames[0]

export_to_video(frames, "mochi.mp4", fps=30)
```

## Reproducing the results from the Genmo Mochi repo

The [Genmo Mochi implementation](https://github.com/genmoai/mochi/tree/main) uses different precision values for each stage in the inference process. The text encoder and VAE use `torch.float32`, while the DiT uses `torch.bfloat16` with the [attention kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html#torch.nn.attention.sdpa_kernel) set to `EFFICIENT_ATTENTION`. Diffusers pipelines currently do not support setting different `dtypes` for different stages of the pipeline. In order to run inference in the same way as the original implementation, please refer to the following example.

> [!TIP]
> The original Mochi implementation zeros out empty prompts. However, enabling this option and placing the entire pipeline under autocast can lead to numerical overflows with the T5 text encoder.
>
> When enabling `force_zeros_for_empty_prompt`, it is recommended to run the text encoding step outside the autocast context in full precision.

> [!TIP]
> Decoding the latents in full precision is very memory intensive. You will need at least 70GB VRAM to generate the 163 frames in this example. To reduce memory, either reduce the number of frames or run the decoding step in `torch.bfloat16`.

```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

from diffusers import MochiPipeline
from diffusers.utils import export_to_video
from diffusers.video_processor import VideoProcessor

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", force_zeros_for_empty_prompt=True)
pipe.enable_vae_tiling()
pipe.enable_model_cpu_offload()

prompt =  "An aerial shot of a parade of elephants walking across the African savannah. The camera showcases the herd and the surrounding landscape."

with torch.no_grad():
    prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = (
        pipe.encode_prompt(prompt=prompt)
    )

with torch.autocast("cuda", torch.bfloat16):
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        frames = pipe(
            prompt_embeds=prompt_embeds,
            prompt_attention_mask=prompt_attention_mask,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_attention_mask=negative_prompt_attention_mask,
            guidance_scale=4.5,
            num_inference_steps=64,
            height=480,
            width=848,
            num_frames=163,
            generator=torch.Generator("cuda").manual_seed(0),
            output_type="latent",
            return_dict=False,
        )[0]

video_processor = VideoProcessor(vae_scale_factor=8)
has_latents_mean = hasattr(pipe.vae.config, "latents_mean") and pipe.vae.config.latents_mean is not None
has_latents_std = hasattr(pipe.vae.config, "latents_std") and pipe.vae.config.latents_std is not None
if has_latents_mean and has_latents_std:
    latents_mean = (
        torch.tensor(pipe.vae.config.latents_mean).view(1, 12, 1, 1, 1).to(frames.device, frames.dtype)
    )
    latents_std = (
        torch.tensor(pipe.vae.config.latents_std).view(1, 12, 1, 1, 1).to(frames.device, frames.dtype)
    )
    frames = frames * latents_std / pipe.vae.config.scaling_factor + latents_mean
else:
    frames = frames / pipe.vae.config.scaling_factor

with torch.no_grad():
    video = pipe.vae.decode(frames.to(pipe.vae.dtype), return_dict=False)[0]

video = video_processor.postprocess_video(video)[0]
export_to_video(video, "mochi.mp4", fps=30)
```

## Running inference with multiple GPUs

It is possible to split the large Mochi transformer across multiple GPUs using the `device_map` and `max_memory` options in `from_pretrained`. In the following example we split the model across two GPUs, each with 24GB of VRAM.

```python
import torch
from diffusers import MochiPipeline, MochiTransformer3DModel
from diffusers.utils import export_to_video

model_id = "genmo/mochi-1-preview"
transformer = MochiTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    device_map="auto",
    max_memory={0: "24GB", 1: "24GB"}
)

pipe = MochiPipeline.from_pretrained(model_id,  transformer=transformer)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

with torch.autocast(device_type="cuda", dtype=torch.bfloat16, cache_enabled=False):
    frames = pipe(
        prompt="Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k.",
        negative_prompt="",
        height=480,
        width=848,
        num_frames=85,
        num_inference_steps=50,
        guidance_scale=4.5,
        num_videos_per_prompt=1,
        generator=torch.Generator(device="cuda").manual_seed(0),
        max_sequence_length=256,
        output_type="pil",
    ).frames[0]

export_to_video(frames, "output.mp4", fps=30)
```

## Using single file loading with the Mochi Transformer

You can use `from_single_file` to load the Mochi transformer in its original format.

> [!TIP]
> Diffusers currently doesn't support using the FP8 scaled versions of the Mochi single file checkpoints.

```python
import torch
from diffusers import MochiPipeline, MochiTransformer3DModel
from diffusers.utils import export_to_video

model_id = "genmo/mochi-1-preview"

ckpt_path = "https://huggingface.co/Comfy-Org/mochi_preview_repackaged/blob/main/split_files/diffusion_models/mochi_preview_bf16.safetensors"

transformer = MochiTransformer3DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)

pipe = MochiPipeline.from_pretrained(model_id,  transformer=transformer)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

with torch.autocast(device_type="cuda", dtype=torch.bfloat16, cache_enabled=False):
    frames = pipe(
        prompt="Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k.",
        negative_prompt="",
        height=480,
        width=848,
        num_frames=85,
        num_inference_steps=50,
        guidance_scale=4.5,
        num_videos_per_prompt=1,
        generator=torch.Generator(device="cuda").manual_seed(0),
        max_sequence_length=256,
        output_type="pil",
    ).frames[0]

export_to_video(frames, "output.mp4", fps=30)
```

## MochiPipeline[[diffusers.MochiPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.MochiPipeline</name><anchor>diffusers.MochiPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L138</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKLMochi"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": MochiTransformer3DModel"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = False"}]</parameters><paramsdesc>- **transformer** ([MochiTransformer3DModel](/docs/diffusers/main/en/api/models/mochi_transformer3d#diffusers.MochiTransformer3DModel)) --
  Conditional Transformer architecture to denoise the encoded video latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLMochi](/docs/diffusers/main/en/api/models/autoencoderkl_mochi#diffusers.AutoencoderKLMochi)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).</paramsdesc><paramgroups>0</paramgroups></docstring>

The Mochi pipeline for text-to-video generation.

Reference: https://github.com/genmoai/models





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.MochiPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L497</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_frames", "val": ": int = 19"}, {"name": "num_inference_steps", "val": ": int = 64"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, *optional*, defaults to `self.default_height`) --
  The height in pixels of the generated image. This is set to 480 by default for the best results.
- **width** (`int`, *optional*, defaults to `self.default_width`) --
  The width in pixels of the generated image. This is set to 848 by default for the best results.
- **num_frames** (`int`, defaults to `19`) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*, defaults to 64) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, defaults to `4.5`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.FloatTensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated video. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.mochi.MochiPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to `256`) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.mochi.MochiPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `~pipelines.mochi.MochiPipelineOutput` is returned, otherwise a `tuple`
is returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.MochiPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import MochiPipeline
>>> from diffusers.utils import export_to_video

>>> pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
>>> pipe.enable_model_cpu_offload()
>>> pipe.enable_vae_tiling()
>>> prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
>>> frames = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).frames[0]
>>> export_to_video(frames, "mochi.mp4")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.MochiPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L403</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.MochiPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L430</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.MochiPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L390</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.MochiPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L416</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.MochiPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_mochi.py#L254</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** -- (`torch.device`, *optional*):
  torch device
- **dtype** -- (`torch.dtype`, *optional*):
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## MochiPipelineOutput[[diffusers.pipelines.mochi.pipeline_output.MochiPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.mochi.pipeline_output.MochiPipelineOutput</name><anchor>diffusers.pipelines.mochi.pipeline_output.MochiPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/mochi/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Mochi pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/mochi.md" />

### Bria 3.2
https://huggingface.co/docs/diffusers/main/api/pipelines/bria_3_2.md

# Bria 3.2

Bria 3.2 is the next-generation commercial-ready text-to-image model. With just 4 billion parameters, it provides exceptional aesthetics and text rendering, and has been evaluated to deliver results on par with leading open-source models while outperforming other licensed models.
In addition to being built entirely on licensed data, 3.2 provides several advantages for enterprise and commercial use:

- Efficient Compute: the model is 3x smaller than equivalent models on the market (4B parameters vs. 12B parameters for other open-source models).
- Architecture Consistency: same architecture as 3.1, ideal for users looking to upgrade without disruption.
- Fine-tuning Speedup: 2x faster fine-tuning on L40S and A100.

Original model checkpoints for Bria 3.2 can be found [here](https://huggingface.co/briaai/BRIA-3.2).
Github repo for Bria 3.2 can be found [here](https://github.com/Bria-AI/BRIA-3.2).

If you want to learn more about the Bria platform and get free trial access, please visit [bria.ai](https://bria.ai).


## Usage

_As the model is gated, before using it with diffusers you first need to go to the [Bria 3.2 Hugging Face page](https://huggingface.co/briaai/BRIA-3.2), fill in the form and accept the gate. Once you are in, you need to log in so that your system knows you’ve accepted the gate._

Use the command below to log in:

```bash
hf auth login
```
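
Once logged in, the model loads with the usual `from_pretrained` call. Below is a minimal, hedged sketch; see the full example in the `__call__` reference further down for the recommended precision handling of the text encoder and VAE.

```py
import torch
from diffusers import BriaPipeline

pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Prompt and output filename are illustrative.
image = pipe("A lighthouse on a rocky coast at golden hour").images[0]
image.save("bria_example.png")
```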


## BriaPipeline[[diffusers.BriaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.BriaPipeline</name><anchor>diffusers.BriaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/bria/pipeline_bria.py#L89</source><parameters>[{"name": "transformer", "val": ": BriaTransformer2DModel"}, {"name": "scheduler", "val": ": typing.Union[diffusers.schedulers.scheduling_flow_match_euler_discrete.FlowMatchEulerDiscreteScheduler, diffusers.schedulers.scheduling_utils.KarrasDiffusionSchedulers]"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "tokenizer", "val": ": T5TokenizerFast"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([BriaTransformer2DModel](/docs/diffusers/main/en/api/models/bria_transformer#diffusers.BriaTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. Bria uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

Based on FluxPipeline with several changes:
- No pooled embeddings
- Zero padding is used for prompts
- No guidance embedding, since this is not a distilled version





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.BriaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/bria/pipeline_bria.py#L448</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 30"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 128"}, {"name": "clip_value", "val": ": typing.Optional[float] = None"}, {"name": "normalize", "val": ": bool = False"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 30) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2. of [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. A higher guidance scale encourages the model to generate images that are closely linked to the text
  `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.bria.BriaPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 128) -- Maximum sequence length to use with the `prompt`.
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.BriaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import BriaPipeline

>>> pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
# BRIA's T5 text encoder is sensitive to precision. We need to cast it to bfloat16 and keep the final layer in float32.

>>> pipe.text_encoder = pipe.text_encoder.to(dtype=torch.bfloat16)
>>> for block in pipe.text_encoder.encoder.block:
...     block.layer[-1].DenseReluDense.wo.to(dtype=torch.float32)
# BRIA's VAE is not supported in mixed precision, so we use float32.

>>> if pipe.vae.config.shift_factor == 0:
...     pipe.vae.to(dtype=torch.float32)

>>> prompt = "Photorealistic food photography of a stack of fluffy pancakes on a white plate, with maple syrup being poured over them. On top of the pancakes are the words 'BRIA 3.2' in bold, yellow, 3D letters. The background is dark and out of focus."
>>> image = pipe(prompt).images[0]
>>> image.save("bria.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.BriaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/bria/pipeline_bria.py#L146</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 128"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  Prompt to be encoded.
- **device** (`torch.device`) --
  The torch device.
- **num_images_per_prompt** (`int`) --
  Number of images that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`) --
  Whether to use classifier-free guidance or not.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/bria_3_2.md" />

### Würstchen
https://huggingface.co/docs/diffusers/main/api/pipelines/wuerstchen.md

# Würstchen

> [!WARNING]
> This pipeline is deprecated but it can still be used. However, we won't test the pipeline anymore and won't accept any changes to it. If you run into any issues, reinstall the last Diffusers version that supported this model.

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

<img src="https://github.com/dome272/Wuerstchen/assets/61938694/0617c863-165a-43ee-9303-2a17299a0cf9">

[Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models](https://huggingface.co/papers/2306.00637) is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville.

The abstract from the paper is:

*We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1's 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility.*

## Würstchen Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is far more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before, because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://huggingface.co/papers/2306.00637)). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference.

## Würstchen v2 comes to Diffusers

After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive with current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements.

- Higher resolution (1024x1024 up to 2048x2048)
- Faster inference
- Multi Aspect Resolution Sampling
- Better quality


We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are:

- v2-base
- v2-aesthetic
- **(default)** v2-interpolated (50% interpolation between v2-base and v2-aesthetic)

We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetuning, as it does not have a style bias, and use v2-aesthetic for very artistic generations.
A comparison can be seen here:

<img src="https://github.com/dome272/Wuerstchen/assets/61938694/2914830f-cbd3-461c-be64-d50734f4b49d" width=500>

## Text-to-Image Generation

For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows:

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda")

caption = "Anthropomorphic cat dressed as a fire fighter"
images = pipe(
    caption,
    width=1024,
    height=1536,
    prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    prior_guidance_scale=4.0,
    num_images_per_prompt=2,
).images
```
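
The pipeline returns standard PIL images, so saving or inspecting them works as usual. A minimal continuation of the snippet above:

```python
# `images` is the list returned by the pipeline call above
for i, image in enumerate(images):
    image.save(f"wuerstchen_{i}.png")
```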

For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. Each has a different job, and they only work together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the `prior_pipeline`. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into the bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into pixel space. Stage B & Stage A are both encapsulated in the `decoder_pipeline`. For more details, take a look at the [paper](https://huggingface.co/papers/2306.00637).

```python
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2

prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
    "warp-ai/wuerstchen-prior", torch_dtype=dtype
).to(device)
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=dtype
).to(device)

caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = ""

prior_output = prior_pipeline(
    prompt=caption,
    height=1024,
    width=1536,
    timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
    image_embeddings=prior_output.image_embeddings,
    prompt=caption,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
).images[0]
decoder_output
```

## Speed-Up Inference
You can make use of the `torch.compile` function and gain a speed-up of about 2-3x:

```python
prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True)
decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True)
```
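
Note that `torch.compile` with `mode="reduce-overhead"` compiles lazily: the first call pays the compilation cost, and only subsequent calls reflect the speed-up. A rough timing sketch, assuming the `prior_pipeline` and inputs from the example above:

```python
import time

# The first iteration triggers compilation and is slow; the second one shows the compiled speed.
for run in range(2):
    start = time.time()
    prior_output = prior_pipeline(
        prompt=caption,
        height=1024,
        width=1536,
        timesteps=DEFAULT_STAGE_C_TIMESTEPS,
        negative_prompt=negative_prompt,
        guidance_scale=4.0,
        num_images_per_prompt=num_images_per_prompt,
    )
    torch.cuda.synchronize()
    print(f"run {run}: {time.time() - start:.1f}s")
```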

## Limitations

- Due to the high compression employed by Würstchen, generations can lack a good amount
of detail. To our human eye, this is especially noticeable in faces, hands etc.
- **Images can only be generated in 128-pixel steps**, e.g. the next higher resolution
after 1024x1024 is 1152x1152 (see the sketch after this list for snapping a target size to this grid)
- The model lacks the ability to render correct text in images
- The model often does not achieve photorealism
- Difficult compositional prompts are hard for the model
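
Because of the 128-pixel constraint mentioned above, it can help to snap an arbitrary target size onto the supported grid before calling the pipeline. A small helper sketch (the round-up choice is our own, not part of the library):

```python
def snap_to_grid(size: int, step: int = 128) -> int:
    """Round a target dimension up to the nearest supported multiple of `step`."""
    return ((size + step - 1) // step) * step

# e.g. a requested 1000x1400 image becomes 1024x1408
height, width = snap_to_grid(1000), snap_to_grid(1400)
```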

The original codebase, as well as experimental ideas, can be found at [dome272/Wuerstchen](https://github.com/dome272/Wuerstchen).


## WuerstchenCombinedPipeline[[diffusers.WuerstchenCombinedPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WuerstchenCombinedPipeline</name><anchor>diffusers.WuerstchenCombinedPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py#L43</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "decoder", "val": ": WuerstchenDiffNeXt"}, {"name": "scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "vqgan", "val": ": PaellaVQModel"}, {"name": "prior_tokenizer", "val": ": CLIPTokenizer"}, {"name": "prior_text_encoder", "val": ": CLIPTextModel"}, {"name": "prior_prior", "val": ": WuerstchenPrior"}, {"name": "prior_scheduler", "val": ": DDPMWuerstchenScheduler"}]</parameters><paramsdesc>- **tokenizer** (`CLIPTokenizer`) --
  The decoder tokenizer to be used for text inputs.
- **text_encoder** (`CLIPTextModel`) --
  The decoder text encoder to be used for text inputs.
- **decoder** (`WuerstchenDiffNeXt`) --
  The decoder model to be used for decoder image generation pipeline.
- **scheduler** (`DDPMWuerstchenScheduler`) --
  The scheduler to be used for decoder image generation pipeline.
- **vqgan** (`PaellaVQModel`) --
  The VQGAN model to be used for decoder image generation pipeline.
- **prior_tokenizer** (`CLIPTokenizer`) --
  The prior tokenizer to be used for text inputs.
- **prior_text_encoder** (`CLIPTextModel`) --
  The prior text encoder to be used for text inputs.
- **prior_prior** (`WuerstchenPrior`) --
  The prior model to be used for prior pipeline.
- **prior_scheduler** (`DDPMWuerstchenScheduler`) --
  The scheduler to be used for prior pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Combined Pipeline for text-to-image generation using Wuerstchen

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WuerstchenCombinedPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py#L144</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "prior_num_inference_steps", "val": ": int = 60"}, {"name": "prior_timesteps", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "prior_guidance_scale", "val": ": float = 4.0"}, {"name": "num_inference_steps", "val": ": int = 12"}, {"name": "decoder_timesteps", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "decoder_guidance_scale", "val": ": float = 0.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "prior_callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "prior_callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation for the prior and decoder.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, *e.g.*
  prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to 512) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width in pixels of the generated image.
- **prior_guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `prior_guidance_scale` is defined as `w` of
  equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by
  setting `prior_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are
  closely linked to the text `prompt`, usually at the expense of lower image quality.
- **prior_num_inference_steps** (`Union[int, Dict[float, int]]`, *optional*, defaults to 60) --
  The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. For more specific timestep spacing, you can pass customized
  `prior_timesteps`
- **num_inference_steps** (`int`, *optional*, defaults to 12) --
  The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at
  the expense of slower inference. For more specific timestep spacing, you can pass customized
  `timesteps`
- **prior_timesteps** (`List[float]`, *optional*) --
  Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced
  `prior_num_inference_steps` timesteps are used. Must be in descending order.
- **decoder_timesteps** (`List[float]`, *optional*) --
  Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced
  `num_inference_steps` timesteps are used. Must be in descending order.
- **decoder_guidance_scale** (`float`, *optional*, defaults to 0.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **prior_callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep:
  int, callback_kwargs: Dict)`.
- **prior_callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the
  list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in
  the `._callback_tensor_inputs` attribute of your pipeline class.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><retdesc>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple` [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) if `return_dict` is True,
otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WuerstchenCombinedPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import WuerstchenCombinedPipeline

>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to(
...     "cuda"
... )
>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> images = pipe(prompt=prompt)
```

</ExampleCodeBlock>





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.WuerstchenCombinedPipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py#L116</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
`enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
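
A minimal usage sketch, assuming the `warp-ai/wuerstchen` checkpoint used in the examples above; the method takes no required arguments and replaces the usual `.to("cuda")` call:

```python
import torch
from diffusers import WuerstchenCombinedPipeline

pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16)
# Whole sub-models are moved to the GPU only while they run, then back to CPU.
pipe.enable_model_cpu_offload()
images = pipe(prompt="Anthropomorphic cat dressed as a fire fighter").images
```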


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.WuerstchenCombinedPipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py#L126</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters></docstring>

Offloads all models (`unet`, `text_encoder`, `vae`, and `safety checker` state dicts) to CPU using 🤗
Accelerate, significantly reducing memory usage. Models are moved to a `torch.device('meta')` and loaded on a
GPU only when their specific submodule's `forward` method is called. Offloading happens on a submodule basis.
Memory savings are higher than using `enable_model_cpu_offload`, but performance is lower.
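
The call pattern mirrors `enable_model_cpu_offload`; a short sketch, continuing from the snippet above:

```python
# Submodules are streamed to the GPU on demand: lowest memory use, but noticeably slower inference.
pipe.enable_sequential_cpu_offload()
images = pipe(prompt="Anthropomorphic cat dressed as a fire fighter").images
```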


</div></div>

## WuerstchenPriorPipeline[[diffusers.WuerstchenPriorPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WuerstchenPriorPipeline</name><anchor>diffusers.WuerstchenPriorPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py#L73</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "prior", "val": ": WuerstchenPrior"}, {"name": "scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "latent_mean", "val": ": float = 42.0"}, {"name": "latent_std", "val": ": float = 1.0"}, {"name": "resolution_multiple", "val": ": float = 42.67"}]</parameters><paramsdesc>- **prior** (`Prior`) --
  The canonical unCLIP prior to approximate the image embedding from the text embedding.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  Frozen text-encoder.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **scheduler** (`DDPMWuerstchenScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **latent_mean** (`float`, *optional*, defaults to 42.0) --
  Mean value for latent diffusers.
- **latent_std** (`float`, *optional*, defaults to 1.0) --
  Standard value for latent diffusers.
- **resolution_multiple** (`float`, *optional*, defaults to 42.67) --
  Default resolution for multiple images generated.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating image prior for Wuerstchen.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WuerstchenPriorPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py#L289</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "num_inference_steps", "val": ": int = 60"}, {"name": "timesteps", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 8.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pt'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **height** (`int`, *optional*, defaults to 1024) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to 1024) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 60) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 8.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `decoder_guidance_scale` is defined as `w` of
  equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by
  setting `decoder_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are
  closely linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `decoder_guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pt"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><retdesc>`~pipelines.WuerstchenPriorPipelineOutput` or `tuple` `~pipelines.WuerstchenPriorPipelineOutput` if
`return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
generated image embeddings.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WuerstchenPriorPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import WuerstchenPriorPipeline

>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained(
...     "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
... ).to("cuda")

>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt)
```

</ExampleCodeBlock>





</div></div>

## WuerstchenPriorPipelineOutput[[diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput</name><anchor>diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py#L60</source><parameters>[{"name": "image_embeddings", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}]</parameters><paramsdesc>- **image_embeddings** (`torch.Tensor` or `np.ndarray`) --
  Prior image embeddings for text prompt</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for WuerstchenPriorPipeline.




</div>

## WuerstchenDecoderPipeline[[diffusers.WuerstchenDecoderPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WuerstchenDecoderPipeline</name><anchor>diffusers.WuerstchenDecoderPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen.py#L59</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "decoder", "val": ": WuerstchenDiffNeXt"}, {"name": "scheduler", "val": ": DDPMWuerstchenScheduler"}, {"name": "vqgan", "val": ": PaellaVQModel"}, {"name": "latent_dim_scale", "val": ": float = 10.67"}]</parameters><paramsdesc>- **tokenizer** (`CLIPTokenizer`) --
  The CLIP tokenizer.
- **text_encoder** (`CLIPTextModel`) --
  The CLIP text encoder.
- **decoder** (`WuerstchenDiffNeXt`) --
  The WuerstchenDiffNeXt unet decoder.
- **vqgan** (`PaellaVQModel`) --
  The VQGAN model.
- **scheduler** (`DDPMWuerstchenScheduler`) --
  A scheduler to be used in combination with `prior` to generate image embedding.
- **latent_dim_scale** (`float`, *optional*, defaults to 10.67) --
  Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are
  height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and
  width=int(24*10.67)=256 in order to match the training conditions.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for generating images from the Wuerstchen model.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WuerstchenDecoderPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen.py#L216</source><parameters>[{"name": "image_embeddings", "val": ": typing.Union[torch.Tensor, typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_inference_steps", "val": ": int = 12"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 0.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **image_embeddings** (`torch.Tensor` or `List[torch.Tensor]`) --
  Image Embeddings either extracted from an image or generated by a Prior Model.
- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide the image generation.
- **num_inference_steps** (`int`, *optional*, defaults to 12) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
  timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 0.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `decoder_guidance_scale` is defined as `w` of
  equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by
  setting `decoder_guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are
  closely linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `decoder_guidance_scale` is less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><retdesc>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple` [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) if `return_dict` is True,
otherwise a `tuple`. When returning a tuple, the first element is a list with the generated image
embeddings.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WuerstchenDecoderPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline

>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained(
...     "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
... ).to("cuda")
>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to(
...     "cuda"
... )

>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt)
>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt)
```

</ExampleCodeBlock>





</div></div>

## Citation

```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models},
      author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/wuerstchen.md" />

### Text2Video-Zero
https://huggingface.co/docs/diffusers/main/api/pipelines/text_to_video_zero.md

# Text2Video-Zero

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators](https://huggingface.co/papers/2303.13439) is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, [Zhangyang Wang](https://www.ece.utexas.edu/people/faculty/atlas-wang), Shant Navasardyan, [Humphrey Shi](https://www.humphreyshi.com).

Text2Video-Zero enables zero-shot video generation using either:
1. A textual prompt
2. A prompt combined with guidance from poses or edges
3. Video Instruct-Pix2Pix (instruction-guided video editing)

Results are temporally consistent and closely follow the guidance and textual prompts.

![teaser-img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2v_zero_teaser.png)

The abstract from the paper is:

*Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain.
Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object.
Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing.
As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.*

You can find additional information about Text2Video-Zero on the [project page](https://text2video-zero.github.io/), [paper](https://huggingface.co/papers/2303.13439), and [original codebase](https://github.com/Picsart-AI-Research/Text2Video-Zero).

## Usage example

### Text-To-Video

To generate a video from a prompt, run the following Python code:
```python
import torch
from diffusers import TextToVideoZeroPipeline
import imageio

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
```
You can change these parameters in the pipeline call (see the sketch after this list):
* Motion field strength (see the [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1):
    * `motion_field_strength_x` and `motion_field_strength_y`. Default: `motion_field_strength_x=12`, `motion_field_strength_y=12`
* `T` and `T'` (see the [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1)
    * `t0` and `t1` in the range `{0, ..., num_inference_steps}`. Default: `t0=45`, `t1=48`
* Video length:
    * `video_length`, the number of frames to be generated. Default: `video_length=8`
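
All of these are ordinary keyword arguments of the pipeline call. A sketch, reusing the `pipe` and `prompt` from the snippet above (the specific values are just the documented defaults):

```python
result = pipe(
    prompt=prompt,
    video_length=8,              # number of generated frames
    motion_field_strength_x=12,  # global motion along the x-axis (Sect. 3.3.1)
    motion_field_strength_y=12,  # global motion along the y-axis
    t0=45,
    t1=48,                       # t0 < t1 <= num_inference_steps
).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
```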

We can also generate longer videos by doing the processing in a chunk-by-chunk manner:
```python
import torch
from diffusers import TextToVideoZeroPipeline
import numpy as np
import imageio

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
seed = 0
video_length = 24  # 24 frames ÷ 4 fps = 6 seconds
chunk_size = 8
prompt = "A panda is playing guitar on times square"

# Generate the video chunk-by-chunk
result = []
chunk_ids = np.arange(0, video_length, chunk_size - 1)
generator = torch.Generator(device="cuda")
for i in range(len(chunk_ids)):
    print(f"Processing chunk {i + 1} / {len(chunk_ids)}")
    ch_start = chunk_ids[i]
    ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1]
    # Attach the first frame for Cross Frame Attention
    frame_ids = [0] + list(range(ch_start, ch_end))
    # Fix the seed for the temporal consistency
    generator.manual_seed(seed)
    output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids)
    result.append(output.images[1:])

# Concatenate chunks and save
result = np.concatenate(result)
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
```


- #### SDXL Support
In order to use the SDXL model when generating a video from a prompt, use the `TextToVideoZeroSDXLPipeline` pipeline:

```python
import torch
from diffusers import TextToVideoZeroSDXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")
```
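
Generation itself then looks the same as with the base pipeline; a minimal sketch (assuming the SDXL variant returns frames in the same `[0, 1]` float format):

```python
import imageio

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
```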

### Text-To-Video with Pose Control
To generate a video from a prompt with additional pose control:

1. Download a demo video

    ```python
    from huggingface_hub import hf_hub_download

    filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4"
    repo_id = "PAIR/Text2Video-Zero"
    video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
    ```


2. Read video containing extracted pose images
    ```python
    from PIL import Image
    import imageio

    reader = imageio.get_reader(video_path, "ffmpeg")
    frame_count = 8
    pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
    ```
    To extract pose from actual video, read [ControlNet documentation](controlnet).

3. Run `StableDiffusionControlNetPipeline` with our custom attention processor

    ```python
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

    model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        model_id, controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # Set the attention processor
    pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
    pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

    # fix latents for all frames
    latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)

    prompt = "Darth Vader dancing in a desert"
    result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
    imageio.mimsave("video.mp4", result, fps=4)
    ```
- #### SDXL Support

	Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL:
	```python
	import torch
	from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
	from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

	controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0'
	model_id = 'stabilityai/stable-diffusion-xl-base-1.0'

	controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16)
	pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
		model_id, controlnet=controlnet, torch_dtype=torch.float16
	).to('cuda')

	# Set the attention processor
	pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
	pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

	# fix latents for all frames
	latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)

	prompt = "Darth Vader dancing in a desert"
	result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
	imageio.mimsave("video.mp4", result, fps=4)
	```

### Text-To-Video with Edge Control

To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, using the [Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny) instead.
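
A minimal sketch of the change: only the ControlNet checkpoint is swapped, and the conditioning frames are Canny edge maps rather than pose skeletons (here `canny_edges` is assumed to be a list of PIL edge images, prepared as in the DreamBooth section below):

```python
import torch
import imageio
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Same cross-frame attention trick as in the pose-guided example
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# fix latents for all frames
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

prompt = "Darth Vader dancing in a desert"
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)
```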


### Video Instruct-Pix2Pix

To perform text-guided video editing (with [InstructPix2Pix](pix2pix)):

1. Download a demo video

    ```python
    from huggingface_hub import hf_hub_download

    filename = "__assets__/pix2pix video/camel.mp4"
    repo_id = "PAIR/Text2Video-Zero"
    video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
    ```

2. Read video from path
    ```python
    from PIL import Image
    import imageio

    reader = imageio.get_reader(video_path, "ffmpeg")
    frame_count = 8
    video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
    ```

3. Run `StableDiffusionInstructPix2PixPipeline` with our custom attention processor
    ```python
    import torch
    from diffusers import StableDiffusionInstructPix2PixPipeline
    from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

    model_id = "timbrooks/instruct-pix2pix"
    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
    pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3))

    prompt = "make it Van Gogh Starry Night style"
    result = pipe(prompt=[prompt] * len(video), image=video).images
    imageio.mimsave("edited_video.mp4", result, fps=4)
    ```


### DreamBooth specialization

Methods **Text-To-Video**, **Text-To-Video with Pose Control** and **Text-To-Video with Edge Control**
can run with custom [DreamBooth](../../training/dreambooth) models, as shown below for
[Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny) and
[Avatar style DreamBooth](https://huggingface.co/PAIR/text2video-zero-controlnet-canny-avatar) model:

1. Download a demo video

    ```python
    from huggingface_hub import hf_hub_download

    filename = "__assets__/canny_videos_mp4/girl_turning.mp4"
    repo_id = "PAIR/Text2Video-Zero"
    video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
    ```

2. Read video from path
    ```python
    from PIL import Image
    import imageio

    reader = imageio.get_reader(video_path, "ffmpeg")
    frame_count = 8
    canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
    ```

3. Run `StableDiffusionControlNetPipeline` with custom trained DreamBooth model
    ```python
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

    # set model id to custom model
    model_id = "PAIR/text2video-zero-controlnet-canny-avatar"
    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        model_id, controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # Set the attention processor
    pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
    pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

    # fix latents for all frames
    latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

    prompt = "oil painting of a beautiful girl avatar style"
    result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
    imageio.mimsave("video.mp4", result, fps=4)
    ```

You can browse available DreamBooth-trained models with [this link](https://huggingface.co/models?search=dreambooth).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## TextToVideoZeroPipeline[[diffusers.TextToVideoZeroPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TextToVideoZeroPipeline</name><anchor>diffusers.TextToVideoZeroPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L298</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.TextToVideoZeroPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L545</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "video_length", "val": ": typing.Optional[int] = 8"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "motion_field_strength_x", "val": ": float = 12"}, {"name": "motion_field_strength_y", "val": ": float = 12"}, {"name": "output_type", "val": ": typing.Optional[str] = 'tensor'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": typing.Optional[int] = 1"}, {"name": "t0", "val": ": int = 44"}, {"name": "t1", "val": ": int = 47"}, {"name": "frame_ids", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **video_length** (`int`, *optional*, defaults to 8) --
  The number of generated video frames.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in video generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated video. Choose between `"latent"` and `"np"`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a
  [TextToVideoPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video_zero#diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput) instead of
  a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **motion_field_strength_x** (`float`, *optional*, defaults to 12) --
  Strength of motion in generated video along x-axis. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **motion_field_strength_y** (`float`, *optional*, defaults to 12) --
  Strength of motion in generated video along y-axis. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **t0** (`int`, *optional*, defaults to 44) --
  Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **t1** (`int`, *optional*, defaults to 47) --
  Timestep t1. Should be in the range [t0 + 1, num_inference_steps - 1]. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **frame_ids** (`List[int]`, *optional*) --
  Indexes of the frames that are being generated. This is used when generating longer videos
  chunk-by-chunk.</paramsdesc><paramgroups>0</paramgroups><rettype>[TextToVideoPipelineOutput](/docs/diffusers/main/en/api/pipelines/text_to_video_zero#diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput)</rettype><retdesc>The output contains a `ndarray` of the generated video, when `output_type` != `"latent"`, otherwise a
latent code of generated videos and a list of `bool`s indicating whether the corresponding generated
video contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.
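For orientation, here is a minimal usage sketch; the Stable Diffusion checkpoint id and the `imageio` export step are illustrative assumptions, not requirements of this API:

```py
import torch
import imageio
from diffusers import TextToVideoZeroPipeline

# Zero-shot text-to-video reuses a regular Stable Diffusion checkpoint (id below is an assumption)
pipe = TextToVideoZeroPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# With the default output_type, `images` is a NumPy array of frames in [0, 1]
result = pipe(prompt="A panda is playing guitar on times square").images
frames = [(frame * 255).astype("uint8") for frame in result]
imageio.mimsave("video.mp4", frames, fps=4)
```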








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>backward_loop</name><anchor>diffusers.TextToVideoZeroPipeline.backward_loop</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L390</source><parameters>[{"name": "latents", "val": ""}, {"name": "timesteps", "val": ""}, {"name": "prompt_embeds", "val": ""}, {"name": "guidance_scale", "val": ""}, {"name": "callback", "val": ""}, {"name": "callback_steps", "val": ""}, {"name": "num_warmup_steps", "val": ""}, {"name": "extra_step_kwargs", "val": ""}, {"name": "cross_attention_kwargs", "val": " = None"}]</parameters><paramsdesc>- **latents** --
  Latents at time timesteps[0].
- **timesteps** --
  Time steps along which to perform backward process.
- **prompt_embeds** --
  Pre-generated text embeddings.
- **guidance_scale** --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **extra_step_kwargs** --
  Extra keyword arguments forwarded to the scheduler `step` call.
- **cross_attention_kwargs** --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **num_warmup_steps** --
  Number of warmup steps.</paramsdesc><paramgroups>0</paramgroups><rettype>latents</rettype><retdesc>Latents of backward process output at time timesteps[-1].</retdesc></docstring>

Perform backward process given list of time steps.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.TextToVideoZeroPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L817</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward_loop</name><anchor>diffusers.TextToVideoZeroPipeline.forward_loop</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L366</source><parameters>[{"name": "x_t0", "val": ""}, {"name": "t0", "val": ""}, {"name": "t1", "val": ""}, {"name": "generator", "val": ""}]</parameters><paramsdesc>- **x_t0** --
  Latent code at time t0.
- **t0** --
  Timestep at t0.
- **t1** --
  Timestep at t1.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.</paramsdesc><paramgroups>0</paramgroups><rettype>x_t1</rettype><retdesc>Forward process applied to x_t0 from time t0 to t1.</retdesc></docstring>

Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance.
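As a sketch of what this computes, in standard DDPM notation (with ᾱ the cumulative product of the noise schedule, a symbol not defined elsewhere in these docs), the forward step from `t0` to `t1` amounts to:

```latex
x_{t_1} = \sqrt{\tfrac{\bar{\alpha}_{t_1}}{\bar{\alpha}_{t_0}}}\, x_{t_0}
        + \sqrt{1 - \tfrac{\bar{\alpha}_{t_1}}{\bar{\alpha}_{t_0}}}\, \epsilon,
\qquad \epsilon \sim \mathcal{N}(0, I)
```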








</div></div>

## TextToVideoZeroSDXLPipeline[[diffusers.TextToVideoZeroSDXLPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TextToVideoZeroSDXLPipeline</name><anchor>diffusers.TextToVideoZeroSDXLPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero_sdxl.py#L348</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.TextToVideoZeroSDXLPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero_sdxl.py#L951</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "video_length", "val": ": typing.Optional[int] = 8"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "frame_ids", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "motion_field_strength_x", "val": ": float = 12"}, {"name": "motion_field_strength_y", "val": ": float = 12"}, {"name": "output_type", "val": ": typing.Optional[str] = 'tensor'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "t0", "val": ": int = 44"}, {"name": "t1", "val": ": int = 47"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **video_length** (`int`, *optional*, defaults to 8) --
  The number of generated video frames.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **frame_ids** (`List[int]`, *optional*) --
  Indexes of the frames that are being generated. This is used when generating longer videos
  chunk-by-chunk.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **motion_field_strength_x** (`float`, *optional*, defaults to 12) --
  Strength of motion in generated video along x-axis. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **motion_field_strength_y** (`float`, *optional*, defaults to 12) --
  Strength of motion in generated video along y-axis. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generate image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead
  of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891), where `guidance_rescale` is defined as `φ` in equation
  16. Guidance rescale should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **t0** (`int`, *optional*, defaults to 44) --
  Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.
- **t1** (`int`, *optional*, defaults to 47) --
  Timestep t1. Should be in the range [t0 + 1, num_inference_steps - 1]. See the
  [paper](https://huggingface.co/papers/2303.13439), Sect. 3.3.1.</paramsdesc><paramgroups>0</paramgroups><retdesc>`~pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoSDXLPipelineOutput` or
`tuple`: `~pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoSDXLPipelineOutput`
if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the
generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.
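As a rough usage sketch (the SDXL base checkpoint id below is an assumption; any compatible SDXL checkpoint should work):

```py
import torch
from diffusers import TextToVideoZeroSDXLPipeline

pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Generates 8 frames by default; pass video_length to change it
frames = pipe(prompt="A panda surfing a wave, high quality").images
```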






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>backward_loop</name><anchor>diffusers.TextToVideoZeroSDXLPipeline.backward_loop</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero_sdxl.py#L862</source><parameters>[{"name": "latents", "val": ""}, {"name": "timesteps", "val": ""}, {"name": "prompt_embeds", "val": ""}, {"name": "guidance_scale", "val": ""}, {"name": "callback", "val": ""}, {"name": "callback_steps", "val": ""}, {"name": "num_warmup_steps", "val": ""}, {"name": "extra_step_kwargs", "val": ""}, {"name": "add_text_embeds", "val": ""}, {"name": "add_time_ids", "val": ""}, {"name": "cross_attention_kwargs", "val": " = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **latents** --
  Latents at time timesteps[0].
- **timesteps** --
  Time steps along which to perform backward process.
- **prompt_embeds** --
  Pre-generated text embeddings.
- **guidance_scale** --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **extra_step_kwargs** --
  Extra keyword arguments forwarded to the scheduler `step` call.
- **cross_attention_kwargs** --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **num_warmup_steps** --
  Number of warmup steps.</paramsdesc><paramgroups>0</paramgroups><rettype>latents</rettype><retdesc>Latents of backward process output at time timesteps[-1].</retdesc></docstring>

Perform backward process given list of time steps.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.TextToVideoZeroSDXLPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero_sdxl.py#L599</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward_loop</name><anchor>diffusers.TextToVideoZeroSDXLPipeline.forward_loop</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero_sdxl.py#L838</source><parameters>[{"name": "x_t0", "val": ""}, {"name": "t0", "val": ""}, {"name": "t1", "val": ""}, {"name": "generator", "val": ""}]</parameters><paramsdesc>- **x_t0** --
  Latent code at time t0.
- **t0** --
  Timestep at t0.
- **t1** --
  Timestep at t1.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.</paramsdesc><paramgroups>0</paramgroups><rettype>x_t1</rettype><retdesc>Forward process applied to x_t0 from time t0 to t1.</retdesc></docstring>

Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance.








</div></div>

## TextToVideoPipelineOutput[[diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput</name><anchor>diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_zero.py#L197</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for zero-shot text-to-video pipeline.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/text_to_video_zero.md" />

### Hunyuan-DiT
https://huggingface.co/docs/diffusers/main/api/pipelines/hunyuandit.md

# Hunyuan-DiT
![chinese elements understanding](https://github.com/gnobitab/diffusers-hunyuan/assets/1157982/39b99036-c3cb-4f16-bb1a-40ec25eda573)

[Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding](https://huggingface.co/papers/2405.08748) from Tencent Hunyuan.

The abstract from the paper is:

*We present Hunyuan-DiT, a text-to-image diffusion transformer with fine-grained understanding of both English and Chinese. To construct Hunyuan-DiT, we carefully design the transformer structure, text encoder, and positional encoding. We also build from scratch a whole data pipeline to update and evaluate data for iterative model optimization. For fine-grained language understanding, we train a Multimodal Large Language Model to refine the captions of the images. Finally, Hunyuan-DiT can perform multi-turn multimodal dialogue with users, generating and refining images according to the context. Through our holistic human evaluation protocol with more than 50 professional human evaluators, Hunyuan-DiT sets a new state-of-the-art in Chinese-to-image generation compared with other open-source models.*


You can find the original codebase at [Tencent/HunyuanDiT](https://github.com/Tencent/HunyuanDiT) and all the available checkpoints at [Tencent-Hunyuan](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT).

**Highlights**: HunyuanDiT supports Chinese/English-to-image, multi-resolution generation.

HunyuanDiT has the following components:
* It uses a diffusion transformer as the backbone
* It combines two text encoders, a bilingual CLIP and a multilingual T5 encoder

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

> [!TIP]
> You can further improve generation quality by passing the generated image from `HunyuanDiTPipeline` to the [SDXL refiner](../../using-diffusers/sdxl#base-to-refiner-model) model.

## Optimization

You can optimize the pipeline's runtime and memory consumption with torch.compile and feed-forward chunking. To learn about other optimization methods, check out the [Speed up inference](../../optimization/fp16) and [Reduce memory usage](../../optimization/memory) guides.

### Inference

Use [`torch.compile`](https://huggingface.co/docs/diffusers/main/en/tutorials/fast_diffusion#torchcompile) to reduce the inference latency.

First, load the pipeline:

```python
from diffusers import HunyuanDiTPipeline
import torch

pipeline = HunyuanDiTPipeline.from_pretrained(
	"Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")
```

Then change the memory layout of the pipeline's `transformer` and `vae` components to `torch.channels_last`:

```python
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.vae.to(memory_format=torch.channels_last)
```

Finally, compile the components and run inference:

```python
pipeline.transformer = torch.compile(pipeline.transformer, mode="max-autotune", fullgraph=True)
pipeline.vae.decode = torch.compile(pipeline.vae.decode, mode="max-autotune", fullgraph=True)

image = pipeline(prompt="一个宇航员在骑马").images[0]
```

The [benchmark](https://gist.github.com/sayakpaul/29d3a14905cfcbf611fe71ebd22e9b23) results on a 80GB A100 machine are:

```bash
With torch.compile(): Average inference time: 12.470 seconds.
Without torch.compile(): Average inference time: 20.570 seconds.
```

### Memory optimization

By loading the T5 text encoder in 8-bit precision, you can run the pipeline in just under 6GB of GPU VRAM. Refer to [this script](https://gist.github.com/sayakpaul/3154605f6af05b98a41081aaba5ca43e) for details.
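The sketch below illustrates the idea; the repo id, `subfolder` name, and use of `BitsAndBytesConfig` follow the usual layout of the released checkpoint but are assumptions here, and the linked script remains the reference for the exact, memory-measured flow:

```py
import torch
from transformers import BitsAndBytesConfig, T5EncoderModel
from diffusers import HunyuanDiTPipeline

# Load only the mT5 text encoder in 8-bit (assumed repo/subfolder layout)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers",
    subfolder="text_encoder_2",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

pipeline = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers",
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.float16,
).to("cuda")  # the 8-bit encoder stays on its own device and is not moved by .to()

image = pipeline(prompt="一个宇航员在骑马").images[0]
```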

Furthermore, you can use the [enable_forward_chunking()](/docs/diffusers/main/en/api/models/hunyuan_transformer2d#diffusers.HunyuanDiT2DModel.enable_forward_chunking) method to reduce memory usage. Feed-forward chunking runs the feed-forward layers in a transformer block in a loop instead of all at once. This gives you a trade-off between memory consumption and inference runtime.

```diff
+ pipeline.transformer.enable_forward_chunking(chunk_size=1, dim=1)
```


## HunyuanDiTPipeline[[diffusers.HunyuanDiTPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanDiTPipeline</name><anchor>diffusers.HunyuanDiTPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuandit/pipeline_hunyuandit.py#L149</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": BertModel"}, {"name": "tokenizer", "val": ": BertTokenizer"}, {"name": "transformer", "val": ": HunyuanDiT2DModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}, {"name": "text_encoder_2", "val": ": typing.Optional[transformers.models.t5.modeling_t5.T5EncoderModel] = None"}, {"name": "tokenizer_2", "val": ": typing.Optional[transformers.models.mt5.tokenization_mt5.MT5Tokenizer] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. We use
  `sdxl-vae-fp16-fix`.
- **text_encoder** (Optional[`~transformers.BertModel`, `~transformers.CLIPTextModel`]) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  HunyuanDiT uses a fine-tuned bilingual CLIP.
- **tokenizer** (Optional[`~transformers.BertTokenizer`, `~transformers.CLIPTokenizer`]) --
  A `BertTokenizer` or `CLIPTokenizer` to tokenize text.
- **transformer** ([HunyuanDiT2DModel](/docs/diffusers/main/en/api/models/hunyuan_transformer2d#diffusers.HunyuanDiT2DModel)) --
  The HunyuanDiT model designed by Tencent Hunyuan.
- **text_encoder_2** (`T5EncoderModel`) --
  The mT5 embedder. Specifically, it is 't5-v1_1-xxl'.
- **tokenizer_2** (`MT5Tokenizer`) --
  The tokenizer for the mT5 embedder.
- **scheduler** ([DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler)) --
  A scheduler to be used in combination with HunyuanDiT to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for English/Chinese-to-image generation using HunyuanDiT.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

HunyuanDiT uses two text encoders: [mT5](https://huggingface.co/google/mt5-base) and a bilingual CLIP fine-tuned by the Hunyuan team.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.HunyuanDiTPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuandit/pipeline_hunyuandit.py#L568</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask_2", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = (1024, 1024)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "use_resolution_binning", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`) --
  The height in pixels of the generated image.
- **width** (`int`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **prompt_embeds_2** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **negative_prompt_embeds_2** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **prompt_attention_mask_2** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds_2` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **negative_prompt_attention_mask_2** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds_2` is passed directly.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback_on_step_end** (`Callable[[int, int, Dict], None]`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A callback function or a list of callback functions to be called at the end of each denoising step.
- **callback_on_step_end_tensor_inputs** (`List[str]`, *optional*) --
  A list of tensor inputs that should be passed to the callback function. If not defined, all tensor
  inputs will be passed.
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Rescale the noise_cfg according to `guidance_rescale`. Based on findings of [Common Diffusion Noise
  Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). See Section 3.4
- **original_size** (`Tuple[int, int]`, *optional*, defaults to `(1024, 1024)`) --
  The original size of the image. Used to calculate the time ids.
- **target_size** (`Tuple[int, int]`, *optional*) --
  The target size of the image. Used to calculate the time ids.
- **crops_coords_top_left** (`Tuple[int, int]`, *optional*, defaults to `(0, 0)`) --
  The top left coordinates of the crop. Used to calculate the time ids.
- **use_resolution_binning** (`bool`, *optional*, defaults to `True`) --
  Whether to use resolution binning or not. If `True`, the input resolution will be mapped to the closest
  standard resolution. Supported resolutions are 1024x1024, 1280x1280, 1024x768, 1152x864, 1280x960,
  768x1024, 864x1152, 960x1280, 1280x768, and 768x1280. It is recommended to set this to `True`.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation with HunyuanDiT.



<ExampleCodeBlock anchor="diffusers.HunyuanDiTPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import HunyuanDiTPipeline

>>> pipe = HunyuanDiTPipeline.from_pretrained(
...     "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> # You may also use English prompt as HunyuanDiT supports both English and Chinese
>>> # prompt = "An astronaut riding a horse"
>>> prompt = "一个宇航员在骑马"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.HunyuanDiTPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hunyuandit/pipeline_hunyuandit.py#L248</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "device", "val": ": device = None"}, {"name": "dtype", "val": ": dtype = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": typing.Optional[int] = None"}, {"name": "text_encoder_index", "val": ": int = 0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **dtype** (`torch.dtype`) --
  torch dtype
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the prompt. Required when `prompt_embeds` is passed directly.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Attention mask for the negative prompt. Required when `negative_prompt_embeds` is passed directly.
- **max_sequence_length** (`int`, *optional*) -- maximum sequence length to use for the prompt.
- **text_encoder_index** (`int`, *optional*) --
  Index of the text encoder to use. `0` for clip and `1` for T5.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/hunyuandit.md" />

### ControlNet
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet.md

# ControlNet

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.
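As a quick sketch of that workflow (the checkpoint ids and example image URL are the ones commonly used in the ControlNet guides and are assumptions here; `opencv-python` is required for the Canny preprocessing):

```py
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Turn an input photo into a Canny edge map to use as the control image
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(image), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The edge map constrains composition while the prompt controls content and style
output = pipe("a futuristic cityscape, best quality", image=control_image).images[0]
```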

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This model was contributed by [takuma104](https://huggingface.co/takuma104). ❤️

The original codebase can be found at [lllyasviel/ControlNet](https://github.com/lllyasviel/ControlNet), and you can find official ControlNet checkpoints on [lllyasviel's](https://huggingface.co/lllyasviel) Hub profile.

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## StableDiffusionControlNetPipeline[[diffusers.StableDiffusionControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionControlNetPipeline</name><anchor>diffusers.StableDiffusionControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet.py#L162</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet.py#L907</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet. When `prompt` is a list, and if a list of images is passed for a single
  ControlNet, each will be paired with each prompt in the `prompt` list. This also applies to multiple
  ControlNets, where a list of image lists can be passed to batch for each prompt and each ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts guiding what not to include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetPipeline.__call__.example">

Examples:
```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
... )
>>> image = np.array(image)

>>> # get canny image
>>> image = cv2.Canny(image, 100, 200)
>>> image = image[:, :, None]
>>> image = np.concatenate([image, image, image], axis=2)
>>> canny_image = Image.fromarray(image)

>>> # load control net and stable diffusion v1-5
>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
>>> pipe = StableDiffusionControlNetPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> # speed up diffusion process with faster scheduler and memory optimization
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
>>> # remove following line if xformers is not installed
>>> pipe.enable_xformers_memory_efficient_attention()

>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> generator = torch.manual_seed(0)
>>> image = pipe(
...     "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionControlNetPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionControlNetPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionControlNetPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
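
A short sketch of how this is typically combined with a larger `num_images_per_prompt`. The control image URL points to a pre-made canny edge map assumed to be hosted in the Diffusers test assets; swap in any conditioning image of your own.

```py
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Pre-computed canny edge map (assumed test asset; any edge-detected image works).
canny_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
)

# Decode the latents slice by slice so the larger batch fits in memory.
pipe.enable_vae_slicing()
images = pipe("a bird", image=canny_image, num_images_per_prompt=4, num_inference_steps=20).images
```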


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionControlNetPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionControlNetPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionControlNetPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionControlNetPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function uses `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function uses `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetPipeline.load_textual_inversion.example-2">

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
locally:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet.py#L298</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
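
The sketch below shows one common pattern: encode the prompt once, then reuse the returned embeddings across calls by passing `prompt_embeds`/`negative_prompt_embeds` instead of raw strings. The control image URL is an assumed test asset; the rest follows the pipeline's documented arguments.

```py
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Assumed test asset: a pre-computed canny edge map.
canny_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
)

# Encode once; the second tensor is the negative (unconditional) embedding.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a bird on a branch, detailed, best quality",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# Reuse the cached embeddings instead of re-encoding the prompt string.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=canny_image,
    num_inference_steps=20,
).images[0]
```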




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionControlNetPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet.py#L850</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
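
For reference, here is a standalone sketch of the sinusoidal embedding this helper computes (mirroring the common LCM-style implementation; the constants are an assumption rather than a verbatim copy of the pipeline source). It is only relevant when the UNet was trained with a guidance-conditioned `time_cond_proj_dim`.

```py
import torch


def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32) -> torch.Tensor:
    """Sinusoidal embedding of a batch of guidance scales, shape (len(w), embedding_dim)."""
    w = w * 1000.0
    half_dim = embedding_dim // 2
    emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
    emb = w.to(dtype)[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:  # zero-pad odd embedding sizes
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb


print(guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=256).shape)  # torch.Size([1, 256])
```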








</div></div>

## StableDiffusionControlNetImg2ImgPipeline[[diffusers.StableDiffusionControlNetImg2ImgPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionControlNetImg2ImgPipeline</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py#L140</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
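
As an illustration, the sketch below loads the same pipeline from a single-file checkpoint. The local `.safetensors` path is hypothetical; the ControlNet is passed explicitly because it is not part of a single-file Stable Diffusion checkpoint.

```py
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)

# Hypothetical local single-file SD 1.5 checkpoint (.ckpt or .safetensors).
pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    "./v1-5-pruned-emaonly.safetensors", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
```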





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py#L905</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 0.8"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The initial image to be used as the starting point for the image generation process. Can also accept
  image latents as `image`, and if passing latents directly they are not encoded again.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts guiding what not to include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 0.8) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetImg2ImgPipeline.__call__.example">

Examples:
```py
>>> # !pip install opencv-python transformers accelerate
>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
>>> from diffusers.utils import load_image
>>> import numpy as np
>>> import torch

>>> import cv2
>>> from PIL import Image

>>> # download an image
>>> image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
... )
>>> np_image = np.array(image)

>>> # get canny image
>>> np_image = cv2.Canny(np_image, 100, 200)
>>> np_image = np_image[:, :, None]
>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2)
>>> canny_image = Image.fromarray(np_image)

>>> # load control net and stable diffusion v1-5
>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> # speed up diffusion process with faster scheduler and memory optimization
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> generator = torch.manual_seed(0)
>>> image = pipe(
...     "futuristic-looking woman",
...     num_inference_steps=20,
...     generator=generator,
...     image=image,
...     control_image=canny_image,
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetImg2ImgPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetImg2ImgPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function uses `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function uses `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetImg2ImgPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetImg2ImgPipeline.load_textual_inversion.example-2">

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
locally:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionControlNetImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py#L276</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
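
A minimal sketch of precomputing embeddings with this method and reusing them through the `prompt_embeds`/`negative_prompt_embeds` arguments of the pipeline call, assuming it returns a `(prompt_embeds, negative_prompt_embeds)` tuple as in other Stable Diffusion pipelines:

```py
import torch
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Encode once, then reuse the embeddings across several generations
# by passing them as `prompt_embeds` / `negative_prompt_embeds`.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a futuristic cityscape at dusk",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)
```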




</div></div>

## StableDiffusionControlNetInpaintPipeline[[diffusers.StableDiffusionControlNetInpaintPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionControlNetInpaintPipeline</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py#L128</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet.ControlNetModel, typing.List[diffusers.models.controlnets.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet.ControlNetModel], diffusers.models.controlnets.multicontrolnet.MultiControlNetModel]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) or `List[ControlNetModel]`) --
  Provides additional conditioning to the `unet` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image inpainting using Stable Diffusion with ControlNet guidance.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters

> [!TIP]
> This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting
> ([stable-diffusion-v1-5/stable-diffusion-inpainting](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-inpainting))
> as well as default text-to-image Stable Diffusion checkpoints
> ([stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5)).
> Default text-to-image Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned
> on those, such as
> [lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py#L994</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 0.5"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, NumPy array or tensor representing an image batch to be used as the starting point. For both
  NumPy array and PyTorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a NumPy array or
  a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but latents passed directly are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, NumPy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a NumPy array or PyTorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a PyTorch tensor would be `(B, 1, H, W)`, `(B,
  H, W)`, `(1, H, W)`, or `(H, W)`. For a NumPy array, it would be `(B, H, W, 1)`, `(B, H, W)`, `(H,
  W, 1)`, or `(H, W)`.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[List[torch.Tensor]]`, or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image default to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all masked areas, and then expand that region
  based on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area
  before resizing to the original image size for inpainting. This is useful when the masked area is small
  while the image is large and contains information irrelevant for inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 0.5) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **guess_mode** (`bool`, *optional*, defaults to `False`) --
  The ControlNet encoder tries to recognize the content of the input image even if you remove all
  prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetInpaintPipeline.__call__.example">

Examples:
```py
>>> # !pip install transformers accelerate opencv-python
>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler
>>> from diffusers.utils import load_image
>>> from PIL import Image
>>> import numpy as np
>>> import cv2
>>> import torch

>>> init_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
... )
>>> init_image = init_image.resize((512, 512))

>>> generator = torch.Generator(device="cpu").manual_seed(1)

>>> mask_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
... )
>>> mask_image = mask_image.resize((512, 512))


>>> def make_canny_condition(image):
...     image = np.array(image)
...     image = cv2.Canny(image, 100, 200)
...     image = image[:, :, None]
...     image = np.concatenate([image, image, image], axis=2)
...     image = Image.fromarray(image)
...     return image


>>> control_image = make_canny_condition(init_image)

>>> controlnet = ControlNetModel.from_pretrained(
...     "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> image = pipe(
...     "a handsome man with ray-ban sunglasses",
...     num_inference_steps=20,
...     generator=generator,
...     eta=1.0,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA)
> from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient so you won't
> need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious
> slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetInpaintPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
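
A minimal sketch of enabling it before decoding a larger batch (the checkpoint and prompt below are only placeholders):

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")

>>> # Decode the batch of latents one image at a time to lower peak memory.
>>> pipe.enable_vae_slicing()
>>> images = pipe(["a photo of an astronaut riding a horse on mars"] * 4, num_inference_steps=30).images
```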


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetInpaintPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function will use `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function will use `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetInpaintPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
locally:

<ExampleCodeBlock anchor="diffusers.StableDiffusionControlNetInpaintPipeline.load_textual_inversion.example-2">

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionControlNetInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py#L282</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet.md" />

### PixArt-α
https://huggingface.co/docs/diffusers/main/api/pipelines/pixart.md

# PixArt-α

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/header_collage.png)

[PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis](https://huggingface.co/papers/2310.00426) is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li.

The abstract from the paper is:

*The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α's training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5's training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch.*

You can find the original codebase at [PixArt-alpha/PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha) and all the available checkpoints at [PixArt-alpha](https://huggingface.co/PixArt-alpha).

Some notes about this pipeline:

* It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as [DiT](./dit).
* It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details.
* It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found [here](https://github.com/PixArt-alpha/PixArt-alpha/blob/08fbbd281ec96866109bdd2cdb75f2f58fb17610/diffusion/data/datasets/utils.py); a short sketch follows this list.
* It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them.
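
A minimal sketch of generating a non-square image; with the default `use_resolution_binning=True`, the requested size is mapped to the closest supported aspect-ratio bucket and the result is resized back afterwards (the exact resolution below is only an example):

```python
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

# Request a wide image; resolution binning snaps it to the nearest recommended bucket.
prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt, height=768, width=1344).images[0]
image.save("cactus_wide.png")
```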

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Inference with under 8GB GPU VRAM

Run the [PixArtAlphaPipeline](/docs/diffusers/main/en/api/pipelines/pixart#diffusers.PixArtAlphaPipeline) with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let's walk through a full-fledged example.

First, install the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library:

```bash
pip install -U bitsandbytes
```

Then load the text encoder in 8-bit:

```python
from transformers import T5EncoderModel
from diffusers import PixArtAlphaPipeline
import torch

text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_8bit=True,
    device_map="auto",

)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="auto"
)
```

Now, use the `pipe` to encode a prompt:

```python
with torch.no_grad():
    prompt = "cute cat"
    prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)
```

Since the text embeddings have been computed, remove the `text_encoder` and `pipe` from memory to free up some GPU VRAM:

```python
import gc

def flush():
    gc.collect()
    torch.cuda.empty_cache()

del text_encoder
del pipe
flush()
```

Then compute the latents with the prompt embeddings as inputs:

```python
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=None,
    torch_dtype=torch.float16,
).to("cuda")

latents = pipe(
    negative_prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_images_per_prompt=1,
    output_type="latent",
).images

del pipe.transformer
flush()
```

> [!TIP]
> Notice that while initializing `pipe`, you're setting `text_encoder` to `None` so that it's not loaded.

Once the latents are computed, pass them to the VAE to decode into a real image:

```python
with torch.no_grad():
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]
image = pipe.image_processor.postprocess(image, output_type="pil")[0]
image.save("cat.png")
```

By deleting components you aren't using and flushing the GPU VRAM, you should be able to run [PixArtAlphaPipeline](/docs/diffusers/main/en/api/pipelines/pixart#diffusers.PixArtAlphaPipeline) with under 8GB GPU VRAM.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/8bits_cat.png)

If you want a report of your memory usage, run this [script](https://gist.github.com/sayakpaul/3ae0f847001d342af27018a96f467e4e).
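
For a quick number rather than a full report, a minimal sketch using PyTorch's built-in counter (assuming a CUDA device):

```python
import torch

# Peak GPU memory allocated by tensors since the start of the process, in GB.
print(f"Max memory allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```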

> [!WARNING]
> Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It's recommended to compare the outputs with and without 8-bit.

While loading the `text_encoder`, you set `load_in_8bit` to `True`. You could also specify `load_in_4bit` to bring your memory requirements down even further to under 7GB.
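
A minimal sketch of the 4-bit variant, mirroring the 8-bit snippet above (the same quality caveat applies, so compare outputs):

```python
from transformers import T5EncoderModel
from diffusers import PixArtAlphaPipeline

# Same setup as above, but with 4-bit weights for the text encoder.
text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_4bit=True,
    device_map="auto",
)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="auto",
)
```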

## PixArtAlphaPipeline[[diffusers.PixArtAlphaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PixArtAlphaPipeline</name><anchor>diffusers.PixArtAlphaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py#L241</source><parameters>[{"name": "tokenizer", "val": ": T5Tokenizer"}, {"name": "text_encoder", "val": ": T5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "transformer", "val": ": PixArtTransformer2DModel"}, {"name": "scheduler", "val": ": DPMSolverMultistepScheduler"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`T5EncoderModel`) --
  Frozen text-encoder. PixArt-Alpha uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
- **tokenizer** (`T5Tokenizer`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **transformer** ([PixArtTransformer2DModel](/docs/diffusers/main/en/api/models/pixart_transformer2d#diffusers.PixArtTransformer2DModel)) --
  A text conditioned `PixArtTransformer2DModel` to denoise the encoded image latents. Initially published as
  [`Transformer2DModel`](https://huggingface.co/PixArt-alpha/PixArt-XL-2-1024-MS/blob/main/transformer/config.json#L2)
  in the config, but the mismatch can be ignored.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using PixArt-Alpha.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.PixArtAlphaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py#L686</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "clean_caption", "val": ": bool = True"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 120"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool`, *optional*, defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **max_sequence_length** (`int` defaults to 120) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.PixArtAlphaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import PixArtAlphaPipeline

>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too.
>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16)
>>> # Enable memory optimizations.
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A small cactus with a happy face in the Sahara desert."
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.PixArtAlphaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py#L303</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 120"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  PixArt-Alpha, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether or not to use classifier-free guidance.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images that should be generated per prompt.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For PixArt-Alpha, these should be the embeddings of the ""
  string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 120) -- Maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
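
Example (a minimal sketch, not part of the original docstring): `encode_prompt` can be used to pre-compute embeddings once and reuse them across calls. The return order assumed here (`prompt_embeds`, `prompt_attention_mask`, `negative_prompt_embeds`, `negative_prompt_attention_mask`) follows the pipeline source; double-check it against your installed version.

```py
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Pre-compute the text embeddings once.
(
    prompt_embeds,
    prompt_attention_mask,
    negative_prompt_embeds,
    negative_prompt_attention_mask,
) = pipe.encode_prompt(
    prompt="A small cactus with a happy face in the Sahara desert.",
    negative_prompt="",
    max_sequence_length=120,
)

# Reuse the cached embeddings instead of passing `prompt` again.
image = pipe(
    prompt_embeds=prompt_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_embeds=negative_prompt_embeds,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
).images[0]
```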




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/pixart.md" />

### VisualCloze
https://huggingface.co/docs/diffusers/main/api/pipelines/visualcloze.md


# VisualCloze

[VisualCloze: A Universal Image Generation Framework via Visual In-Context Learning](https://huggingface.co/papers/2504.07960) is an in-context learning-based universal image generation framework with the following key capabilities:
1. Support for various in-domain tasks
2. Generalization to unseen tasks through in-context learning
3. Unification of multiple tasks into one step, generating both the target image and intermediate results
4. Support for reverse-engineering conditions from target images

## Overview

The abstract from the paper is:

*Recent progress in diffusion models significantly advances various image generation tasks. However, the current mainstream approach remains focused on building task-specific models, which have limited efficiency when supporting a wide range of different needs. While universal models attempt to address this limitation, they face critical challenges, including generalizable task instruction, appropriate task distributions, and unified architectural design. To tackle these challenges, we propose VisualCloze, a universal image generation framework, which supports a wide range of in-domain tasks, generalization to unseen ones, unseen unification of multiple tasks, and reverse generation. Unlike existing methods that rely on language-based task instruction, leading to task ambiguity and weak generalization, we integrate visual in-context learning, allowing models to identify tasks from visual demonstrations. Meanwhile, the inherent sparsity of visual task distributions hampers the learning of transferable knowledge across tasks. To this end, we introduce Graph200K, a graph-structured dataset that establishes various interrelated tasks, enhancing task density and transferable knowledge. Furthermore, we uncover that our unified image generation formulation shared a consistent objective with image infilling, enabling us to leverage the strong generative priors of pre-trained infilling models without modifying the architectures. The codes, dataset, and models are available at https://visualcloze.github.io.*

## Inference

### Model loading

VisualCloze is a two-stage cascade pipeline, containing `VisualClozeGenerationPipeline` and `VisualClozeUpsamplingPipeline`.
- In `VisualClozeGenerationPipeline`, each image is downsampled before the images are concatenated into a grid layout, avoiding excessively high resolutions. VisualCloze releases two models suitable for diffusers, i.e., [VisualClozePipeline-384](https://huggingface.co/VisualCloze/VisualClozePipeline-384) and [VisualClozePipeline-512](https://huggingface.co/VisualCloze/VisualClozePipeline-512), which downsample images to resolutions of 384 and 512, respectively.
- `VisualClozeUpsamplingPipeline` uses [SDEdit](https://huggingface.co/papers/2108.01073) to enable high-resolution image synthesis.

The `VisualClozePipeline` integrates both stages to support convenient end-to-end sampling, while also allowing users to utilize each pipeline independently as needed.
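
For orientation, a minimal loading sketch for the two released checkpoints is shown below; passing `resolution=512` for the 512 variant is an assumption inferred from the checkpoint name.

```python
import torch
from diffusers import VisualClozePipeline

# End-to-end pipeline (generation + SDEdit upsampling) with 384-resolution grid cells.
pipe_384 = VisualClozePipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
).to("cuda")

# The 512 variant; resolution=512 is assumed to match this checkpoint.
pipe_512 = VisualClozePipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-512", resolution=512, torch_dtype=torch.bfloat16
).to("cuda")
```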

### Input Specifications

#### Task and Content Prompts
- Task prompt: Required to describe the generation task intention
- Content prompt: Optional description or caption of the target image
- When content prompt is not needed, pass `None`
- For batch inference, pass `List[str|None]`

#### Image Input Format
- Format: `List[List[Image|None]]`
- Structure:
  - All rows except the last represent in-context examples
  - Last row represents the current query (target image set to `None`)
- For batch inference, pass `List[List[List[Image|None]]]`

#### Resolution Control
- Default behavior:
  - Initial generation in the first stage: an area of `pipe.resolution`² pixels
  - Upsampling in the second stage: 3x factor
- Custom resolution: adjust it with the `upsampling_height` and `upsampling_width` parameters (see the sketch below)
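
The sketch below illustrates these input conventions for a batch of two samples, reusing the example images from the mask-to-image example further down; the nesting of the returned `.images` follows the single-sample examples (i.e., `images[i][j]`).

```python
import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

base = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze"
example_mask = load_image(f"{base}/visualcloze_mask2image_incontext-example-1_mask.jpg")
example_image = load_image(f"{base}/visualcloze_mask2image_incontext-example-1_image.jpg")
query_mask = load_image(f"{base}/visualcloze_mask2image_query_mask.jpg")

# One sample: every row except the last is an in-context example; the last row is the
# query, with the target image set to None.
sample = [
    [example_mask, example_image],
    [query_mask, None],
]

task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."

# Batch of two samples: images become List[List[List[Image|None]]],
# prompts become List[str] / List[str|None] (None = no content prompt for that sample).
images = [sample, sample]
task_prompts = [task_prompt, task_prompt]
content_prompts = ["A golden eagle perched on a rocky outcrop.", None]

results = pipe(
    task_prompt=task_prompts,
    content_prompt=content_prompts,
    image=images,
    upsampling_width=1344,   # custom output resolution instead of the default 3x upsampling
    upsampling_height=768,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images
```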

### Examples

For comprehensive examples covering a wide range of tasks, please refer to the [Online Demo](https://huggingface.co/spaces/VisualCloze/VisualCloze) and [GitHub Repository](https://github.com/lzyhha/VisualCloze). Below are simple examples for three cases: mask-to-image conversion, edge detection, and subject-driven generation.

#### Example for mask2image

```python
import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained("VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Load in-context images (make sure the paths are correct and accessible)
image_paths = [
    # in-context examples
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg'),
    ],
    # query with the target image
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg'),
        None, # No image needed for the target image
    ],
]

# Task and content prompt
task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
content_prompt = """Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. 
The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. 
Its plumage is a mix of dark brown and golden hues, with intricate feather details. 
The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. 
The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, 
soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, 
tranquil, majestic, wildlife photography."""

# Run the pipeline
image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=1344,
    upsampling_height=768,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

# Save the resulting image
image_result.save("visualcloze.png")
```

#### Example for edge-detection

```python
import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained("VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Load in-context images (make sure the paths are correct and accessible)
image_paths = [
    # in-context examples
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-1_image.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-1_edge.jpg'),
    ],
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-2_image.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-2_edge.jpg'),
    ],
    # query with the target image
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_query_image.jpg'),
        None, # No image needed for the target image
    ],
]

# Task and content prompt
task_prompt = "Each row illustrates a pathway from [IMAGE1] a sharp and beautifully composed photograph to [IMAGE2] edge map with natural well-connected outlines using a clear logical task."
content_prompt = ""

# Run the pipeline
image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=864,
    upsampling_height=1152,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

# Save the resulting image
image_result.save("visualcloze.png")
```

#### Example for subject-driven generation

```python
import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained("VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Load in-context images (make sure the paths are correct and accessible)
image_paths = [
    # in-context examples
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-1_reference.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-1_depth.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-1_image.jpg'),
    ],
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-2_reference.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-2_depth.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-2_image.jpg'),
    ],
    # query with the target image
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_query_reference.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_query_depth.jpg'),
        None, # No image needed for the target image
    ],
]

# Task and content prompt
task_prompt = """Each row describes a process that begins with [IMAGE1] an image containing the key object, 
[IMAGE2] depth map revealing gray-toned spatial layers and results in 
[IMAGE3] an image with artistic quality, a high-quality image with exceptional detail."""
content_prompt = """A vintage porcelain collector's item. Beneath a blossoming cherry tree in early spring, 
this treasure is photographed up close, with soft pink petals drifting through the air and vibrant blossoms framing the scene."""

# Run the pipeline
image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=1024,
    upsampling_height=1024,
    upsampling_strength=0.2,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

# Save the resulting image
image_result.save("visualcloze.png")
```

#### Utilize each pipeline independently 

```python
import torch
from diffusers import VisualClozeGenerationPipeline, FluxFillPipeline as VisualClozeUpsamplingPipeline
from diffusers.utils import load_image
from PIL import Image

pipe = VisualClozeGenerationPipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image_paths = [
    # in-context examples
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg"
        ),
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg"
        ),
    ],
    # query with the target image
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg"
        ),
        None,  # No image needed for the target image
    ],
]
task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
content_prompt = "Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."

# Stage 1: Generate initial image
image = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0][0]

# Stage 2 (optional): Upsample the generated image
pipe_upsample = VisualClozeUpsamplingPipeline.from_pipe(pipe)
pipe_upsample.to("cuda")

mask_image = Image.new("RGB", image.size, (255, 255, 255))

image = pipe_upsample(
    image=image,
    mask_image=mask_image,
    prompt=content_prompt,
    width=1344,
    height=768,
    strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

image.save("visualcloze.png")
```

## VisualClozePipeline[[diffusers.VisualClozePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.VisualClozePipeline</name><anchor>diffusers.VisualClozePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_combined.py#L89</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "resolution", "val": ": int = 384"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).
- **resolution** (`int`, *optional*, defaults to 384) --
  The resolution of each image when concatenating images from the query and in-context examples.</paramsdesc><paramgroups>0</paramgroups></docstring>

The VisualCloze pipeline for image generation with visual context. Reference:
https://github.com/lzyhha/VisualCloze/tree/main. This pipeline is designed to generate images based on visual
in-context examples.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.VisualClozePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_combined.py#L249</source><parameters>[{"name": "task_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "content_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "upsampling_height", "val": ": typing.Optional[int] = None"}, {"name": "upsampling_width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 30.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "upsampling_strength", "val": ": float = 1.0"}]</parameters><paramsdesc>- **task_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the task intention.
- **content_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the content or caption of the target image to be generated.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it is a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`.
- **upsampling_height** (`int`, *optional*) --
  The height in pixels of the generated image (i.e., output image) after upsampling via SDEdit. By
  default, the image is upsampled by a factor of three, and the base resolution is determined by the
  resolution parameter of the pipeline. When only one of `upsampling_height` or `upsampling_width` is
  specified, the other will be automatically set based on the aspect ratio.
- **upsampling_width** (`int`, *optional*) --
  The width in pixels of the generated image (i.e., output image) after upsampling via SDEdit. By
  default, the image is upsampled by a factor of three, and the base resolution is determined by the
  resolution parameter of the pipeline. When only one of `upsampling_height` or `upsampling_width` is
  specified, the other will be automatically set based on the aspect ratio.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 30.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to 512) -- Maximum sequence length to use with the `prompt`.
- **upsampling_strength** (`float`, *optional*, defaults to 1.0) --
  Indicates the extent to transform the reference `image` when upsampling the results. Must be between 0 and
  1. The generated image is used as a starting point and more noise is added the higher the
  `upsampling_strength`. The number of denoising steps depends on the amount of noise initially added.
  When `upsampling_strength` is 1, added noise is maximum and the denoising process runs for the full
  number of iterations specified in `num_inference_steps`. A value of 0 skips the upsampling step and
  outputs the results at the resolution of `self.resolution`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the VisualCloze pipeline for generation.



<ExampleCodeBlock anchor="diffusers.VisualClozePipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import VisualClozePipeline
>>> from diffusers.utils import load_image

>>> image_paths = [
...     # in-context examples
...     [
...         load_image(
...             "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg"
...         ),
...         load_image(
...             "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg"
...         ),
...     ],
...     # query with the target image
...     [
...         load_image(
...             "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg"
...         ),
...         None,  # No image needed for the target image
...     ],
... ]
>>> task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
>>> content_prompt = "Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."
>>> pipe = VisualClozePipeline.from_pretrained(
...     "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = pipe(
...     task_prompt=task_prompt,
...     content_prompt=content_prompt,
...     image=image_paths,
...     upsampling_width=1344,
...     upsampling_height=768,
...     upsampling_strength=0.4,
...     guidance_scale=30,
...     num_inference_steps=30,
...     max_sequence_length=512,
...     generator=torch.Generator("cpu").manual_seed(0),
... ).images[0][0]
>>> image.save("visualcloze.png")
```

</ExampleCodeBlock>







</div></div>

## VisualClozeGenerationPipeline[[diffusers.VisualClozeGenerationPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.VisualClozeGenerationPipeline</name><anchor>diffusers.VisualClozeGenerationPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L119</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": T5EncoderModel"}, {"name": "tokenizer_2", "val": ": T5TokenizerFast"}, {"name": "transformer", "val": ": FluxTransformer2DModel"}, {"name": "resolution", "val": ": int = 384"}]</parameters><paramsdesc>- **transformer** ([FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`T5TokenizerFast`) --
  Second Tokenizer of class
  [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast).
- **resolution** (`int`, *optional*, defaults to 384) --
  The resolution of each image when concatenating images from the query and in-context examples.</paramsdesc><paramgroups>0</paramgroups></docstring>

The VisualCloze pipeline for image generation with visual context. Reference:
https://github.com/lzyhha/VisualCloze/tree/main. This pipeline is designed to generate images based on visual
in-context examples.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.VisualClozeGenerationPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L708</source><parameters>[{"name": "task_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "content_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 30.0"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **task_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the task intention.
- **content_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the content or caption of the target image to be generated.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it is a tensor or a
  list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 30.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.flux.FluxPipelineOutput` instead of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 512) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.flux.FluxPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.flux.FluxPipelineOutput` if `return_dict`
is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated
images.</retdesc></docstring>

Function invoked when calling the VisualCloze pipeline for generation.



<ExampleCodeBlock anchor="diffusers.VisualClozeGenerationPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers import VisualClozeGenerationPipeline, FluxFillPipeline as VisualClozeUpsamplingPipeline
>>> from diffusers.utils import load_image
>>> from PIL import Image

>>> image_paths = [
...     # in-context examples
...     [
...         load_image(
...             "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg"
...         ),
...         load_image(
...             "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg"
...         ),
...     ],
...     # query with the target image
...     [
...         load_image(
...             "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg"
...         ),
...         None,  # No image needed for the target image
...     ],
... ]
>>> task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
>>> content_prompt = "Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."
>>> pipe = VisualClozeGenerationPipeline.from_pretrained(
...     "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = pipe(
...     task_prompt=task_prompt,
...     content_prompt=content_prompt,
...     image=image_paths,
...     guidance_scale=30,
...     num_inference_steps=30,
...     max_sequence_length=512,
...     generator=torch.Generator("cpu").manual_seed(0),
... ).images[0][0]

>>> # optional, upsampling the generated image
>>> pipe_upsample = VisualClozeUpsamplingPipeline.from_pipe(pipe)
>>> pipe_upsample.to("cuda")

>>> mask_image = Image.new("RGB", image.size, (255, 255, 255))

>>> image = pipe_upsample(
...     image=image,
...     mask_image=mask_image,
...     prompt=content_prompt,
...     width=1344,
...     height=768,
...     strength=0.4,
...     guidance_scale=30,
...     num_inference_steps=30,
...     max_sequence_length=512,
...     generator=torch.Generator("cpu").manual_seed(0),
... ).images[0]

>>> image.save("visualcloze.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.VisualClozeGenerationPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L536</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.VisualClozeGenerationPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L563</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.VisualClozeGenerationPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L523</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.VisualClozeGenerationPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L549</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
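
A minimal usage sketch (not part of the original docstring) for these memory-saving toggles:

```py
import torch
from diffusers import VisualClozeGenerationPipeline

pipe = VisualClozeGenerationPipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
).to("cuda")

# Trade some speed for lower peak memory during VAE decoding.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run pipe(...) as in the examples above ...

# Restore single-pass decoding.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```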


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.VisualClozeGenerationPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/visualcloze/pipeline_visualcloze_generation.py#L288</source><parameters>[{"name": "layout_prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "task_prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "content_prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **layout_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the number of in-context examples and the number of images involved in
  the task.
- **task_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the task intention.
- **content_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to define the content or caption of the target image to be generated.
- **device** (`torch.device`) --
  The torch device to place the resulting embeddings on.
- **num_images_per_prompt** (`int`) --
  The number of images that should be generated per prompt.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/visualcloze.md" />

### I2VGen-XL
https://huggingface.co/docs/diffusers/main/api/pipelines/i2vgenxl.md

# I2VGen-XL

[I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models](https://hf.co/papers/2311.04145.pdf) by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qing, Xiang Wang, Deli Zhao, and Jingren Zhou.

The abstract from the paper is:

*Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video's details by incorporating an additional brief text and improves the resolution to 1280×720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at [this https URL](https://i2vgen-xl.github.io/).*

The original codebase can be found [here](https://github.com/ali-vilab/i2vgen-xl/). The model checkpoints can be found [here](https://huggingface.co/ali-vilab/).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines. Also, to learn more about reducing the memory usage of this pipeline, refer to the ["Reduce memory usage"](../../using-diffusers/svd#reduce-memory-usage) section.

Sample output with I2VGenXL:

<table>
    <tr>
        <td><center>
        library.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/i2vgen-xl-example.gif"
            alt="library"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

## Notes

* I2VGenXL always uses a `clip_skip` value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP.
* It generates videos whose quality is often on par with [Stable Video Diffusion](../../using-diffusers/svd) (SVD).
* Unlike SVD, it additionally accepts text prompts as inputs.
* It can generate higher resolution videos.
* When using the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) (the default for this pipeline), fewer than 50 inference steps lead to bad results.
* This implementation is a 1-stage variant of I2VGenXL. The main figure in the [I2VGen-XL](https://huggingface.co/papers/2311.04145) paper shows a 2-stage variant; however, the 1-stage variant works well. See [this discussion](https://github.com/huggingface/diffusers/discussions/7952) for more details.

## I2VGenXLPipeline[[diffusers.I2VGenXLPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.I2VGenXLPipeline</name><anchor>diffusers.I2VGenXLPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/i2vgen_xl/pipeline_i2vgen_xl.py#L99</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "unet", "val": ": I2VGenXLUNet"}, {"name": "scheduler", "val": ": DDIMScheduler"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.I2VGenXLPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/i2vgen_xl/pipeline_i2vgen_xl.py#L510</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = 704"}, {"name": "width", "val": ": typing.Optional[int] = 1280"}, {"name": "target_fps", "val": ": typing.Optional[int] = 16"}, {"name": "num_frames", "val": ": int = 16"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 9.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "decode_chunk_size", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = 1"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.Tensor`) --
  Image or images to guide image generation. If you provide a tensor, it needs to be compatible with
  [`CLIPImageProcessor`](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json).
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **target_fps** (`int`, *optional*) --
  Frames per second. The rate at which the generated images shall be exported to a video after
  generation. This is also used as a "micro-condition" during generation.
- **num_frames** (`int`, *optional*) --
  The number of video frames to generate.
- **num_inference_steps** (`int`, *optional*) --
  The number of denoising steps.
- **guidance_scale** (`float`, *optional*, defaults to 9.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **eta** (`float`, *optional*) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **num_videos_per_prompt** (`int`, *optional*) --
  The number of videos to generate per prompt.
- **decode_chunk_size** (`int`, *optional*, defaults to 1) --
  The number of frames to decode at a time. The higher the chunk size, the higher the temporal
  consistency between frames, but also the higher the memory consumption. By default, one frame is
  decoded at a time to reduce memory usage; increase `decode_chunk_size` for better temporal consistency
  at the cost of more memory.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [I2VGenXLPipelineOutput](/docs/diffusers/main/en/api/pipelines/i2vgenxl#diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput](/docs/diffusers/main/en/api/pipelines/i2vgenxl#diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput](/docs/diffusers/main/en/api/pipelines/i2vgenxl#diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput) is
returned, otherwise a `tuple` is returned where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for image-to-video generation with [I2VGenXLPipeline](/docs/diffusers/main/en/api/pipelines/i2vgenxl#diffusers.I2VGenXLPipeline).



<ExampleCodeBlock anchor="diffusers.I2VGenXLPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import I2VGenXLPipeline
>>> from diffusers.utils import export_to_gif, load_image

>>> pipeline = I2VGenXLPipeline.from_pretrained(
...     "ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16"
... )
>>> pipeline.enable_model_cpu_offload()

>>> image_url = (
...     "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png"
... )
>>> image = load_image(image_url).convert("RGB")

>>> prompt = "Papers were floating in the air on a table in the library"
>>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms"
>>> generator = torch.manual_seed(8888)

>>> frames = pipeline(
...     prompt=prompt,
...     image=image,
...     num_inference_steps=50,
...     negative_prompt=negative_prompt,
...     guidance_scale=9.0,
...     generator=generator,
... ).frames[0]
>>> video_path = export_to_gif(frames, "i2v.gif")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.I2VGenXLPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/i2vgen_xl/pipeline_i2vgen_xl.py#L162</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_videos_per_prompt", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** -- (`torch.device`):
  torch device
- **num_videos_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## I2VGenXLPipelineOutput[[diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput</name><anchor>diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/i2vgen_xl/pipeline_i2vgen_xl.py#L84</source><parameters>[{"name": "frames", "val": ": typing.Union[torch.Tensor, numpy.ndarray, typing.List[typing.List[PIL.Image.Image]]]"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or `List[List[PIL.Image.Image]]`) --
  List of video outputs. It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of
  shape `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image-to-video pipeline.


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/i2vgenxl.md" />

### Skyreels V2
https://huggingface.co/docs/diffusers/main/api/pipelines/skyreels_v2.md


<div style="float: right;">
  <div class="flex flex-wrap space-x-1">
    <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference" target="_blank" rel="noopener">
      <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
    </a>
  </div>
</div>

# SkyReels-V2: Infinite-length Film Generative Model

[SkyReels-V2](https://huggingface.co/papers/2504.13074) is an infinite-length film generative model by the SkyReels Team from Skywork AI.

*Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation. To address these limitations, we propose SkyReels-V2, an Infinite-length Film Generative Model, that synergizes Multi-modal Large Language Model (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing Framework. Firstly, we design a comprehensive structural representation of video that combines the general descriptions by the Multi-modal LLM and the detailed shot language by sub-expert models. Aided with human annotation, we then train a unified Video Captioner, named SkyCaptioner-V1, to efficiently label the video data. Secondly, we establish progressive-resolution pretraining for the fundamental video generation, followed by a four-stage post-training enhancement: Initial concept-balanced Supervised Fine-Tuning (SFT) improves baseline quality; Motion-specific Reinforcement Learning (RL) training with human-annotated and synthetic distortion data addresses dynamic artifacts; Our diffusion forcing framework with non-decreasing noise schedules enables long-video synthesis in an efficient search space; Final high-quality SFT refines visual fidelity. All the code and models are available at [this https URL](https://github.com/SkyworkAI/SkyReels-V2).*

You can find all the original SkyReels-V2 checkpoints under the [Skywork](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) organization.

The following SkyReels-V2 models are supported in Diffusers:
- [SkyReels-V2 DF 1.3B - 540P](https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers)
- [SkyReels-V2 DF 14B - 540P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P-Diffusers)
- [SkyReels-V2 DF 14B - 720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P-Diffusers)
- [SkyReels-V2 T2V 14B - 540P](https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P-Diffusers)
- [SkyReels-V2 T2V 14B - 720P](https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P-Diffusers)
- [SkyReels-V2 I2V 1.3B - 540P](https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P-Diffusers)
- [SkyReels-V2 I2V 14B - 540P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P-Diffusers)
- [SkyReels-V2 I2V 14B - 720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P-Diffusers)
- [SkyReels-V2 FLF2V 1.3B - 540P](https://huggingface.co/Skywork/SkyReels-V2-FLF2V-1.3B-540P-Diffusers)

> [!TIP]
> Click on the SkyReels-V2 models in the right sidebar for more examples of video generation.

### A _Visual_ Demonstration

The example below has the following parameters:

- `base_num_frames=97`
- `num_frames=97`
- `num_inference_steps=30`
- `ar_step=5`
- `causal_block_size=5`

With `vae_scale_factor_temporal=4`, expect `5` blocks of `5` frames each as calculated by:

`num_latent_frames: (97-1)//vae_scale_factor_temporal+1 = 25 frames -> 5 blocks of 5 frames each`

And the maximum context length in the latent space is calculated with `base_num_latent_frames`:

`base_num_latent_frames = (97-1)//vae_scale_factor_temporal+1 = 25 -> 25//5 = 5 blocks`
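
These quantities are plain arithmetic and can be sanity-checked directly (illustration only, not pipeline code):

```py
num_frames, vae_scale_factor_temporal, causal_block_size = 97, 4, 5

num_latent_frames = (num_frames - 1) // vae_scale_factor_temporal + 1  # 25 latent frames
num_blocks = num_latent_frames // causal_block_size                    # 5 blocks of 5 latent frames each
print(num_latent_frames, num_blocks)  # 25 5
```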

Asynchronous Processing Timeline:
```text
┌─────────────────────────────────────────────────────────────────┐
│ Steps:    1    6   11   16   21   26   31   36   41   46   50   │
│ Block 1: [■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■]                       │
│ Block 2:      [■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■]                  │
│ Block 3:           [■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■]             │
│ Block 4:                [■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■]        │
│ Block 5:                     [■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■]   │
└─────────────────────────────────────────────────────────────────┘
```

For Long Videos (`num_frames` > `base_num_frames`):
`base_num_frames` acts as the "sliding window size" for processing long videos.

Example: `257`-frame video with `base_num_frames=97`, `overlap_history=17`
```text
┌──── Iteration 1 (frames 1-97) ────┐
│ Processing window: 97 frames      │ → 5 blocks,
│ Generates: frames 1-97            │   async processing
└───────────────────────────────────┘
            ┌────── Iteration 2 (frames 81-177) ──────┐
            │ Processing window: 97 frames            │
            │ Overlap: 17 frames (81-97) from prev    │ → 5 blocks,
            │ Generates: frames 98-177                │   async processing
            └─────────────────────────────────────────┘
                        ┌────── Iteration 3 (frames 161-257) ──────┐
                        │ Processing window: 97 frames             │
                        │ Overlap: 17 frames (161-177) from prev   │ → 5 blocks,
                        │ Generates: frames 178-257                │   async processing
                        └──────────────────────────────────────────┘
```
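
The frame ranges in the diagram above can be reproduced with a small helper (a rough sketch for illustration; the pipeline derives its windows internally):

```py
def sliding_windows(num_frames, base_num_frames, overlap_history):
    # Each iteration re-denoises `overlap_history` frames from the previous window
    # and generates `base_num_frames - overlap_history` new frames.
    windows, start, stride = [], 0, base_num_frames - overlap_history
    while True:
        end = min(start + base_num_frames, num_frames)
        windows.append((start + 1, end))  # 1-indexed, inclusive frame range
        if end >= num_frames:
            break
        start += stride
    return windows

print(sliding_windows(257, 97, 17))  # [(1, 97), (81, 177), (161, 257)]
```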

Each iteration independently runs the asynchronous processing with its own `5` blocks.
`base_num_frames` controls:
1. Memory usage (larger window = more VRAM)
2. Model context length (must match training constraints)
3. Number of blocks per iteration (`base_num_latent_frames // causal_block_size`)

Each block takes `30` steps to complete denoising.
Block N starts at step: `1 + (N-1) x ar_step`
Total steps: `30 + (5-1) x 5 = 50` steps
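
The same arithmetic in code (illustration only):

```py
ar_step, num_inference_steps, num_blocks = 5, 30, 5

start_steps = [1 + (n - 1) * ar_step for n in range(1, num_blocks + 1)]
total_steps = num_inference_steps + (num_blocks - 1) * ar_step
print(start_steps, total_steps)  # [1, 6, 11, 16, 21] 50
```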


Synchronous mode (`ar_step=0`) would process all blocks/frames simultaneously:
```text
┌──────────────────────────────────────────────┐
│ Steps:       1            ...            30  │
│ All blocks: [■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■] │
└──────────────────────────────────────────────┘
```
Total steps: `30` steps


An example of how the step matrix is constructed for asynchronous processing:
Given the parameters: (`num_inference_steps=30, flow_shift=8, num_frames=97, ar_step=5, causal_block_size=5`)
```
- num_latent_frames = (97 frames - 1) // (4 temporal downsampling) + 1 = 25
- step_template = [999, 995, 991, 986, 980, 975, 969, 963, 956, 948,
                   941, 932, 922, 912, 901, 888, 874, 859, 841, 822,
                   799, 773, 743, 708, 666, 615, 551, 470, 363, 216]
```

The algorithm creates a `50x25` `step_matrix` where:
```
- Row 1:  [999×5, 999×5, 999×5, 999×5, 999×5]
- Row 2:  [995×5, 999×5, 999×5, 999×5, 999×5]
- Row 3:  [991×5, 999×5, 999×5, 999×5, 999×5]
- ...
- Row 7:  [969×5, 995×5, 999×5, 999×5, 999×5]
- ...
- Row 21: [799×5, 888×5, 941×5, 975×5, 999×5]
- ...
- Row 35: [  0×5, 216×5, 666×5, 822×5, 901×5]
- ...
- Row 42: [  0×5,   0×5,   0×5, 551×5, 773×5]
- ...
- Row 50: [  0×5,   0×5,   0×5,   0×5, 216×5]
```

Detailed Row `6` Analysis:
```
- step_matrix[5]:      [ 975×5,  999×5,   999×5,   999×5,   999×5]
- step_index[5]:       [   6×5,    1×5,     0×5,     0×5,     0×5]
- step_update_mask[5]: [True×5, True×5, False×5, False×5, False×5]
- valid_interval[5]:   (0, 25)
```

Key Pattern: Block `i` lags behind Block `i-1` by exactly `ar_step=5` timesteps, creating the
staggered "diffusion forcing" effect where later blocks condition on cleaner earlier blocks.


### Text-to-Video Generation

The example below demonstrates how to generate a video from text.

<hfoptions id="T2V usage">
<hfoption id="T2V memory">

Refer to the [Reduce memory usage](../../optimization/memory) guide for more details about the various memory saving techniques.

From the original repo:
>You can use --ar_step 5 to enable asynchronous inference. When asynchronous inference, --causal_block_size 5 is recommended while it is not supposed to be set for synchronous generation... Asynchronous inference will take more steps to diffuse the whole sequence which means it will be SLOWER than synchronous mode. In our experiments, asynchronous inference may improve the instruction following and visual consistent performance.

```py
import torch
from diffusers import AutoModel, SkyReelsV2DiffusionForcingPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video


model_id = "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers"
vae = AutoModel.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)

pipeline = SkyReelsV2DiffusionForcingPipeline.from_pretrained(
    model_id,
    vae=vae,
    torch_dtype=torch.bfloat16,
)
pipeline.to("cuda")
flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config, flow_shift=flow_shift)

prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."

output = pipeline(
    prompt=prompt,
    num_inference_steps=30,
    height=544,  # 720 for 720P
    width=960,   # 1280 for 720P
    num_frames=97,
    base_num_frames=97,  # 121 for 720P
    ar_step=5,  # Controls asynchronous inference (0 for synchronous mode)
    causal_block_size=5,  # Number of frames in each block for asynchronous processing
    overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos; 17 for long video generations
    addnoise_condition=20,  # Improves consistency in long video generation
).frames[0]
export_to_video(output, "video.mp4", fps=24, quality=8)
```

</hfoption>
</hfoptions>

### First-Last-Frame-to-Video Generation

The example below demonstrates how to use the image-to-video pipeline to generate a video using a text description, a starting frame, and an ending frame.

<hfoptions id="FLF2V usage">
<hfoption id="usage">

```python
import numpy as np
import torch
import torchvision.transforms.functional as TF
from diffusers import AutoencoderKLWan, SkyReelsV2DiffusionForcingImageToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image


model_id = "Skywork/SkyReels-V2-DF-1.3B-720P-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipeline = SkyReelsV2DiffusionForcingImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, torch_dtype=torch.bfloat16
)
pipeline.to("cuda")
flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V
pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config, flow_shift=flow_shift)

first_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png")
last_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png")

def aspect_ratio_resize(image, pipeline, max_area=720 * 1280):
    aspect_ratio = image.height / image.width
    mod_value = pipeline.vae_scale_factor_spatial * pipeline.transformer.config.patch_size[1]
    height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
    width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
    image = image.resize((width, height))
    return image, height, width

def center_crop_resize(image, height, width):
    # Scale the image so it fully covers the target size, then center crop to match
    # the first frame's dimensions
    resize_ratio = max(width / image.width, height / image.height)
    image = image.resize((round(image.width * resize_ratio), round(image.height * resize_ratio)))
    image = TF.center_crop(image, [height, width])

    return image, height, width

first_frame, height, width = aspect_ratio_resize(first_frame, pipeline)
if last_frame.size != first_frame.size:
    last_frame, _, _ = center_crop_resize(last_frame, height, width)

prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."

output = pipeline(
    image=first_frame, last_image=last_frame, prompt=prompt, height=height, width=width, guidance_scale=5.0
).frames[0]
export_to_video(output, "video.mp4", fps=24, quality=8)
```

</hfoption>
</hfoptions>


### Video-to-Video Generation

<hfoptions id="V2V usage">
<hfoption id="usage">

`SkyReelsV2DiffusionForcingVideoToVideoPipeline` extends a given video.

```python
import numpy as np
import torch
import torchvision.transforms.functional as TF
from diffusers import AutoencoderKLWan, SkyReelsV2DiffusionForcingVideoToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_video


model_id = "Skywork/SkyReels-V2-DF-1.3B-720P-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipeline = SkyReelsV2DiffusionForcingVideoToVideoPipeline.from_pretrained(
    model_id, vae=vae, torch_dtype=torch.bfloat16
)
pipeline.to("cuda")
flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V
pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config, flow_shift=flow_shift)

video = load_video("input_video.mp4")

prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."

output = pipeline(
    video=video, prompt=prompt, height=720, width=1280, guidance_scale=5.0, overlap_history=17,
    num_inference_steps=30, num_frames=257, base_num_frames=121,  # optionally add ar_step=5, causal_block_size=5 for asynchronous inference
).frames[0]
export_to_video(output, "video.mp4", fps=24, quality=8)
# Total frames will be the number of frames of the given video + 257
```

</hfoption>
</hfoptions>

## Notes

- SkyReels-V2 supports LoRAs with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.SkyReelsV2LoraLoaderMixin.load_lora_weights), as shown in the sketch below.
- `SkyReelsV2Pipeline` and `SkyReelsV2ImageToVideoPipeline` are also available without the Diffusion Forcing framework applied.
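
A minimal LoRA-loading sketch (the LoRA repository id and adapter name below are placeholders, not real checkpoints):

```py
import torch
from diffusers import SkyReelsV2DiffusionForcingPipeline

pipeline = SkyReelsV2DiffusionForcingPipeline.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", torch_dtype=torch.bfloat16
)
# Placeholder LoRA repository; replace with an actual SkyReels-V2 LoRA checkpoint
pipeline.load_lora_weights("your-username/skyreels-v2-lora", adapter_name="example")
pipeline.to("cuda")
```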


## SkyReelsV2DiffusionForcingPipeline[[diffusers.SkyReelsV2DiffusionForcingPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkyReelsV2DiffusionForcingPipeline</name><anchor>diffusers.SkyReelsV2DiffusionForcingPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing.py#L129</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "transformer", "val": ": SkyReelsV2Transformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": UniPCMultistepScheduler"}]</parameters><paramsdesc>- **tokenizer** (`AutoTokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`UMT5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** ([SkyReelsV2Transformer3DModel](/docs/diffusers/main/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Text-to-Video (t2v) generation using SkyReels-V2 with diffusion forcing.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a specific device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SkyReelsV2DiffusionForcingPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing.py#L598</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 544"}, {"name": "width", "val": ": int = 960"}, {"name": "num_frames", "val": ": int = 97"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 6.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "overlap_history", "val": ": typing.Optional[int] = None"}, {"name": "addnoise_condition", "val": ": float = 0"}, {"name": "base_num_frames", "val": ": int = 97"}, {"name": "ar_step", "val": ": int = 0"}, {"name": "causal_block_size", "val": ": typing.Optional[int] = None"}, {"name": "fps", "val": ": int = 24"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, defaults to `544`) --
  The height of the generated video.
- **width** (`int`, defaults to `960`) --
  The width of the generated video.
- **num_frames** (`int`, defaults to `97`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `6.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2. of [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. A higher guidance scale encourages the model to generate images closely linked to the text `prompt`,
  usually at the expense of lower image quality. (**6.0 for T2V**, **5.0 for I2V**)
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `SkyReelsV2PipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to `512`) --
  The maximum sequence length of the prompt.
- **overlap_history** (`int`, *optional*, defaults to `None`) --
  Number of frames to overlap for smooth transitions in long videos. If `None`, the pipeline assumes
  short video generation mode, and no overlap is applied. Recommended values are 17 and 37.
- **addnoise_condition** (`float`, *optional*, defaults to `0`) --
  Helps smooth long video generation by adding some noise to the clean condition. Too much noise can also
  cause inconsistency. A value of 20 is recommended; larger values may be tried, but it is best not to
  exceed 50.
- **base_num_frames** (`int`, *optional*, defaults to `97`) --
  Base frame count (**97 for 540P**, **121 for 720P**).
- **ar_step** (`int`, *optional*, defaults to `0`) --
  Controls asynchronous inference (`0` for synchronous mode). Set `ar_step=5` to enable asynchronous
  inference; `causal_block_size=5` is then recommended, while it should not be set for synchronous
  generation. Asynchronous inference takes more steps to diffuse the whole sequence, so it is slower than
  synchronous mode, but it may improve instruction following and visual consistency.
- **causal_block_size** (`int`, *optional*, defaults to `None`) --
  The number of frames in each block/chunk. Recommended when using asynchronous inference (when
  `ar_step > 0`).
- **fps** (`int`, *optional*, defaults to `24`) --
  Frame rate of the generated video</paramsdesc><paramgroups>0</paramgroups><rettype>`~SkyReelsV2PipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `SkyReelsV2PipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SkyReelsV2DiffusionForcingPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import (
...     SkyReelsV2DiffusionForcingPipeline,
...     UniPCMultistepScheduler,
...     AutoencoderKLWan,
... )
>>> from diffusers.utils import export_to_video

>>> # Load the pipeline
>>> # Available models:
>>> # - Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-DF-14B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-DF-14B-720P-Diffusers
>>> vae = AutoencoderKLWan.from_pretrained(
...     "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
...     subfolder="vae",
...     torch_dtype=torch.float32,
... )
>>> pipe = SkyReelsV2DiffusionForcingPipeline.from_pretrained(
...     "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
...     vae=vae,
...     torch_dtype=torch.bfloat16,
... )
>>> flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe = pipe.to("cuda")

>>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."

>>> output = pipe(
...     prompt=prompt,
...     num_inference_steps=30,
...     height=544,
...     width=960,
...     guidance_scale=6.0,  # 6.0 for T2V, 5.0 for I2V
...     num_frames=97,
...     ar_step=5,  # Controls asynchronous inference (0 for synchronous mode)
...     causal_block_size=5,  # Number of frames processed together in a causal block
...     overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos
...     addnoise_condition=20,  # Improves consistency in long video generation
... ).frames[0]
>>> export_to_video(output, "video.mp4", fps=24, quality=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SkyReelsV2DiffusionForcingPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing.py#L219</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** -- (`torch.device`, *optional*):
  torch device
- **dtype** -- (`torch.dtype`, *optional*):
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_timestep_matrix</name><anchor>diffusers.SkyReelsV2DiffusionForcingPipeline.generate_timestep_matrix</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing.py#L418</source><parameters>[{"name": "num_latent_frames", "val": ": int"}, {"name": "step_template", "val": ": Tensor"}, {"name": "base_num_latent_frames", "val": ": int"}, {"name": "ar_step", "val": ": int = 5"}, {"name": "num_pre_ready", "val": ": int = 0"}, {"name": "causal_block_size", "val": ": int = 1"}, {"name": "shrink_interval_with_mask", "val": ": bool = False"}]</parameters><paramsdesc>- **num_latent_frames** (int) -- Total number of latent frames to generate
- **step_template** (torch.Tensor) -- Base timestep schedule (e.g., [1000, 800, 600, ..., 0])
- **base_num_latent_frames** (int) -- Maximum frames the model can process in one forward pass
- **ar_step** (int, optional) -- Autoregressive step size for temporal lag.
  0 = synchronous, >0 = asynchronous. Defaults to 5.
- **num_pre_ready** (int, optional) --
  Number of frames already denoised (e.g., from prefix in a video2video task).
  Defaults to 0.
- **causal_block_size** (int, optional) -- Number of frames processed as a causal block.
  Defaults to 1.
- **shrink_interval_with_mask** (bool, optional) -- Whether to optimize processing intervals.
  Defaults to False.</paramsdesc><paramgroups>0</paramgroups><rettype>tuple containing</rettype><retdesc>- step_matrix (torch.Tensor): Matrix of timesteps for each frame at each iteration Shape:
  [num_iterations, num_latent_frames]
- step_index (torch.Tensor): Index matrix for timestep lookup Shape: [num_iterations,
  num_latent_frames]
- step_update_mask (torch.Tensor): Boolean mask indicating which frames to update Shape:
  [num_iterations, num_latent_frames]
- valid_interval (list[tuple]): List of (start, end) intervals for each iteration</retdesc><raises>- ``ValueError`` -- If ar_step is too small for the given configuration</raises><raisederrors>``ValueError``</raisederrors></docstring>

This function implements the core diffusion forcing algorithm that creates a coordinated denoising schedule
across temporal frames. It supports both synchronous and asynchronous generation modes:

**Synchronous Mode** (ar_step=0, causal_block_size=1):
- All frames are denoised simultaneously at each timestep
- Each frame follows the same denoising trajectory: [1000, 800, 600, ..., 0]
- Simpler but may have less temporal consistency for long videos

**Asynchronous Mode** (ar_step>0, causal_block_size>1):
- Frames are grouped into causal blocks and processed block/chunk-wise
- Each block is denoised in a staggered pattern creating a "denoising wave"
- Earlier blocks are more denoised, later blocks lag behind by ar_step timesteps
- Creates stronger temporal dependencies and better consistency
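
A rough usage sketch, assuming `pipe` is an already-loaded `SkyReelsV2DiffusionForcingPipeline` and using a toy schedule in place of the scheduler's real timesteps:

```py
import torch

# Toy 30-entry schedule, analogous to the docstring's [1000, 800, 600, ..., 0] example
step_template = torch.linspace(1000, 0, steps=30, dtype=torch.long)

step_matrix, step_index, update_mask, valid_interval = pipe.generate_timestep_matrix(
    num_latent_frames=25,
    step_template=step_template,
    base_num_latent_frames=25,
    ar_step=5,
    causal_block_size=5,
)
print(step_matrix.shape)  # expected [num_iterations, 25] for this configuration
```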












</div></div>

## SkyReelsV2DiffusionForcingImageToVideoPipeline[[diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline</name><anchor>diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_i2v.py#L134</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "transformer", "val": ": SkyReelsV2Transformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": UniPCMultistepScheduler"}]</parameters><paramsdesc>- **tokenizer** (`AutoTokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`UMT5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** ([SkyReelsV2Transformer3DModel](/docs/diffusers/main/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Image-to-Video (i2v) generation using SkyReels-V2 with diffusion forcing.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a specific device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_i2v.py#L644</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 544"}, {"name": "width", "val": ": int = 960"}, {"name": "num_frames", "val": ": int = 97"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "last_image", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "overlap_history", "val": ": typing.Optional[int] = None"}, {"name": "addnoise_condition", "val": ": float = 0"}, {"name": "base_num_frames", "val": ": int = 97"}, {"name": "ar_step", "val": ": int = 0"}, {"name": "causal_block_size", "val": ": typing.Optional[int] = None"}, {"name": "fps", "val": ": int = 24"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to condition the generation on. Must be an image, a list of images or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, defaults to `544`) --
  The height of the generated video.
- **width** (`int`, defaults to `960`) --
  The width of the generated video.
- **num_frames** (`int`, defaults to `97`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2. of [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. A higher guidance scale encourages the model to generate images closely linked to the text `prompt`,
  usually at the expense of lower image quality. (**6.0 for T2V**, **5.0 for I2V**)
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `negative_prompt` input argument.
- **image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings. Can be used to easily tweak image inputs (weighting). If not provided,
  image embeddings are generated from the `image` input argument.
- **last_image** (`torch.Tensor`, *optional*) --
  The optional last frame to condition the generation on, used for first-last-frame-to-video (FLF2V)
  generation.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `SkyReelsV2PipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to `512`) --
  The maximum sequence length of the prompt.
- **overlap_history** (`int`, *optional*, defaults to `None`) --
  Number of frames to overlap for smooth transitions in long videos. If `None`, the pipeline assumes
  short video generation mode, and no overlap is applied. Recommended values are 17 and 37.
- **addnoise_condition** (`float`, *optional*, defaults to `0`) --
  Helps smooth long video generation by adding some noise to the clean condition. Too much noise can also
  cause inconsistency. A value of 20 is recommended; larger values may be tried, but it is best not to
  exceed 50.
- **base_num_frames** (`int`, *optional*, defaults to `97`) --
  Base frame count (**97 for 540P**, **121 for 720P**).
- **ar_step** (`int`, *optional*, defaults to `0`) --
  Controls asynchronous inference (`0` for synchronous mode). Set `ar_step=5` to enable asynchronous
  inference; `causal_block_size=5` is then recommended, while it should not be set for synchronous
  generation. Asynchronous inference takes more steps to diffuse the whole sequence, so it is slower than
  synchronous mode, but it may improve instruction following and visual consistency.
- **causal_block_size** (`int`, *optional*, defaults to `None`) --
  The number of frames in each block/chunk. Recommended when using asynchronous inference (when
  `ar_step > 0`).
- **fps** (`int`, *optional*, defaults to `24`) --
  Frame rate of the generated video</paramsdesc><paramgroups>0</paramgroups><rettype>`~SkyReelsV2PipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `SkyReelsV2PipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import (
...     SkyReelsV2DiffusionForcingImageToVideoPipeline,
...     UniPCMultistepScheduler,
...     AutoencoderKLWan,
... )
>>> from diffusers.utils import export_to_video
>>> from PIL import Image

>>> # Load the pipeline
>>> # Available models:
>>> # - Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-DF-14B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-DF-14B-720P-Diffusers
>>> vae = AutoencoderKLWan.from_pretrained(
...     "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
...     subfolder="vae",
...     torch_dtype=torch.float32,
... )
>>> pipe = SkyReelsV2DiffusionForcingImageToVideoPipeline.from_pretrained(
...     "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
...     vae=vae,
...     torch_dtype=torch.bfloat16,
... )
>>> flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe = pipe.to("cuda")

>>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."
>>> image = Image.open("path/to/image.png")

>>> output = pipe(
...     image=image,
...     prompt=prompt,
...     num_inference_steps=50,
...     height=544,
...     width=960,
...     guidance_scale=5.0,  # 6.0 for T2V, 5.0 for I2V
...     num_frames=97,
...     ar_step=0,  # Controls asynchronous inference (0 for synchronous mode)
...     overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos
...     addnoise_condition=20,  # Improves consistency in long video generation
... ).frames[0]
>>> export_to_video(output, "video.mp4", fps=24, quality=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_i2v.py#L224</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** -- (`torch.device`, *optional*):
  torch device
- **dtype** -- (`torch.dtype`, *optional*):
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_timestep_matrix</name><anchor>diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline.generate_timestep_matrix</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_i2v.py#L464</source><parameters>[{"name": "num_latent_frames", "val": ": int"}, {"name": "step_template", "val": ": Tensor"}, {"name": "base_num_latent_frames", "val": ": int"}, {"name": "ar_step", "val": ": int = 5"}, {"name": "num_pre_ready", "val": ": int = 0"}, {"name": "causal_block_size", "val": ": int = 1"}, {"name": "shrink_interval_with_mask", "val": ": bool = False"}]</parameters><paramsdesc>- **num_latent_frames** (int) -- Total number of latent frames to generate
- **step_template** (torch.Tensor) -- Base timestep schedule (e.g., [1000, 800, 600, ..., 0])
- **base_num_latent_frames** (int) -- Maximum frames the model can process in one forward pass
- **ar_step** (int, optional) -- Autoregressive step size for temporal lag.
  0 = synchronous, >0 = asynchronous. Defaults to 5.
- **num_pre_ready** (int, optional) --
  Number of frames already denoised (e.g., from prefix in a video2video task).
  Defaults to 0.
- **causal_block_size** (int, optional) -- Number of frames processed as a causal block.
  Defaults to 1.
- **shrink_interval_with_mask** (bool, optional) -- Whether to optimize processing intervals.
  Defaults to False.</paramsdesc><paramgroups>0</paramgroups><rettype>tuple containing</rettype><retdesc>- step_matrix (torch.Tensor): Matrix of timesteps for each frame at each iteration. Shape:
  [num_iterations, num_latent_frames]
- step_index (torch.Tensor): Index matrix for timestep lookup. Shape: [num_iterations,
  num_latent_frames]
- step_update_mask (torch.Tensor): Boolean mask indicating which frames to update. Shape:
  [num_iterations, num_latent_frames]
- valid_interval (list[tuple]): List of (start, end) intervals for each iteration</retdesc><raises>- ``ValueError`` -- If ar_step is too small for the given configuration</raises><raisederrors>``ValueError``</raisederrors></docstring>

This function implements the core diffusion forcing algorithm that creates a coordinated denoising schedule
across temporal frames. It supports both synchronous and asynchronous generation modes:

**Synchronous Mode** (ar_step=0, causal_block_size=1):
- All frames are denoised simultaneously at each timestep
- Each frame follows the same denoising trajectory: [1000, 800, 600, ..., 0]
- Simpler but may have less temporal consistency for long videos

**Asynchronous Mode** (ar_step>0, causal_block_size>1):
- Frames are grouped into causal blocks and processed block/chunk-wise
- Each block is denoised in a staggered pattern creating a "denoising wave"
- Earlier blocks are more denoised, later blocks lag behind by ar_step timesteps
- Creates stronger temporal dependencies and better consistency
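
To make the staggered schedule concrete, the following is a small, self-contained illustration (not the pipeline's internal code) of how a per-block timestep matrix with an `ar_step` lag might look:

```py
import torch

# Illustration only: build a toy staggered schedule where each causal block
# lags the previous one by `ar_step` entries of the step template.
step_template = torch.tensor([1000, 800, 600, 400, 200, 0])
num_blocks, ar_step = 4, 2
last = len(step_template) - 1

rows = []
for it in range(last + ar_step * (num_blocks - 1) + 1):
    # Block b starts ar_step * b iterations later; clamp indices into the template.
    idx = torch.tensor([it - ar_step * b for b in range(num_blocks)]).clamp(0, last)
    rows.append(step_template[idx])
step_matrix = torch.stack(rows)  # shape: [num_iterations, num_blocks]
print(step_matrix)  # earlier blocks reach timestep 0 first; later blocks lag behind
```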












</div></div>

## SkyReelsV2DiffusionForcingVideoToVideoPipeline[[diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline</name><anchor>diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_v2v.py#L190</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "transformer", "val": ": SkyReelsV2Transformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": UniPCMultistepScheduler"}]</parameters><paramsdesc>- **tokenizer** (`AutoTokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`UMT5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** ([SkyReelsV2Transformer3DModel](/docs/diffusers/main/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel)) --
  Conditional Transformer to denoise the encoded image latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Video-to-Video (v2v) generation using SkyReels-V2 with diffusion forcing.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a specific device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_v2v.py#L682</source><parameters>[{"name": "video", "val": ": typing.List[PIL.Image.Image]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 544"}, {"name": "width", "val": ": int = 960"}, {"name": "num_frames", "val": ": int = 120"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 6.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}, {"name": "overlap_history", "val": ": typing.Optional[int] = None"}, {"name": "addnoise_condition", "val": ": float = 0"}, {"name": "base_num_frames", "val": ": int = 97"}, {"name": "ar_step", "val": ": int = 0"}, {"name": "causal_block_size", "val": ": typing.Optional[int] = None"}, {"name": "fps", "val": ": int = 24"}]</parameters><paramsdesc>- **video** (`List[Image.Image]`) --
  The video to guide the video generation.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the video generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, defaults to `544`) --
  The height of the generated video.
- **width** (`int`, defaults to `960`) --
  The width of the generated video.
- **num_frames** (`int`, defaults to `120`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `6.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2 of the [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. A higher guidance scale encourages the model to generate images that are closely linked to the text
  `prompt`, usually at the expense of lower image quality. (**6.0 for T2V**, **5.0 for I2V**)
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `SkyReelsV2PipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to `512`) --
  The maximum sequence length of the prompt.
- **overlap_history** (`int`, *optional*, defaults to `None`) --
  Number of frames to overlap for smooth transitions in long videos. If `None`, the pipeline assumes
  short video generation mode, and no overlap is applied. Recommended values are 17 and 37.
- **addnoise_condition** (`float`, *optional*, defaults to `0`) --
  Helps smooth long video generation by adding some noise to the clean conditioning frames. Too much
  noise can also cause inconsistency. A value of 20 is recommended; you may try larger values, but it is
  best not to exceed 50.
- **base_num_frames** (`int`, *optional*, defaults to `97`) --
  Base frame count: **97 for 540P**, **121 for 720P**.
- **ar_step** (`int`, *optional*, defaults to `0`) --
  Controls asynchronous inference (0 for synchronous mode). Set `ar_step=5` to enable asynchronous
  inference; in that case `causal_block_size=5` is recommended, while it should not be set for
  synchronous generation. Asynchronous inference takes more steps to denoise the whole sequence, so it is
  slower than synchronous mode. In our experiments, asynchronous inference may improve instruction
  following and visual consistency (see the long-video sketch after the example below).
- **causal_block_size** (`int`, *optional*, defaults to `None`) --
  The number of frames in each block/chunk. Recommended when using asynchronous inference (i.e., when
  `ar_step > 0`).
- **fps** (`int`, *optional*, defaults to `24`) --
  Frame rate of the generated video</paramsdesc><paramgroups>0</paramgroups><rettype>`~SkyReelsV2PipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `SkyReelsV2PipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import (
...     SkyReelsV2DiffusionForcingVideoToVideoPipeline,
...     UniPCMultistepScheduler,
...     AutoencoderKLWan,
... )
>>> from diffusers.utils import export_to_video, load_video

>>> # Load the pipeline
>>> # Available models:
>>> # - Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-DF-14B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-DF-14B-720P-Diffusers
>>> vae = AutoencoderKLWan.from_pretrained(
...     "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
...     subfolder="vae",
...     torch_dtype=torch.float32,
... )
>>> pipe = SkyReelsV2DiffusionForcingVideoToVideoPipeline.from_pretrained(
...     "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
...     vae=vae,
...     torch_dtype=torch.bfloat16,
... )
>>> flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe = pipe.to("cuda")

>>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."

>>> output = pipe(
...     prompt=prompt,
...     num_inference_steps=50,
...     height=544,
...     width=960,
...     guidance_scale=6.0,  # 6.0 for T2V, 5.0 for I2V
...     num_frames=97,
...     ar_step=0,  # Controls asynchronous inference (0 for synchronous mode)
...     overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos
...     addnoise_condition=20,  # Improves consistency in long video generation
... ).frames[0]
>>> export_to_video(output, "video.mp4", fps=24, quality=8)
```

</ExampleCodeBlock>
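
For long videos, the chunked-generation parameters described above can be combined. The following is a hedged sketch that reuses `pipe`, `video`, and `prompt` from the example above; the frame count is illustrative, and the other values follow the recommendations in the parameter descriptions, not a tested preset.

```py
>>> # Hedged sketch: long-video generation with asynchronous diffusion forcing.
>>> long_output = pipe(
...     video=video,
...     prompt=prompt,
...     num_inference_steps=50,
...     height=544,
...     width=960,
...     guidance_scale=6.0,
...     num_frames=257,  # illustrative; longer than base_num_frames triggers chunked generation
...     base_num_frames=97,  # 97 for 540P, 121 for 720P
...     overlap_history=17,  # overlapping frames between chunks for smooth transitions
...     addnoise_condition=20,  # mild noise on clean conditioning frames improves consistency
...     ar_step=5,  # asynchronous mode (slower, often more consistent)
...     causal_block_size=5,  # recommended when ar_step > 0
... ).frames[0]
>>> export_to_video(long_output, "long_video.mp4", fps=24, quality=8)
```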







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_v2v.py#L280</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>generate_timestep_matrix</name><anchor>diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline.generate_timestep_matrix</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_diffusion_forcing_v2v.py#L502</source><parameters>[{"name": "num_latent_frames", "val": ": int"}, {"name": "step_template", "val": ": Tensor"}, {"name": "base_num_latent_frames", "val": ": int"}, {"name": "ar_step", "val": ": int = 5"}, {"name": "num_pre_ready", "val": ": int = 0"}, {"name": "causal_block_size", "val": ": int = 1"}, {"name": "shrink_interval_with_mask", "val": ": bool = False"}]</parameters><paramsdesc>- **num_latent_frames** (int) -- Total number of latent frames to generate
- **step_template** (torch.Tensor) -- Base timestep schedule (e.g., [1000, 800, 600, ..., 0])
- **base_num_latent_frames** (int) -- Maximum frames the model can process in one forward pass
- **ar_step** (int, optional) -- Autoregressive step size for temporal lag.
  0 = synchronous, >0 = asynchronous. Defaults to 5.
- **num_pre_ready** (int, optional) --
  Number of frames already denoised (e.g., from prefix in a video2video task).
  Defaults to 0.
- **causal_block_size** (int, optional) -- Number of frames processed as a causal block.
  Defaults to 1.
- **shrink_interval_with_mask** (bool, optional) -- Whether to optimize processing intervals.
  Defaults to False.</paramsdesc><paramgroups>0</paramgroups><rettype>tuple containing</rettype><retdesc>- step_matrix (torch.Tensor): Matrix of timesteps for each frame at each iteration. Shape:
  [num_iterations, num_latent_frames]
- step_index (torch.Tensor): Index matrix for timestep lookup. Shape: [num_iterations,
  num_latent_frames]
- step_update_mask (torch.Tensor): Boolean mask indicating which frames to update. Shape:
  [num_iterations, num_latent_frames]
- valid_interval (list[tuple]): List of (start, end) intervals for each iteration</retdesc><raises>- ``ValueError`` -- If ar_step is too small for the given configuration</raises><raisederrors>``ValueError``</raisederrors></docstring>

This function implements the core diffusion forcing algorithm that creates a coordinated denoising schedule
across temporal frames. It supports both synchronous and asynchronous generation modes:

**Synchronous Mode** (ar_step=0, causal_block_size=1):
- All frames are denoised simultaneously at each timestep
- Each frame follows the same denoising trajectory: [1000, 800, 600, ..., 0]
- Simpler but may have less temporal consistency for long videos

**Asynchronous Mode** (ar_step>0, causal_block_size>1):
- Frames are grouped into causal blocks and processed block/chunk-wise
- Each block is denoised in a staggered pattern creating a "denoising wave"
- Earlier blocks are more denoised, later blocks lag behind by ar_step timesteps
- Creates stronger temporal dependencies and better consistency












</div></div>

## SkyReelsV2Pipeline[[diffusers.SkyReelsV2Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkyReelsV2Pipeline</name><anchor>diffusers.SkyReelsV2Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2.py#L107</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "transformer", "val": ": SkyReelsV2Transformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": UniPCMultistepScheduler"}]</parameters><paramsdesc>- **tokenizer** (`T5Tokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** ([SkyReelsV2Transformer3DModel](/docs/diffusers/main/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel)) --
  Conditional Transformer to denoise the input latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Text-to-Video (t2v) generation using SkyReels-V2.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SkyReelsV2Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2.py#L376</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 544"}, {"name": "width", "val": ": int = 960"}, {"name": "num_frames", "val": ": int = 97"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 6.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, defaults to `544`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `960`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `97`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `6.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2 of the [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. A higher guidance scale encourages the model to generate images that are closely linked to the text
  `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `SkyReelsV2PipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to `512`) --
  The maximum sequence length for the text encoder.</paramsdesc><paramgroups>0</paramgroups><rettype>`~SkyReelsV2PipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `SkyReelsV2PipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SkyReelsV2Pipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import (
...     SkyReelsV2Pipeline,
...     UniPCMultistepScheduler,
...     AutoencoderKLWan,
... )
>>> from diffusers.utils import export_to_video

>>> # Load the pipeline
>>> # Available models:
>>> # - Skywork/SkyReels-V2-T2V-14B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-T2V-14B-720P-Diffusers
>>> vae = AutoencoderKLWan.from_pretrained(
...     "Skywork/SkyReels-V2-T2V-14B-720P-Diffusers",
...     subfolder="vae",
...     torch_dtype=torch.float32,
... )
>>> pipe = SkyReelsV2Pipeline.from_pretrained(
...     "Skywork/SkyReels-V2-T2V-14B-720P-Diffusers",
...     vae=vae,
...     torch_dtype=torch.bfloat16,
... )
>>> flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe = pipe.to("cuda")

>>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."

>>> output = pipe(
...     prompt=prompt,
...     num_inference_steps=50,
...     height=544,
...     width=960,
...     guidance_scale=6.0,  # 6.0 for T2V, 5.0 for I2V
...     num_frames=97,
... ).frames[0]
>>> export_to_video(output, "video.mp4", fps=24, quality=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SkyReelsV2Pipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2.py#L197</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## SkyReelsV2ImageToVideoPipeline[[diffusers.SkyReelsV2ImageToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkyReelsV2ImageToVideoPipeline</name><anchor>diffusers.SkyReelsV2ImageToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_i2v.py#L127</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "image_processor", "val": ": CLIPProcessor"}, {"name": "transformer", "val": ": SkyReelsV2Transformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": UniPCMultistepScheduler"}]</parameters><paramsdesc>- **tokenizer** (`T5Tokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **image_encoder** (`CLIPVisionModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection),
  specifically the
  [clip-vit-huge-patch14](https://github.com/mlfoundations/open_clip/blob/main/docs/PRETRAINED.md#vit-h14-xlm-roberta-large)
  variant.
- **transformer** ([SkyReelsV2Transformer3DModel](/docs/diffusers/main/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel)) --
  Conditional Transformer to denoise the input latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Image-to-Video (i2v) generation using SkyReels-V2.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SkyReelsV2ImageToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_i2v.py#L476</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 544"}, {"name": "width", "val": ": int = 960"}, {"name": "num_frames", "val": ": int = 97"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "last_image", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to condition the generation on. Must be an image, a list of images or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, defaults to `544`) --
  The height of the generated video.
- **width** (`int`, defaults to `960`) --
  The width of the generated video.
- **num_frames** (`int`, defaults to `97`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
  `guidance_scale` is defined as `w` of equation 2 of the [Imagen
  Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
  1`. A higher guidance scale encourages the model to generate images that are closely linked to the text
  `prompt`, usually at the expense of lower image quality.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `negative_prompt` input argument.
- **image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings. Can be used to easily tweak image inputs (weighting). If not provided,
  image embeddings are generated from the `image` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `SkyReelsV2PipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, *optional*, defaults to `512`) --
  The maximum sequence length of the prompt.</paramsdesc><paramgroups>0</paramgroups><rettype>`~SkyReelsV2PipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `SkyReelsV2PipelineOutput` is returned, otherwise a `tuple` is returned
where the first element is a list with the generated images and the second element is a list of `bool`s
indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SkyReelsV2ImageToVideoPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import (
...     SkyReelsV2ImageToVideoPipeline,
...     UniPCMultistepScheduler,
...     AutoencoderKLWan,
... )
>>> from diffusers.utils import export_to_video
>>> from PIL import Image

>>> # Load the pipeline
>>> # Available models:
>>> # - Skywork/SkyReels-V2-I2V-1.3B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-I2V-14B-540P-Diffusers
>>> # - Skywork/SkyReels-V2-I2V-14B-720P-Diffusers
>>> vae = AutoencoderKLWan.from_pretrained(
...     "Skywork/SkyReels-V2-I2V-14B-720P-Diffusers",
...     subfolder="vae",
...     torch_dtype=torch.float32,
... )
>>> pipe = SkyReelsV2ImageToVideoPipeline.from_pretrained(
...     "Skywork/SkyReels-V2-I2V-14B-720P-Diffusers",
...     vae=vae,
...     torch_dtype=torch.bfloat16,
... )
>>> flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe = pipe.to("cuda")

>>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."
>>> image = Image.open("path/to/image.png")

>>> output = pipe(
...     image=image,
...     prompt=prompt,
...     num_inference_steps=50,
...     height=544,
...     width=960,
...     guidance_scale=5.0,  # 6.0 for T2V, 5.0 for I2V
...     num_frames=97,
... ).frames[0]
>>> export_to_video(output, "video.mp4", fps=24, quality=8)
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SkyReelsV2ImageToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_skyreels_v2_i2v.py#L238</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## SkyReelsV2PipelineOutput[[diffusers.pipelines.skyreels_v2.pipeline_output.SkyReelsV2PipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.skyreels_v2.pipeline_output.SkyReelsV2PipelineOutput</name><anchor>diffusers.pipelines.skyreels_v2.pipeline_output.SkyReelsV2PipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/skyreels_v2/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size`, with each sub-list containing
  denoised PIL image sequences of length `num_frames`. It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for SkyReelsV2 pipelines.
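
A minimal access sketch (assuming the text-to-video pipeline above is already loaded as `pipe` and a `prompt` string is defined; names here are illustrative):

```py
>>> from diffusers.utils import export_to_video

>>> result = pipe(prompt=prompt, output_type="pil")
>>> frames = result.frames[0]  # list of PIL frames for the first video in the batch
>>> export_to_video(frames, "clip.mp4", fps=24)
```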




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/skyreels_v2.md" />

### Hidream
https://huggingface.co/docs/diffusers/main/api/pipelines/hidream.md


# HiDreamImage

[HiDream-I1](https://huggingface.co/HiDream-ai) is an image generation model by HiDream.ai.

> [!TIP]
> [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.

## Available models

The following models are available for the [HiDreamImagePipeline](/docs/diffusers/main/en/api/pipelines/hidream#diffusers.HiDreamImagePipeline) pipeline:

| Model name | Description |
|:---|:---|
| [`HiDream-ai/HiDream-I1-Full`](https://huggingface.co/HiDream-ai/HiDream-I1-Full) | - |
| [`HiDream-ai/HiDream-I1-Dev`](https://huggingface.co/HiDream-ai/HiDream-I1-Dev) | - |
| [`HiDream-ai/HiDream-I1-Fast`](https://huggingface.co/HiDream-ai/HiDream-I1-Fast) | - |

## HiDreamImagePipeline[[diffusers.HiDreamImagePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HiDreamImagePipeline</name><anchor>diffusers.HiDreamImagePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L160</source><parameters>[{"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "text_encoder_3", "val": ": T5EncoderModel"}, {"name": "tokenizer_3", "val": ": T5Tokenizer"}, {"name": "text_encoder_4", "val": ": LlamaForCausalLM"}, {"name": "tokenizer_4", "val": ": PreTrainedTokenizerFast"}, {"name": "transformer", "val": ": HiDreamImageTransformer2DModel"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.HiDreamImagePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L728</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_4", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_4", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds_t5", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds_llama3", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds_t5", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds_llama3", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 128"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **prompt_4** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_4` and `text_encoder_4`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Embedded guidance scale is enabled by setting `guidance_scale > 1`. A higher `guidance_scale` encourages
  the model to generate images more aligned with `prompt` at the expense of lower image quality.

  Guidance-distilled models approximate true classifier-free guidance for `guidance_scale > 1`. Refer to
  the [paper](https://huggingface.co/papers/2210.03142) to learn more.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is
  not greater than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_4** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_4` and
  `text_encoder_4`. If not defined, `negative_prompt` is used in all the text-encoders.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.hidream_image.HiDreamImagePipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 128) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.hidream_image.HiDreamImagePipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.hidream_image.HiDreamImagePipelineOutput` if `return_dict` is True, otherwise a `tuple`. When
returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.HiDreamImagePipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from transformers import AutoTokenizer, LlamaForCausalLM
>>> from diffusers import HiDreamImagePipeline


>>> tokenizer_4 = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
>>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
...     "meta-llama/Meta-Llama-3.1-8B-Instruct",
...     output_hidden_states=True,
...     output_attentions=True,
...     torch_dtype=torch.bfloat16,
... )

>>> pipe = HiDreamImagePipeline.from_pretrained(
...     "HiDream-ai/HiDream-I1-Full",
...     tokenizer_4=tokenizer_4,
...     text_encoder_4=text_encoder_4,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()

>>> image = pipe(
...     'A cat holding a sign that says "Hi-Dreams.ai".',
...     height=1024,
...     width=1024,
...     guidance_scale=5.0,
...     num_inference_steps=50,
...     generator=torch.Generator("cuda").manual_seed(0),
... ).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.HiDreamImagePipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L533</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.HiDreamImagePipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L560</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.HiDreamImagePipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L520</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.HiDreamImagePipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_hidream_image.py#L546</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div></div>

## HiDreamImagePipelineOutput[[diffusers.pipelines.hidream_image.pipeline_output.HiDreamImagePipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.hidream_image.pipeline_output.HiDreamImagePipelineOutput</name><anchor>diffusers.pipelines.hidream_image.pipeline_output.HiDreamImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/hidream_image/pipeline_output.py#L25</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or a NumPy array representing the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for HiDreamImage pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/hidream.md" />

### Sana
https://huggingface.co/docs/diffusers/main/api/pipelines/sana.md

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->

# SanaPipeline

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

[SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers](https://huggingface.co/papers/2410.10629) from NVIDIA and MIT HAN Lab, by Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, Song Han.

The abstract from the paper is:

*We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. Code and model will be publicly released.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

This pipeline was contributed by [lawrence-cj](https://github.com/lawrence-cj) and [chenjy2003](https://github.com/chenjy2003). The original codebase can be found [here](https://github.com/NVlabs/Sana). The original weights can be found under [hf.co/Efficient-Large-Model](https://huggingface.co/Efficient-Large-Model).

Available models:

| Model | Recommended dtype |
|:-----:|:-----------------:|
| [`Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers) | `torch.bfloat16` |
| [`Efficient-Large-Model/Sana_1600M_1024px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_diffusers) | `torch.float16` |
| [`Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_1024px_MultiLing_diffusers) | `torch.float16` |
| [`Efficient-Large-Model/Sana_1600M_512px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_diffusers) | `torch.float16` |
| [`Efficient-Large-Model/Sana_1600M_512px_MultiLing_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_1600M_512px_MultiLing_diffusers) | `torch.float16` |
| [`Efficient-Large-Model/Sana_600M_1024px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_600M_1024px_diffusers) | `torch.float16` |
| [`Efficient-Large-Model/Sana_600M_512px_diffusers`](https://huggingface.co/Efficient-Large-Model/Sana_600M_512px_diffusers) | `torch.float16` |

Refer to [this](https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e) collection for more information.

Note: The recommended dtypes listed above apply to the transformer weights. The text encoder and VAE weights must stay in `torch.bfloat16` or `torch.float32` for the model to work correctly. Refer to the inference example below to see how to load the model with the recommended dtype.

> [!TIP]
> Make sure to pass the `variant` argument for downloaded checkpoints to use lower disk space. Set it to `"fp16"` for models with recommended dtype as `torch.float16`, and `"bf16"` for models with recommended dtype as `torch.bfloat16`. By default, `torch.float32` weights are downloaded, which use twice the amount of disk storage. Additionally, `torch.float32` weights can be downcasted on-the-fly by specifying the `torch_dtype` argument. Read about it in the [docs](https://huggingface.co/docs/diffusers/v0.31.0/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).
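
For example, a checkpoint listed above with a recommended dtype of `torch.float16` could be loaded roughly as follows. This is a minimal sketch based on the table and note above; adjust the repository id and `variant` to the checkpoint you use, and the output file name is illustrative:

```py
import torch
from diffusers import SanaPipeline

# Download the fp16 variant of the weights to reduce disk usage.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    variant="fp16",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Keep the text encoder and VAE in bfloat16, as noted above.
pipe.text_encoder.to(torch.bfloat16)
pipe.vae.to(torch.bfloat16)

image = pipe(prompt="a tiny astronaut hatching from an egg on the moon").images[0]
image.save("sana_fp16.png")
```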

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [SanaPipeline](/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt).images[0]
image.save("sana.png")
```

## SanaPipeline[[diffusers.SanaPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaPipeline</name><anchor>diffusers.SanaPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L190</source><parameters>[{"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}, {"name": "text_encoder", "val": ": Gemma2PreTrainedModel"}, {"name": "vae", "val": ": AutoencoderDC"}, {"name": "transformer", "val": ": SanaTransformer2DModel"}, {"name": "scheduler", "val": ": DPMSolverMultistepScheduler"}]</parameters></docstring>

Pipeline for text-to-image generation using [Sana](https://huggingface.co/papers/2410.10629).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SanaPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L727</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.List[str] = [\"Given a user prompt, generate an 'Enhanced prompt' that provides detailed visual descriptions suitable for image generation. Evaluate the level of detail in the user prompt:\", '- If the prompt is simple, focus on adding specifics about colors, shapes, sizes, textures, and spatial relationships to create vivid and concrete scenes.', '- If the prompt is already detailed, refine and enhance the existing details slightly without overcomplicating.', 'Here are examples of how to transform or refine prompts:', '- User Prompt: A cat sleeping -> Enhanced: A small, fluffy white cat curled up in a round shape, sleeping peacefully on a warm sunny windowsill, surrounded by pots of blooming red flowers.', '- User Prompt: A busy city street -> Enhanced: A bustling city street scene at dusk, featuring glowing street lamps, a diverse crowd of people in colorful clothing, and a double-decker bus passing by towering glass skyscrapers.', 'Please generate only the enhanced description for the prompt below and avoid including any additional commentary or evaluations:', 'User Prompt: ']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to `1024`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `1024`) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Sana, this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.sana.pipeline_output.SanaPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clean_caption** (`bool`, *optional*, defaults to `False`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool`, defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `300`) --
  Maximum sequence length to use with the `prompt`.
- **complex_human_instruction** (`List[str]`, *optional*) --
  Instructions for complex human attention:
  https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55.</paramsdesc><paramgroups>0</paramgroups><rettype>[SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [SanaPipelineOutput](/docs/diffusers/main/en/api/pipelines/controlnet_sana#diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SanaPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import SanaPipeline

>>> pipe = SanaPipeline.from_pretrained(
...     "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.float32
... )
>>> pipe.to("cuda")
>>> pipe.text_encoder.to(torch.bfloat16)
>>> pipe.transformer = pipe.transformer.to(torch.bfloat16)

>>> image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0]
>>> image[0].save("output.png")
```

</ExampleCodeBlock>
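
The `callback_on_step_end` argument documented above can be used to inspect or adjust tensors between denoising steps. Below is a minimal sketch; the callback name is illustrative, and only `latents` is requested because it is listed in the pipeline's `_callback_tensor_inputs`:

```py
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.bfloat16
).to("cuda")


def log_latents(pipeline, step, timestep, callback_kwargs):
    # `callback_kwargs` contains the tensors requested via `callback_on_step_end_tensor_inputs`.
    latents = callback_kwargs["latents"]
    print(f"step {step}, timestep {timestep}, latent std {latents.std().item():.4f}")
    # Entries returned here overwrite the pipeline's tensors for the next step.
    return {"latents": latents}


image = pipe(
    prompt='a cyberpunk cat with a neon sign that says "Sana"',
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```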







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.SanaPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L236</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.SanaPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L263</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.SanaPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L223</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.SanaPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L249</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
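
To illustrate the VAE memory helpers documented above, tiling can be toggled directly on the pipeline instance before decoding. This is a minimal sketch; the prompt and file name are illustrative, and model loading mirrors the generation example above:

```py
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.bfloat16
).to("cuda")

# Decode the latents tile by tile to lower peak memory; most useful for large images or batches.
pipe.enable_vae_tiling()
image = pipe(prompt="a watercolor painting of a lighthouse at dusk").images[0]
image.save("sana_tiled_decode.png")

# Restore single-pass decoding.
pipe.disable_vae_tiling()
```
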
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SanaPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py#L334</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  Sana, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Sana, it should be the embeddings of the "" string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.
- **complex_human_instruction** (`list[str]`, defaults to `complex_human_instruction`) --
  If `complex_human_instruction` is not empty, the function will use the complex human instruction for
  the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## SanaPAGPipeline[[diffusers.SanaPAGPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaPAGPipeline</name><anchor>diffusers.SanaPAGPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L148</source><parameters>[{"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}, {"name": "text_encoder", "val": ": Gemma2PreTrainedModel"}, {"name": "vae", "val": ": AutoencoderDC"}, {"name": "transformer", "val": ": SanaTransformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "pag_applied_layers", "val": ": typing.Union[str, typing.List[str]] = 'transformer_blocks.0'"}]</parameters></docstring>

Pipeline for text-to-image generation using [Sana](https://huggingface.co/papers/2410.10629). This pipeline
supports the use of [Perturbed Attention Guidance
(PAG)](https://huggingface.co/docs/diffusers/main/en/using-diffusers/pag).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.SanaPAGPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L648</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_inference_steps", "val": ": int = 20"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 4.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "height", "val": ": int = 1024"}, {"name": "width", "val": ": int = 1024"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "use_resolution_binning", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.List[str] = [\"Given a user prompt, generate an 'Enhanced prompt' that provides detailed visual descriptions suitable for image generation. Evaluate the level of detail in the user prompt:\", '- If the prompt is simple, focus on adding specifics about colors, shapes, sizes, textures, and spatial relationships to create vivid and concrete scenes.', '- If the prompt is already detailed, refine and enhance the existing details slightly without overcomplicating.', 'Here are examples of how to transform or refine prompts:', '- User Prompt: A cat sleeping -> Enhanced: A small, fluffy white cat curled up in a round shape, sleeping peacefully on a warm sunny windowsill, surrounded by pots of blooming red flowers.', '- User Prompt: A busy city street -> Enhanced: A bustling city street scene at dusk, featuring glowing street lamps, a diverse crowd of people in colorful clothing, and a double-decker bus passing by towering glass skyscrapers.', 'Please generate only the enhanced description for the prompt below and avoid including any additional commentary or evaluations:', 'User Prompt: ']"}, {"name": "pag_scale", "val": ": float = 3.0"}, {"name": "pag_adaptive_scale", "val": ": float = 0.0"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 20) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to `1024`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `1024`) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Sana, this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **clean_caption** (`bool`, *optional*, defaults to `False`) --
  Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
  be installed. If the dependencies are not installed, the embeddings will be created from the raw
  prompt.
- **use_resolution_binning** (`bool`, defaults to `True`) --
  If set to `True`, the requested height and width are first mapped to the closest resolutions using
  `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
  the requested resolution. Useful for generating non-square images.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use with the `prompt`.
- **complex_human_instruction** (`List[str]`, *optional*) --
  Instructions for complex human attention:
  https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55.
- **pag_scale** (`float`, *optional*, defaults to 3.0) --
  The scale factor for the perturbed attention guidance. If it is set to 0.0, the perturbed attention
  guidance will not be used.
- **pag_adaptive_scale** (`float`, *optional*, defaults to 0.0) --
  The adaptive scale factor for the perturbed attention guidance. If it is set to 0.0, `pag_scale` is
  used.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.SanaPAGPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import SanaPAGPipeline

>>> pipe = SanaPAGPipeline.from_pretrained(
...     "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
...     pag_applied_layers=["transformer_blocks.8"],
...     torch_dtype=torch.float32,
... )
>>> pipe.to("cuda")
>>> pipe.text_encoder.to(torch.bfloat16)
>>> pipe.transformer = pipe.transformer.to(torch.bfloat16)

>>> image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0]
>>> image[0].save("output.png")
```

</ExampleCodeBlock>
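
The `pag_scale` and `pag_adaptive_scale` arguments described above are passed at call time. Below is a minimal sketch; the chosen scale value, layer, and file name are illustrative:

```py
import torch
from diffusers import SanaPAGPipeline

pipe = SanaPAGPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    pag_applied_layers=["transformer_blocks.8"],
    torch_dtype=torch.bfloat16,
).to("cuda")

# pag_scale > 0.0 enables perturbed attention guidance; 0.0 disables it.
image = pipe(
    prompt='a cyberpunk cat with a neon sign that says "Sana"',
    pag_scale=2.0,
).images[0]
image.save("sana_pag.png")
```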







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.SanaPAGPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L202</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.SanaPAGPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L229</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.SanaPAGPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L189</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.SanaPAGPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L215</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.SanaPAGPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sana.py#L242</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": str = ''"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "clean_caption", "val": ": bool = False"}, {"name": "max_sequence_length", "val": ": int = 300"}, {"name": "complex_human_instruction", "val": ": typing.Optional[typing.List[str]] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  Sana, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Sana, it should be the embeddings of the "" string.
- **clean_caption** (`bool`, defaults to `False`) --
  If `True`, the function will preprocess and clean the provided caption before encoding.
- **max_sequence_length** (`int`, defaults to 300) -- Maximum sequence length to use for the prompt.
- **complex_human_instruction** (`list[str]`, defaults to `complex_human_instruction`) --
  If `complex_human_instruction` is not empty, the function will use the complex human instruction for
  the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## SanaPipelineOutput[[diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput</name><anchor>diffusers.pipelines.sana.pipeline_output.SanaPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or a NumPy array representing the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Sana pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/sana.md" />

### DDPM
https://huggingface.co/docs/diffusers/main/api/pipelines/ddpm.md

# DDPM

[Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2006.11239) (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the *discrete denoising scheduler* from the paper as well as the pipeline.

The abstract from the paper is:

*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*

The original codebase can be found at [hojonathanho/diffusion](https://github.com/hojonathanho/diffusion).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

# DDPMPipeline[[diffusers.DDPMPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DDPMPipeline</name><anchor>diffusers.DDPMPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddpm/pipeline_ddpm.py#L35</source><parameters>[{"name": "unet", "val": ": UNet2DModel"}, {"name": "scheduler", "val": ": DDPMScheduler"}]</parameters><paramsdesc>- **unet** ([UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel)) --
  A `UNet2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
  [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler), or [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image generation.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.DDPMPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddpm/pipeline_ddpm.py#L56</source><parameters>[{"name": "batch_size", "val": ": int = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "num_inference_steps", "val": ": int = 1000"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **batch_size** (`int`, *optional*, defaults to 1) --
  The number of images to generate.
- **generator** (`torch.Generator`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **num_inference_steps** (`int`, *optional*, defaults to 1000) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.DDPMPipeline.__call__.example">

Example:

```py
>>> from diffusers import DDPMPipeline

>>> # load model and scheduler
>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

>>> # run pipeline in inference (sample random noise and denoise)
>>> image = pipe().images[0]

>>> # save image
>>> image.save("ddpm_generated_image.png")
```

</ExampleCodeBlock>
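
Building on the example above, the documented `batch_size`, `num_inference_steps`, and `generator` arguments can be combined for reproducible batched sampling. This is a minimal sketch; the batch size, step count, and file names are illustrative:

```py
import torch
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")

# Sample a reproducible batch of four images with fewer denoising steps.
output = pipe(
    batch_size=4,
    num_inference_steps=250,
    generator=torch.Generator("cuda").manual_seed(0),
)
for i, image in enumerate(output.images):
    image.save(f"ddpm_cat_{i}.png")
```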






</div></div>

## ImagePipelineOutput[[diffusers.ImagePipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ImagePipelineOutput</name><anchor>diffusers.ImagePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L118</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for image pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/ddpm.md" />

### Lumina2
https://huggingface.co/docs/diffusers/main/api/pipelines/lumina2.md

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->

# Lumina2

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

[Lumina Image 2.0: A Unified and Efficient Image Generative Model](https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0) is a 2 billion parameter flow-based diffusion transformer capable of generating diverse images from text descriptions.

The abstract from the paper is:

*We introduce Lumina-Image 2.0, an advanced text-to-image model that surpasses previous state-of-the-art methods across multiple benchmarks, while also shedding light on its potential to evolve into a generalist vision intelligence model. Lumina-Image 2.0 exhibits three key properties: (1) Unification – it adopts a unified architecture that treats text and image tokens as a joint sequence, enabling natural cross-modal interactions and facilitating task expansion. Besides, since high-quality captioners can provide semantically better-aligned text-image training pairs, we introduce a unified captioning system, UniCaptioner, which generates comprehensive and precise captions for the model. This not only accelerates model convergence but also enhances prompt adherence, variable-length prompt handling, and task generalization via prompt templates. (2) Efficiency – to improve the efficiency of the unified architecture, we develop a set of optimization techniques that improve semantic learning and fine-grained texture generation during training while incorporating inference-time acceleration strategies without compromising image quality. (3) Transparency – we open-source all training details, code, and models to ensure full reproducibility, aiming to bridge the gap between well-resourced closed-source research teams and independent developers.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## Using Single File loading with Lumina Image 2.0

Single file loading for Lumina Image 2.0 is available for the `Lumina2Transformer2DModel`.

```python
import torch
from diffusers import Lumina2Transformer2DModel, Lumina2Pipeline

ckpt_path = "https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0/blob/main/consolidated.00-of-01.pth"
transformer = Lumina2Transformer2DModel.from_single_file(
    ckpt_path, torch_dtype=torch.bfloat16
)

pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
image = pipe(
    "a cat holding a sign that says hello",
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("lumina-single-file.png")

```

## Using GGUF Quantized Checkpoints with Lumina Image 2.0

GGUF-quantized checkpoints for the `Lumina2Transformer2DModel` can be loaded via `from_single_file` with `GGUFQuantizationConfig`.

```python
import torch
from diffusers import Lumina2Transformer2DModel, Lumina2Pipeline, GGUFQuantizationConfig

ckpt_path = "https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q4_0.gguf"
transformer = Lumina2Transformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
image = pipe(
    "a cat holding a sign that says hello",
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("lumina-gguf.png")
```

## Lumina2Pipeline[[diffusers.Lumina2Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Lumina2Pipeline</name><anchor>diffusers.Lumina2Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L137</source><parameters>[{"name": "transformer", "val": ": Lumina2Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": Gemma2PreTrainedModel"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.gemma.tokenization_gemma.GemmaTokenizer, transformers.models.gemma.tokenization_gemma_fast.GemmaTokenizerFast]"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`Gemma2PreTrainedModel`) --
  Frozen Gemma2 text-encoder.
- **tokenizer** (`GemmaTokenizer` or `GemmaTokenizerFast`) --
  Gemma tokenizer.
- **transformer** ([Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel)) --
  A text conditioned `Transformer2DModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Lumina-T2I.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.Lumina2Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L524</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 30"}, {"name": "guidance_scale", "val": ": float = 4.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "system_prompt", "val": ": typing.Optional[str] = None"}, {"name": "cfg_trunc_ratio", "val": ": float = 1.0"}, {"name": "cfg_normalization", "val": ": bool = True"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_inference_steps** (`int`, *optional*, defaults to 30) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 4.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size) --
  The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **prompt_attention_mask** (`torch.Tensor`, *optional*) -- Pre-generated attention mask for text embeddings.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Lumina-T2I this negative prompt should be "". If not
  provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.
- **negative_prompt_attention_mask** (`torch.Tensor`, *optional*) --
  Pre-generated attention mask for negative text embeddings.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **attention_kwargs** --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **system_prompt** (`str`, *optional*) --
  The system prompt to use for the image generation.
- **cfg_trunc_ratio** (`float`, *optional*, defaults to `1.0`) --
  The ratio of the timestep interval to apply normalization-based guidance scale.
- **cfg_normalization** (`bool`, *optional*, defaults to `True`) --
  Whether to apply normalization-based guidance scale.
- **max_sequence_length** (`int`, defaults to `256`) --
  Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is
returned where the first element is a list with the generated images</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.Lumina2Pipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import Lumina2Pipeline

>>> pipe = Lumina2Pipeline.from_pretrained("Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16)
>>> # Enable memory optimizations.
>>> pipe.enable_model_cpu_offload()

>>> prompt = "Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. Background shows an industrial revolution cityscape with smoky skies and tall, metal structures"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>
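The snippet below is a hedged sketch of the additional generation controls documented above (`system_prompt`, `cfg_trunc_ratio`, `cfg_normalization`); the system prompt text and the truncation ratio are illustrative values, not recommended settings.

```py
import torch
from diffusers import Lumina2Pipeline

pipe = Lumina2Pipeline.from_pretrained("Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a cat holding a sign that says hello",
    system_prompt="You are an assistant that generates high-quality, prompt-aligned images.",  # illustrative text
    guidance_scale=4.0,
    cfg_trunc_ratio=0.25,    # apply guidance only over part of the timestep interval (illustrative value)
    cfg_normalization=True,  # keep normalization-based guidance enabled (the default)
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("lumina-system-prompt.png")
```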







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.Lumina2Pipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L444</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.Lumina2Pipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L471</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.Lumina2Pipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L431</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.Lumina2Pipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L457</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
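A minimal sketch of toggling the VAE memory helpers documented above directly on the pipeline:

```py
import torch
from diffusers import Lumina2Pipeline

pipe = Lumina2Pipeline.from_pretrained("Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16)

pipe.enable_vae_slicing()   # decode the batch one slice at a time to save memory
pipe.enable_vae_tiling()    # decode large images tile by tile

# ... run the pipeline ...

pipe.disable_vae_slicing()  # back to single-pass decoding
pipe.disable_vae_tiling()
```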


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.Lumina2Pipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L238</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "system_prompt", "val": ": typing.Optional[str] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For
  Lumina-T2I, this should be "".
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  whether to use classifier free guidance or not
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  number of images that should be generated per prompt
- **device** (`torch.device`, *optional*) --
  torch device to place the resulting embeddings on
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. For Lumina-T2I, this should be the embeddings of the "" string.
- **max_sequence_length** (`int`, defaults to `256`) --
  Maximum sequence length to use for the prompt.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
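A hedged sketch of pre-computing embeddings with `encode_prompt` and reusing them in the pipeline call, assuming the method returns the prompt embeddings, prompt attention mask, negative prompt embeddings, and negative prompt attention mask in that order:

```py
import torch
from diffusers import Lumina2Pipeline

pipe = Lumina2Pipeline.from_pretrained("Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

# Assumed return order: (prompt_embeds, prompt_attention_mask,
# negative_prompt_embeds, negative_prompt_attention_mask)
prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = pipe.encode_prompt(
    "a cat holding a sign that says hello",
    negative_prompt="",  # for Lumina-T2I the negative prompt should be ""
)

image = pipe(
    prompt_embeds=prompt_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_embeds=negative_prompt_embeds,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
).images[0]
image.save("lumina-precomputed-embeds.png")
```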




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/lumina2.md" />

### Wan
https://huggingface.co/docs/diffusers/main/api/pipelines/wan.md


<div style="float: right;">
  <div class="flex flex-wrap space-x-1">
    <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference" target="_blank" rel="noopener">
      <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
    </a>
  </div>
</div>

# Wan

[Wan-2.1](https://huggingface.co/papers/2503.20314) by the Wan Team.

*This report presents Wan, a comprehensive and open suite of video foundation models designed to push the boundaries of video generation. Built upon the mainstream diffusion transformer paradigm, Wan achieves significant advancements in generative capabilities through a series of innovations, including our novel VAE, scalable pre-training strategies, large-scale data curation, and automated evaluation metrics. These contributions collectively enhance the model's performance and versatility. Specifically, Wan is characterized by four key features: Leading Performance: The 14B model of Wan, trained on a vast dataset comprising billions of images and videos, demonstrates the scaling laws of video generation with respect to both data and model size. It consistently outperforms the existing open-source models as well as state-of-the-art commercial solutions across multiple internal and external benchmarks, demonstrating a clear and significant performance superiority. Comprehensiveness: Wan offers two capable models, i.e., 1.3B and 14B parameters, for efficiency and effectiveness respectively. It also covers multiple downstream applications, including image-to-video, instruction-guided video editing, and personal video generation, encompassing up to eight tasks. Consumer-Grade Efficiency: The 1.3B model demonstrates exceptional resource efficiency, requiring only 8.19 GB VRAM, making it compatible with a wide range of consumer-grade GPUs. Openness: We open-source the entire series of Wan, including source code and all models, with the goal of fostering the growth of the video generation community. This openness seeks to significantly expand the creative possibilities of video production in the industry and provide academia with high-quality video foundation models. All the code and models are available at [this https URL](https://github.com/Wan-Video/Wan2.1).*

You can find all the original Wan2.1 checkpoints under the [Wan-AI](https://huggingface.co/Wan-AI) organization.

The following Wan models are supported in Diffusers:

- [Wan 2.1 T2V 1.3B](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers)
- [Wan 2.1 T2V 14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B-Diffusers)
- [Wan 2.1 I2V 14B - 480P](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P-Diffusers)
- [Wan 2.1 I2V 14B - 720P](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P-Diffusers)
- [Wan 2.1 FLF2V 14B - 720P](https://huggingface.co/Wan-AI/Wan2.1-FLF2V-14B-720P-diffusers)
- [Wan 2.1 VACE 1.3B](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B-diffusers)
- [Wan 2.1 VACE 14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B-diffusers)
- [Wan 2.2 T2V 14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers)
- [Wan 2.2 I2V 14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers)
- [Wan 2.2 TI2V 5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)

> [!TIP]
> Click on the Wan models in the right sidebar for more examples of video generation.

### Text-to-Video Generation

The example below demonstrates how to generate a video from text optimized for memory or inference speed.

<hfoptions id="T2V usage">
<hfoption id="T2V memory">

Refer to the [Reduce memory usage](../../optimization/memory) guide for more details about the various memory saving techniques.

The Wan2.1 text-to-video model below requires ~13GB of VRAM.

```py
# pip install ftfy
import torch
import numpy as np
from diffusers import AutoModel, WanPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.hooks.group_offloading import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import UMT5EncoderModel

text_encoder = UMT5EncoderModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16)
vae = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
transformer = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)

# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
apply_group_offloading(text_encoder,
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=4
)
transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)

pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    vae=vae,
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

prompt = """
The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic 
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, 
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, 
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

</hfoption>
<hfoption id="T2V inference speed">

[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster. [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.

```py
# pip install ftfy
import torch
import numpy as np
from diffusers import AutoModel, WanPipeline
from diffusers.hooks.group_offloading import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import UMT5EncoderModel

text_encoder = UMT5EncoderModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16)
vae = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
transformer = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)

pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    vae=vae,
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

# torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)

prompt = """
The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic 
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, 
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, 
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
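As a hedged illustration of the caching idea mentioned above, the snippet below enables Pyramid Attention Broadcast through the generic `enable_cache` API and reuses the `pipeline` object from the snippet above. Whether these settings suit the Wan transformer is an assumption, and the skip ranges are illustrative rather than tuned values.

```py
from diffusers import PyramidAttentionBroadcastConfig

# Assumption: the Wan transformer supports the generic caching API (enable_cache)
# and the pipeline exposes the current timestep during denoising.
cache_config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,               # illustrative value
    spatial_attention_timestep_skip_range=(100, 800),   # illustrative range
    current_timestep_callback=lambda: pipeline.current_timestep,
)
pipeline.transformer.enable_cache(cache_config)
```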

</hfoption>
</hfoptions>

### First-Last-Frame-to-Video Generation

The example below demonstrates how to use the image-to-video pipeline to generate a video using a text description, a starting frame, and an ending frame.

<hfoptions id="FLF2V usage">
<hfoption id="usage">

```python
import numpy as np
import torch
import torchvision.transforms.functional as TF
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel


model_id = "Wan-AI/Wan2.1-FLF2V-14B-720P-diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

first_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png")
last_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png")

def aspect_ratio_resize(image, pipe, max_area=720 * 1280):
    aspect_ratio = image.height / image.width
    mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
    height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
    width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
    image = image.resize((width, height))
    return image, height, width

def center_crop_resize(image, height, width):
    # Calculate resize ratio to match first frame dimensions
    resize_ratio = max(width / image.width, height / image.height)

    # Resize the image
    width = round(image.width * resize_ratio)
    height = round(image.height * resize_ratio)
    size = [width, height]
    image = TF.center_crop(image, size)

    return image, height, width

first_frame, height, width = aspect_ratio_resize(first_frame, pipe)
if last_frame.size != first_frame.size:
    last_frame, _, _ = center_crop_resize(last_frame, height, width)

prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."

output = pipe(
    image=first_frame, last_image=last_frame, prompt=prompt, height=height, width=width, guidance_scale=5.5
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

</hfoption>
</hfoptions>

### Any-to-Video Controllable Generation

Wan VACE supports various generation techniques which achieve controllable video generation. Some of the capabilities include:
- Control to Video (Depth, Pose, Sketch, Flow, Grayscale, Scribble, Layout, Boundary Box, etc.). Recommended library for preprocessing videos to obtain control videos: [huggingface/controlnet_aux](https://github.com/huggingface/controlnet_aux)
- Image/Video to Video (first frame, last frame, starting clip, ending clip, random clips)
- Inpainting and Outpainting
- Subject to Video (faces, object, characters, etc.)
- Composition to Video (reference anything, animate anything, swap anything, expand anything, move anything, etc.)

The code snippets available in [this](https://github.com/huggingface/diffusers/pull/11582) pull request demonstrate some examples of how videos can be generated with controllability signals.

The general rule of thumb to keep in mind when preparing inputs for the VACE pipeline is that the input images, or frames of a video that you want to use for conditioning, should have a corresponding mask that is black in color. The black mask signifies that the model will not generate new content for that area, and only use those parts for conditioning the generation process. For parts/frames that should be generated by the model, the mask should be white in color.
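The sketch below is a hedged, minimal example of this convention for first-frame conditioning with `WanVACEPipeline`: frame 0 carries the conditioning image with a black mask, and the remaining frames are gray placeholders with white masks so the model generates them. The prompt, image URL, and settings are reused from the examples above; treat the exact call signature as an assumption and see the pull request above for complete recipes.

```py
import torch
import PIL.Image
from diffusers import AutoencoderKLWan, WanVACEPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image

model_id = "Wan-AI/Wan2.1-VACE-1.3B-diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanVACEPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=3.0)  # 3.0 for 480P
pipe.to("cuda")

height, width, num_frames = 480, 832, 81
first_frame = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png"
).resize((width, height))

# Frame 0 is conditioning content (black mask = keep); the rest are placeholders to generate (white mask).
video = [first_frame] + [PIL.Image.new("RGB", (width, height), (128, 128, 128))] * (num_frames - 1)
mask = [PIL.Image.new("L", (width, height), 0)] + [PIL.Image.new("L", (width, height), 255)] * (num_frames - 1)

prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings."

output = pipe(
    prompt=prompt,
    video=video,
    mask=mask,
    height=height,
    width=width,
    num_frames=num_frames,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```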

## Notes

- Wan2.1 supports LoRAs with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.WanLoraLoaderMixin.load_lora_weights).

  <details>
  <summary>Show example code</summary>

  ```py
  # pip install ftfy
  import torch
  from diffusers import AutoModel, WanPipeline
  from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
  from diffusers.utils import export_to_video

  vae = AutoModel.from_pretrained(
      "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
  )
  pipeline = WanPipeline.from_pretrained(
      "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
  )
  pipeline.scheduler = UniPCMultistepScheduler.from_config(
      pipeline.scheduler.config, flow_shift=5.0
  )
  pipeline.to("cuda")

  pipeline.load_lora_weights("benjamin-paine/steamboat-willie-1.3b", adapter_name="steamboat-willie")
  pipeline.set_adapters("steamboat-willie")

  pipeline.enable_model_cpu_offload()

  # use "steamboat willie style" to trigger the LoRA
  prompt = """
  steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot, 
  revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
  for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
  Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic 
  shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
  """

  output = pipeline(
      prompt=prompt,
      num_frames=81,
      guidance_scale=5.0,
  ).frames[0]
  export_to_video(output, "output.mp4", fps=16)
  ```

  </details>

- [WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel) and [AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan) support loading from single files with [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file).

  <details>
  <summary>Show example code</summary>

  ```py
  # pip install ftfy
  import torch
  from diffusers import WanPipeline, WanTransformer3DModel, AutoencoderKLWan

  vae = AutoencoderKLWan.from_single_file(
      "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors"
  )
  transformer = WanTransformer3DModel.from_single_file(
      "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/diffusion_models/wan2.1_t2v_1.3B_bf16.safetensors",
      torch_dtype=torch.bfloat16
  )
  pipeline = WanPipeline.from_pretrained(
      "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
      vae=vae,
      transformer=transformer,
      torch_dtype=torch.bfloat16
  )
  ```

  </details>

- Set the [AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan) dtype to `torch.float32` for better decoding quality.

- The number of frames in the generated video should be of the form `4 * k + 1` for some integer `k` (for example, 81); see the short sketch after this list.

- Try lower `shift` values (`2.0` to `5.0`) for lower resolution videos and higher `shift` values (`7.0` to `12.0`) for higher resolution videos.

- Wan 2.1 and 2.2 support using [LightX2V LoRAs](https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v) to speed up inference. Using them on Wan 2.2 is slightly more involved. Refer to [this code snippet](https://github.com/huggingface/diffusers/pull/12040#issuecomment-3144185272) to learn more.

- Wan 2.2 has two denoisers. By default, LoRAs are only loaded into the first denoiser. One can set `load_into_transformer_2=True` to load LoRAs into the second denoiser. Refer to [this example](https://github.com/huggingface/diffusers/pull/12074#issue-3292620048) and [this one](https://github.com/huggingface/diffusers/pull/12074#issuecomment-3155896144) to learn more.
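The frame-count note above can be illustrated with a tiny, hypothetical helper (not part of diffusers):

```py
def nearest_valid_num_frames(target_frames: int) -> int:
    # Hypothetical helper: snap a requested length to the 4 * k + 1 form Wan expects.
    k = max(round((target_frames - 1) / 4), 1)
    return 4 * k + 1

print(nearest_valid_num_frames(80))   # 81
print(nearest_valid_num_frames(120))  # 121
```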

## WanPipeline[[diffusers.WanPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WanPipeline</name><anchor>diffusers.WanPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan.py#L95</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "transformer", "val": ": typing.Optional[diffusers.models.transformers.transformer_wan.WanTransformer3DModel] = None"}, {"name": "transformer_2", "val": ": typing.Optional[diffusers.models.transformers.transformer_wan.WanTransformer3DModel] = None"}, {"name": "boundary_ratio", "val": ": typing.Optional[float] = None"}, {"name": "expand_timesteps", "val": ": bool = False"}]</parameters><paramsdesc>- **tokenizer** (`T5Tokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** ([WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel)) --
  Conditional Transformer to denoise the input latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **transformer_2** ([WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel), *optional*) --
  Conditional Transformer to denoise the input latents during the low-noise stage. If provided, enables
  two-stage denoising where `transformer` handles high-noise stages and `transformer_2` handles low-noise
  stages. If not provided, only `transformer` is used.
- **boundary_ratio** (`float`, *optional*, defaults to `None`) --
  Ratio of total timesteps to use as the boundary for switching between transformers in two-stage denoising.
  The actual boundary timestep is calculated as `boundary_ratio * num_train_timesteps`. When provided,
  `transformer` handles timesteps >= boundary_timestep and `transformer_2` handles timesteps <
  boundary_timestep. If `None`, only `transformer` is used for the entire denoising process.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-video generation using Wan.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WanPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan.py#L380</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 480"}, {"name": "width", "val": ": int = 832"}, {"name": "num_frames", "val": ": int = 81"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "guidance_scale_2", "val": ": typing.Optional[float] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, pass `prompt_embeds` instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to avoid during image generation. If not defined, pass `negative_prompt_embeds`
  instead. Ignored when not using guidance (`guidance_scale` < `1`).
- **height** (`int`, defaults to `480`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `832`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `81`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **guidance_scale_2** (`float`, *optional*, defaults to `None`) --
  Guidance scale for the low-noise stage transformer (`transformer_2`). If `None` and the pipeline's
  `boundary_ratio` is not None, uses the same value as `guidance_scale`. Only used when `transformer_2`
  and the pipeline's `boundary_ratio` are not None.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `WanPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `512`) --
  The maximum sequence length of the text encoder. If the prompt is longer than this, it will be
  truncated. If the prompt is shorter, it will be padded to this length.</paramsdesc><paramgroups>0</paramgroups><rettype>`~WanPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `WanPipelineOutput` is returned, otherwise a `tuple` is returned where
the first element is a list with the generated videos.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WanPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers.utils import export_to_video
>>> from diffusers import AutoencoderKLWan, WanPipeline
>>> from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

>>> # Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
>>> model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
>>> vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
>>> pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
>>> flow_shift = 5.0  # 5.0 for 720P, 3.0 for 480P
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe.to("cuda")

>>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."
>>> negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

>>> output = pipe(
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     height=720,
...     width=1280,
...     num_frames=81,
...     guidance_scale=5.0,
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=16)
```

</ExampleCodeBlock>
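The two-stage parameters documented above (`guidance_scale_2`, `boundary_ratio`, `transformer_2`) apply to the Wan 2.2 checkpoints. The sketch below is a hedged example, assuming the `Wan-AI/Wan2.2-T2V-A14B-Diffusers` repository already configures both denoisers and a `boundary_ratio` so the switch happens automatically; the guidance values are illustrative.

```py
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumption: the Wan 2.2 Diffusers checkpoint ships transformer, transformer_2, and a boundary_ratio.
pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

output = pipe(
    prompt="A cat and a dog baking a cake together in a kitchen.",
    num_frames=81,
    guidance_scale=4.0,    # high-noise stage (illustrative value)
    guidance_scale_2=3.0,  # low-noise stage, handled by transformer_2 (illustrative value)
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```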







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.WanPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan.py#L198</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  torch device
- **dtype** (`torch.dtype`, *optional*) --
  torch dtype</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## WanImageToVideoPipeline[[diffusers.WanImageToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WanImageToVideoPipeline</name><anchor>diffusers.WanImageToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L127</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "image_processor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModel = None"}, {"name": "transformer", "val": ": WanTransformer3DModel = None"}, {"name": "transformer_2", "val": ": WanTransformer3DModel = None"}, {"name": "boundary_ratio", "val": ": typing.Optional[float] = None"}, {"name": "expand_timesteps", "val": ": bool = False"}]</parameters><paramsdesc>- **tokenizer** (`T5Tokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **image_encoder** (`CLIPVisionModel`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModel), specifically
  the
  [clip-vit-huge-patch14](https://github.com/mlfoundations/open_clip/blob/main/docs/PRETRAINED.md#vit-h14-xlm-roberta-large)
  variant.
- **transformer** ([WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel)) --
  Conditional Transformer to denoise the input latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **transformer_2** ([WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel), *optional*) --
  Conditional Transformer to denoise the input latents during the low-noise stage. In two-stage denoising,
  `transformer` handles high-noise stages and `transformer_2` handles low-noise stages. If not provided, only
  `transformer` is used.
- **boundary_ratio** (`float`, *optional*, defaults to `None`) --
  Ratio of total timesteps to use as the boundary for switching between transformers in two-stage denoising.
  The actual boundary timestep is calculated as `boundary_ratio * num_train_timesteps`. When provided,
  `transformer` handles timesteps >= boundary_timestep and `transformer_2` handles timesteps <
  boundary_timestep. If `None`, only `transformer` is used for the entire denoising process.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-video generation using Wan.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WanImageToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L507</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 480"}, {"name": "width", "val": ": int = 832"}, {"name": "num_frames", "val": ": int = 81"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "guidance_scale_2", "val": ": typing.Optional[float] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "last_image", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  The input image to condition the generation on. Must be an image, a list of images or a `torch.Tensor`.
- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **height** (`int`, defaults to `480`) --
  The height of the generated video.
- **width** (`int`, defaults to `832`) --
  The width of the generated video.
- **num_frames** (`int`, defaults to `81`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **guidance_scale_2** (`float`, *optional*, defaults to `None`) --
  Guidance scale for the low-noise stage transformer (`transformer_2`). If `None` and the pipeline's
  `boundary_ratio` is not None, uses the same value as `guidance_scale`. Only used when `transformer_2`
  and the pipeline's `boundary_ratio` are not None.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `negative_prompt` input argument.
- **image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings. Can be used to easily tweak image inputs (weighting). If not provided,
  image embeddings are generated from the `image` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `WanPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `512`) --
  The maximum sequence length of the text encoder. If the prompt is longer than this, it will be
  truncated. If the prompt is shorter, it will be padded to this length.</paramsdesc><paramgroups>0</paramgroups><rettype>`~WanPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `WanPipelineOutput` is returned, otherwise a `tuple` is returned
whose first element is a list with the generated video frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WanImageToVideoPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> import numpy as np
>>> from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
>>> from diffusers.utils import export_to_video, load_image
>>> from transformers import CLIPVisionModel

>>> # Available models: Wan-AI/Wan2.1-I2V-14B-480P-Diffusers, Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
>>> model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
>>> image_encoder = CLIPVisionModel.from_pretrained(
...     model_id, subfolder="image_encoder", torch_dtype=torch.float32
... )
>>> vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
>>> pipe = WanImageToVideoPipeline.from_pretrained(
...     model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
... )
>>> pipe.to("cuda")

>>> image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"
... )
>>> # height and width must be multiples of the VAE spatial downscale factor times the transformer patch size
>>> max_area = 480 * 832
>>> aspect_ratio = image.height / image.width
>>> mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
>>> height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
>>> width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
>>> image = image.resize((width, height))
>>> prompt = (
...     "An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in "
...     "the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
... )
>>> negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

>>> output = pipe(
...     image=image,
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     height=height,
...     width=width,
...     num_frames=81,
...     guidance_scale=5.0,
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=16)
```

</ExampleCodeBlock>
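A minimal sketch of a `callback_on_step_end` function, assuming the `pipe`, `image`, `prompt`, `height`, and `width` objects created in the example above. The callback receives the pipeline, the step index, the current timestep, and a dict with the requested tensors, and must return that dict (optionally modified):

```python
# Hedged sketch: a step-end callback that just logs progress. Assumes `pipe`,
# `image`, and `prompt` from the example above.
def log_step(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    print(f"step {step}, timestep {timestep}, latents shape {tuple(latents.shape)}")
    return callback_kwargs  # must return the (possibly modified) dict


output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    callback_on_step_end=log_step,
    callback_on_step_end_tensor_inputs=["latents"],
).frames[0]
```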







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.WanImageToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L251</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
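A minimal sketch of calling `encode_prompt` directly to pre-compute embeddings and reuse them across calls, assuming the pipeline and image loaded in the example above; `encode_prompt` returns a `(prompt_embeds, negative_prompt_embeds)` pair:

```python
# Hedged sketch: pre-compute text embeddings once, then pass them to __call__
# instead of raw prompts. Assumes `pipe` and `image` from the example above.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="An astronaut hatching from an egg on the surface of the moon",
    negative_prompt="blurry, low quality, static",
    do_classifier_free_guidance=True,
    num_videos_per_prompt=1,
    device="cuda",
)

output = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
).frames[0]
```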




</div></div>

## WanVACEPipeline[[diffusers.WanVACEPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WanVACEPipeline</name><anchor>diffusers.WanVACEPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_vace.py#L141</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "transformer", "val": ": WanVACETransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "transformer_2", "val": ": WanVACETransformer3DModel = None"}, {"name": "boundary_ratio", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tokenizer** (`T5Tokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** (`WanVACETransformer3DModel`) --
  Conditional Transformer to denoise the input latents.
- **transformer_2** (`WanVACETransformer3DModel`, *optional*) --
  Conditional Transformer to denoise the input latents during the low-noise stage. In two-stage denoising,
  `transformer` handles high-noise stages and `transformer_2` handles low-noise stages. If not provided, only
  `transformer` is used.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
- **boundary_ratio** (`float`, *optional*, defaults to `None`) --
  Ratio of total timesteps to use as the boundary for switching between transformers in two-stage denoising.
  The actual boundary timestep is calculated as `boundary_ratio * num_train_timesteps`. When provided,
  `transformer` handles timesteps >= boundary_timestep and `transformer_2` handles timesteps <
  boundary_timestep. If `None`, only `transformer` is used for the entire denoising process (see the sketch below the class description).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for controllable generation using Wan.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).
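When a checkpoint provides `transformer_2` and a configured `boundary_ratio` (see the parameters above), denoising is split into a high-noise and a low-noise stage. A hedged sketch of how the boundary works and how a separate guidance scale is passed for the second stage; the numbers are illustrative, and `pipe`/`prompt` are assumed to be set up as in the example further below:

```python
# Illustrative only: how the switch point between the two transformers is derived.
boundary_ratio = 0.9
num_train_timesteps = 1000
boundary_timestep = boundary_ratio * num_train_timesteps  # 900.0
# Timesteps >= 900 are denoised by `transformer` (high noise),
# timesteps < 900 by `transformer_2` (low noise).

output = pipe(
    prompt=prompt,
    guidance_scale=5.0,    # used for the high-noise stage
    guidance_scale_2=3.0,  # used for the low-noise stage; falls back to guidance_scale if None
).frames[0]
```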





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WanVACEPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_vace.py#L671</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "video", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "mask", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "reference_images", "val": ": typing.Optional[typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "conditioning_scale", "val": ": typing.Union[float, typing.List[float], torch.Tensor] = 1.0"}, {"name": "height", "val": ": int = 480"}, {"name": "width", "val": ": int = 832"}, {"name": "num_frames", "val": ": int = 81"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "guidance_scale_2", "val": ": typing.Optional[float] = None"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **video** (`List[PIL.Image.Image]`, *optional*) --
  The input video or videos to be used as a starting point for the generation. The video should be a list
  of PIL images, a numpy array, or a torch tensor. Currently, the pipeline only supports generating one
  video at a time.
- **mask** (`List[PIL.Image.Image]`, *optional*) --
  The input mask defines which video regions to condition on and which to generate. Black areas in the
  mask indicate conditioning regions, while white areas indicate regions for generation. The mask should
  be a list of PIL images, a numpy array, or a torch tensor. Currently supports generating a single video
  at a time.
- **reference_images** (`List[PIL.Image.Image]`, *optional*) --
  A list of one or more reference images as extra conditioning for the generation. For example, if you
  are trying to inpaint a video to change the character, you can pass reference images of the new
  character here. Refer to the Diffusers [examples](https://github.com/huggingface/diffusers/pull/11582)
  and original [user
  guide](https://github.com/ali-vilab/VACE/blob/0897c6d055d7d9ea9e191dce763006664d9780f8/UserGuide.md)
  for a full list of supported tasks and use cases. A short sketch follows the example below.
- **conditioning_scale** (`float`, `List[float]`, `torch.Tensor`, defaults to `1.0`) --
  The conditioning scale to be applied when adding the control conditioning latent stream to the
  denoising latent stream in each control layer of the model. If a float is provided, it will be applied
  uniformly to all layers. If a list or tensor is provided, it should have the same length as the number
  of control layers in the model (`len(transformer.config.vace_layers)`).
- **height** (`int`, defaults to `480`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `832`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `81`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **guidance_scale_2** (`float`, *optional*, defaults to `None`) --
  Guidance scale for the low-noise stage transformer (`transformer_2`). If `None` and the pipeline's
  `boundary_ratio` is not None, uses the same value as `guidance_scale`. Only used when `transformer_2`
  and the pipeline's `boundary_ratio` are not None.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `WanPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `512`) --
  The maximum sequence length of the text encoder. If the prompt is longer than this, it will be
  truncated. If the prompt is shorter, it will be padded to this length.</paramsdesc><paramgroups>0</paramgroups><rettype>`~WanPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `WanPipelineOutput` is returned, otherwise a `tuple` is returned
whose first element is a list with the generated video frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WanVACEPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> import PIL.Image
>>> from diffusers import AutoencoderKLWan, WanVACEPipeline
>>> from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
>>> from diffusers.utils import export_to_video, load_image
>>> def prepare_video_and_mask(first_img: PIL.Image.Image, last_img: PIL.Image.Image, height: int, width: int, num_frames: int):
...     first_img = first_img.resize((width, height))
...     last_img = last_img.resize((width, height))
...     frames = []
...     frames.append(first_img)
...     # Ideally, this should be 127.5 to match original code, but they perform computation on numpy arrays
...     # whereas we are passing PIL images. If you choose to pass numpy arrays, you can set it to 127.5 to
...     # match the original code.
...     frames.extend([PIL.Image.new("RGB", (width, height), (128, 128, 128))] * (num_frames - 2))
...     frames.append(last_img)
...     mask_black = PIL.Image.new("L", (width, height), 0)
...     mask_white = PIL.Image.new("L", (width, height), 255)
...     mask = [mask_black, *[mask_white] * (num_frames - 2), mask_black]
...     return frames, mask

>>> # Available checkpoints: Wan-AI/Wan2.1-VACE-1.3B-diffusers, Wan-AI/Wan2.1-VACE-14B-diffusers
>>> model_id = "Wan-AI/Wan2.1-VACE-1.3B-diffusers"
>>> vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
>>> pipe = WanVACEPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
>>> flow_shift = 3.0  # 5.0 for 720P, 3.0 for 480P
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe.to("cuda")

>>> prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."
>>> negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
>>> first_frame = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png"
... )
>>> last_frame = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png>>> "
... )

>>> height = 512
>>> width = 512
>>> num_frames = 81
>>> video, mask = prepare_video_and_mask(first_frame, last_frame, height, width, num_frames)

>>> output = pipe(
...     video=video,
...     mask=mask,
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     height=height,
...     width=width,
...     num_frames=num_frames,
...     num_inference_steps=30,
...     guidance_scale=5.0,
...     generator=torch.Generator().manual_seed(42),
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=16)
```

</ExampleCodeBlock>
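As referenced in the `reference_images` description above, reference images can be passed as extra conditioning. A hedged sketch that reuses the `video`, `mask`, and prompt variables built in the example above; the reference image path is a placeholder:

```python
# Hedged sketch: add a reference image as extra conditioning (path is a placeholder).
ref = load_image("path/to/new_character.png")

output = pipe(
    video=video,
    mask=mask,
    reference_images=[ref],
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_frames=num_frames,
    guidance_scale=5.0,
    conditioning_scale=1.0,  # or a per-layer list of length len(pipe.transformer.config.vace_layers)
).frames[0]
```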







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.WanVACEPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_vace.py#L244</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## WanVideoToVideoPipeline[[diffusers.WanVideoToVideoPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WanVideoToVideoPipeline</name><anchor>diffusers.WanVideoToVideoPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_video2video.py#L174</source><parameters>[{"name": "tokenizer", "val": ": AutoTokenizer"}, {"name": "text_encoder", "val": ": UMT5EncoderModel"}, {"name": "transformer", "val": ": WanTransformer3DModel"}, {"name": "vae", "val": ": AutoencoderKLWan"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}]</parameters><paramsdesc>- **tokenizer** (`T5Tokenizer`) --
  Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer),
  specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **text_encoder** (`T5EncoderModel`) --
  [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically
  the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant.
- **transformer** ([WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel)) --
  Conditional Transformer to denoise the input latents.
- **scheduler** ([UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan)) --
  Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for video-to-video generation using Wan.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.WanVideoToVideoPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_video2video.py#L479</source><parameters>[{"name": "video", "val": ": typing.List[PIL.Image.Image] = None"}, {"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": int = 480"}, {"name": "width", "val": ": int = 832"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_videos_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'np'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 512"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **height** (`int`, defaults to `480`) --
  The height in pixels of the generated image.
- **width** (`int`, defaults to `832`) --
  The width in pixels of the generated image.
- **num_frames** (`int`, defaults to `81`) --
  The number of frames in the generated video.
- **num_inference_steps** (`int`, defaults to `50`) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, defaults to `5.0`) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **strength** (`float`, defaults to `0.8`) --
  Higher strength leads to more differences between the original video and the generated video (see the
  sketch after the example below).
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of videos to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"np"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `WanPipelineOutput` instead of a plain tuple.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to `512`) --
  The maximum sequence length of the text encoder. If the prompt is longer than this, it will be
  truncated. If the prompt is shorter, it will be padded to this length.</paramsdesc><paramgroups>0</paramgroups><rettype>`~WanPipelineOutput` or `tuple`</rettype><retdesc>If `return_dict` is `True`, `WanPipelineOutput` is returned, otherwise a `tuple` is returned
whose first element is a list with the generated video frames.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.WanVideoToVideoPipeline.__call__.example">

Examples:
```python
>>> import torch
>>> from diffusers.utils import export_to_video, load_video
>>> from diffusers import AutoencoderKLWan, WanVideoToVideoPipeline
>>> from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

>>> # Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
>>> model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
>>> vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
>>> pipe = WanVideoToVideoPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
>>> flow_shift = 3.0  # 5.0 for 720P, 3.0 for 480P
>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
>>> pipe.to("cuda")

>>> prompt = "A robot standing on a mountain top. The sun is setting in the background"
>>> negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
>>> video = load_video(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hiker.mp4"
... )
>>> output = pipe(
...     video=video,
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     height=480,
...     width=720,
...     guidance_scale=5.0,
...     strength=0.7,
... ).frames[0]
>>> export_to_video(output, "output.mp4", fps=16)
```

</ExampleCodeBlock>
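The `strength` argument (see the parameter description above) controls how much of the denoising schedule is applied on top of the input video. An illustrative sketch of the usual diffusers pattern for truncating the schedule; the exact implementation in this pipeline may differ slightly:

```python
# Illustrative only: how `strength` typically maps to the number of steps that run.
num_inference_steps = 50
strength = 0.7
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)  # 35
t_start = max(num_inference_steps - init_timestep, 0)                          # 15
# The input video is noised to the level of the timestep at index `t_start`, and only
# the remaining 35 of the 50 scheduled steps are run, so a lower strength preserves
# more of the original video.
```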







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.WanVideoToVideoPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_video2video.py#L264</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "num_videos_per_prompt", "val": ": int = 1"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "max_sequence_length", "val": ": int = 226"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **do_classifier_free_guidance** (`bool`, *optional*, defaults to `True`) --
  Whether to use classifier free guidance or not.
- **num_videos_per_prompt** (`int`, *optional*, defaults to 1) --
  Number of videos that should be generated per prompt.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **device** (`torch.device`, *optional*) --
  The torch device to place the resulting embeddings on.
- **dtype** (`torch.dtype`, *optional*) --
  The torch dtype of the resulting embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## WanPipelineOutput[[diffusers.pipelines.wan.pipeline_output.WanPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.wan.pipeline_output.WanPipelineOutput</name><anchor>diffusers.pipelines.wan.pipeline_output.WanPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_output.py#L9</source><parameters>[{"name": "frames", "val": ": Tensor"}]</parameters><paramsdesc>- **frames** (`torch.Tensor`, `np.ndarray`, or List[List[PIL.Image.Image]]) --
  List of video outputs - It can be a nested list of length `batch_size,` with each sub-list containing
  denoised PIL image sequences of length `num_frames.` It can also be a NumPy array or Torch tensor of shape
  `(batch_size, num_frames, channels, height, width)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Wan pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/wan.md" />

### BLIP-Diffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/blip_diffusion.md

# BLIP-Diffusion

BLIP-Diffusion was proposed in [BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing](https://huggingface.co/papers/2305.14720). It enables zero-shot subject-driven generation and control-guided zero-shot generation.


The abstract from the paper is:

*Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at [this https URL](https://dxli94.github.io/BLIP-Diffusion-website/).*

The original codebase can be found at [salesforce/LAVIS](https://github.com/salesforce/LAVIS/tree/main/projects/blip-diffusion). You can find the official BLIP-Diffusion checkpoints under the [hf.co/Salesforce](https://hf.co/Salesforce) organization.

`BlipDiffusionPipeline` and `BlipDiffusionControlNetPipeline` were contributed by [`ayushtues`](https://github.com/ayushtues/).

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.


## BlipDiffusionPipeline[[diffusers.BlipDiffusionPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.BlipDiffusionPipeline</name><anchor>diffusers.BlipDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/blip_diffusion/pipeline_blip_diffusion.py#L80</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": ContextCLIPTextModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": PNDMScheduler"}, {"name": "qformer", "val": ": Blip2QFormerModel"}, {"name": "image_processor", "val": ": BlipImageProcessor"}, {"name": "ctx_begin_pos", "val": ": int = 2"}, {"name": "mean", "val": ": typing.List[float] = None"}, {"name": "std", "val": ": typing.List[float] = None"}]</parameters><paramsdesc>- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer for the text encoder
- **text_encoder** (`ContextCLIPTextModel`) --
  Text encoder to encode the text prompt
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  VAE model to map the latents to the image
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **scheduler** ([PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **qformer** (`Blip2QFormerModel`) --
  QFormer model to get multi-modal embeddings from the text and image.
- **image_processor** (`BlipImageProcessor`) --
  Image Processor to preprocess and postprocess the image.
- **ctx_begin_pos** (`int`, *optional*, defaults to 2) --
  Position of the context token in the text encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.BlipDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/blip_diffusion/pipeline_blip_diffusion.py#L192</source><parameters>[{"name": "prompt", "val": ": typing.List[str]"}, {"name": "reference_image", "val": ": Image"}, {"name": "source_subject_category", "val": ": typing.List[str]"}, {"name": "target_subject_category", "val": ": typing.List[str]"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "neg_prompt", "val": ": typing.Optional[str] = ''"}, {"name": "prompt_strength", "val": ": float = 1.0"}, {"name": "prompt_reps", "val": ": int = 20"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`List[str]`) --
  The prompt or prompts to guide the image generation.
- **reference_image** (`PIL.Image.Image`) --
  The reference image to condition the generation on.
- **source_subject_category** (`List[str]`) --
  The source subject category.
- **target_subject_category** (`List[str]`) --
  The target subject category.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by random sampling.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **height** (`int`, *optional*, defaults to 512) --
  The height of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **neg_prompt** (`str`, *optional*, defaults to "") --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_strength** (`float`, *optional*, defaults to 1.0) --
  The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps
  to amplify the prompt.
- **prompt_reps** (`int`, *optional*, defaults to 20) --
  The number of times the prompt is repeated along with prompt_strength to amplify the prompt.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
  (`np.array`) or `"pt"` (`torch.Tensor`).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.BlipDiffusionPipeline.__call__.example">

Examples:
```py
>>> from diffusers.pipelines import BlipDiffusionPipeline
>>> from diffusers.utils import load_image
>>> import torch

>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained(
...     "Salesforce/blipdiffusion", torch_dtype=torch.float16
... ).to("cuda")


>>> cond_subject = "dog"
>>> tgt_subject = "dog"
>>> text_prompt_input = "swimming underwater"

>>> cond_image = load_image(
...     "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg"
... )
>>> guidance_scale = 7.5
>>> num_inference_steps = 25
>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"


>>> output = blip_diffusion_pipe(
...     text_prompt_input,
...     cond_image,
...     cond_subject,
...     tgt_subject,
...     guidance_scale=guidance_scale,
...     num_inference_steps=num_inference_steps,
...     neg_prompt=negative_prompt,
...     height=512,
...     width=512,
... ).images
>>> output[0].save("image.png")
```

</ExampleCodeBlock>







</div></div>

## BlipDiffusionControlNetPipeline[[diffusers.BlipDiffusionControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.BlipDiffusionControlNetPipeline</name><anchor>diffusers.BlipDiffusionControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_blip_diffusion.py#L87</source><parameters>[{"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder", "val": ": ContextCLIPTextModel"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": PNDMScheduler"}, {"name": "qformer", "val": ": Blip2QFormerModel"}, {"name": "controlnet", "val": ": ControlNetModel"}, {"name": "image_processor", "val": ": BlipImageProcessor"}, {"name": "ctx_begin_pos", "val": ": int = 2"}, {"name": "mean", "val": ": typing.List[float] = None"}, {"name": "std", "val": ": typing.List[float] = None"}]</parameters><paramsdesc>- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer for the text encoder
- **text_encoder** (`ContextCLIPTextModel`) --
  Text encoder to encode the text prompt
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  VAE model to map the latents to the image
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  Conditional U-Net architecture to denoise the image embedding.
- **scheduler** ([PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler)) --
  A scheduler to be used in combination with `unet` to generate image latents.
- **qformer** (`Blip2QFormerModel`) --
  QFormer model to get multi-modal embeddings from the text and image.
- **controlnet** ([ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel)) --
  ControlNet model to get the conditioning image embedding.
- **image_processor** (`BlipImageProcessor`) --
  Image Processor to preprocess and postprocess the image.
- **ctx_begin_pos** (`int`, *optional*, defaults to 2) --
  Position of the context token in the text encoder.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.BlipDiffusionControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet/pipeline_controlnet_blip_diffusion.py#L240</source><parameters>[{"name": "prompt", "val": ": typing.List[str]"}, {"name": "reference_image", "val": ": Image"}, {"name": "condtioning_image", "val": ": Image"}, {"name": "source_subject_category", "val": ": typing.List[str]"}, {"name": "target_subject_category", "val": ": typing.List[str]"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "height", "val": ": int = 512"}, {"name": "width", "val": ": int = 512"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "neg_prompt", "val": ": typing.Optional[str] = ''"}, {"name": "prompt_strength", "val": ": float = 1.0"}, {"name": "prompt_reps", "val": ": int = 20"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **prompt** (`List[str]`) --
  The prompt or prompts to guide the image generation.
- **reference_image** (`PIL.Image.Image`) --
  The reference image to condition the generation on.
- **condtioning_image** (`PIL.Image.Image`) --
  The conditioning canny edge image to condition the generation on.
- **source_subject_category** (`List[str]`) --
  The source subject category.
- **target_subject_category** (`List[str]`) --
  The target subject category.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by random sampling.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **height** (`int`, *optional*, defaults to 512) --
  The height of the generated image.
- **width** (`int`, *optional*, defaults to 512) --
  The width of the generated image.
- **seed** (`int`, *optional*, defaults to 42) --
  The seed to use for random generation.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **neg_prompt** (`str`, *optional*, defaults to "") --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_strength** (`float`, *optional*, defaults to 1.0) --
  The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps
  to amplify the prompt.
- **prompt_reps** (`int`, *optional*, defaults to 20) --
  The number of times the prompt is repeated along with prompt_strength to amplify the prompt.</paramsdesc><paramgroups>0</paramgroups><rettype>[ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`</rettype></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.BlipDiffusionControlNetPipeline.__call__.example">

Examples:
```py
>>> from diffusers.pipelines import BlipDiffusionControlNetPipeline
>>> from diffusers.utils import load_image
>>> from controlnet_aux import CannyDetector
>>> import torch

>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained(
...     "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16
... ).to("cuda")

>>> style_subject = "flower"
>>> tgt_subject = "teapot"
>>> text_prompt = "on a marble table"

>>> cldm_cond_image = load_image(
...     "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg"
... ).resize((512, 512))
>>> canny = CannyDetector()
>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil")
>>> style_image = load_image(
...     "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
... )
>>> guidance_scale = 7.5
>>> num_inference_steps = 50
>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"


>>> output = blip_diffusion_pipe(
...     text_prompt,
...     style_image,
...     cldm_cond_image,
...     style_subject,
...     tgt_subject,
...     guidance_scale=guidance_scale,
...     num_inference_steps=num_inference_steps,
...     neg_prompt=negative_prompt,
...     height=512,
...     width=512,
... ).images
>>> output[0].save("image.png")
```

</ExampleCodeBlock>







</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/blip_diffusion.md" />

### OmniGen
https://huggingface.co/docs/diffusers/main/api/pipelines/omnigen.md


# OmniGen

[OmniGen: Unified Image Generation](https://huggingface.co/papers/2409.11340) from BAAI, by Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, Zheng Liu.

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language  generation tasks and revolutionized human-machine interaction.  However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism.  This work represents the first attempt at a general-purpose image generation model,  and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

This pipeline was contributed by [staoxiao](https://github.com/staoxiao). The original codebase can be found [here](https://github.com/VectorSpaceLab/OmniGen). The original weights can be found under [hf.co/shitao](https://huggingface.co/Shitao/OmniGen-v1).

## Inference

First, load the pipeline:

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
```
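
If GPU memory is tight, you can usually trade speed for memory by offloading components to the CPU instead of moving the whole pipeline to the GPU. A minimal sketch, assuming the generic `DiffusionPipeline` offloading helper is supported by this pipeline:

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
# Instead of `pipe.to("cuda")`: move each component to the GPU only while it is in use.
# This is the generic DiffusionPipeline helper, shown here purely as a memory-saving option.
pipe.enable_model_cpu_offload()
```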

For text-to-image generation, pass a text prompt. By default, OmniGen generates a 1024x1024 image.
You can set the `height` and `width` parameters to generate images of a different size.

```python
prompt = "Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=3,
    generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image.save("output.png")
```

OmniGen supports multimodal inputs. 
When the input includes an image, you need to add a placeholder `<img><|image_1|></img>` in the text prompt to represent the image. 
It is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original image.

```python
prompt="<img><|image_1|></img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola."
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
    prompt=prompt, 
    input_images=input_images, 
    guidance_scale=2, 
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(222)).images[0]
image.save("output.png")
```
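
The same placeholder mechanism extends to several input images: the i-th `<|image_i|>` token in the prompt is paired with the i-th entry of `input_images`. The sketch below only illustrates the pattern; the image paths are placeholders to replace with your own files.

```python
from diffusers.utils import load_image

# Each <|image_i|> placeholder (1-indexed) maps to the corresponding entry in `input_images`.
prompt = "A man from <img><|image_1|></img> is waving at a woman from <img><|image_2|></img> in a park."
input_images = [
    load_image("person_1.png"),  # placeholder path
    load_image("person_2.png"),  # placeholder path
]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    guidance_scale=2.5,
    img_guidance_scale=1.6,
    generator=torch.Generator(device="cpu").manual_seed(0),
).images[0]
image.save("multi_image_output.png")
```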

## OmniGenPipeline[[diffusers.OmniGenPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.OmniGenPipeline</name><anchor>diffusers.OmniGenPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L119</source><parameters>[{"name": "transformer", "val": ": OmniGenTransformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "tokenizer", "val": ": LlamaTokenizer"}]</parameters><paramsdesc>- **transformer** ([OmniGenTransformer2DModel](/docs/diffusers/main/en/api/models/omnigen_transformer#diffusers.OmniGenTransformer2DModel)) --
  Autoregressive Transformer architecture for OmniGen.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **tokenizer** (`LlamaTokenizer`) --
  Text tokenizer of class
  [LlamaTokenizer](https://huggingface.co/docs/transformers/main/model_doc/llama#transformers.LlamaTokenizer).</paramsdesc><paramgroups>0</paramgroups></docstring>

The OmniGen pipeline for multimodal-to-image generation.

Reference: https://huggingface.co/papers/2409.11340





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.OmniGenPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L330</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "input_images", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], typing.List[typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "max_input_image_size", "val": ": int = 1024"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "guidance_scale", "val": ": float = 2.5"}, {"name": "img_guidance_scale", "val": ": float = 1.6"}, {"name": "use_input_image_size_as_output", "val": ": bool = False"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If the input includes images, you need to add
  placeholders `<img><|image_i|></img>` in the prompt to indicate the position of the i-th image.
- **input_images** (`PipelineImageInput` or `List[PipelineImageInput]`, *optional*) --
  The list of input images. Each `<|image_i|>` placeholder in the prompt is replaced with the i-th image
  in the list.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **max_input_image_size** (`int`, *optional*, defaults to 1024) --
  The maximum size of the input image; larger input images will be cropped to this maximum size.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 2.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **img_guidance_scale** (`float`, *optional*, defaults to 1.6) --
  Defined as equation 3 in [InstructPix2Pix](https://huggingface.co/papers/2211.09800).
- **use_input_image_size_as_output** (`bool`, *optional*, defaults to `False`) --
  Whether to use the input image size as the output image size. This is useful for single-image inputs,
  e.g., image editing tasks.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return an [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) instead of a plain tuple.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.OmniGenPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import OmniGenPipeline

>>> pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=50, guidance_scale=2.5).images[0]
>>> image.save("output.png")
```

</ExampleCodeBlock>


Returns: [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) or `tuple`:
If `return_dict` is `True`, [ImagePipelineOutput](/docs/diffusers/main/en/api/pipelines/dit#diffusers.ImagePipelineOutput) is returned, otherwise a `tuple` is returned
where the first element is a list with the generated images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.OmniGenPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L246</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.OmniGenPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L273</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.OmniGenPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L233</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.OmniGenPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L259</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
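
If decoding large images runs out of memory, this can be toggled directly on a loaded pipeline, typically together with sliced decoding. A minimal sketch, assuming the checkpoint from the Inference section above:

```py
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Reduce VAE decode memory; undo with disable_vae_slicing() / disable_vae_tiling().
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```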


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_input_images</name><anchor>diffusers.OmniGenPipeline.encode_input_images</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/omnigen/pipeline_omnigen.py#L171</source><parameters>[{"name": "input_pixel_values", "val": ": typing.List[torch.Tensor]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "dtype", "val": ": typing.Optional[torch.dtype] = None"}]</parameters><paramsdesc>- **input_pixel_values** -- Normalized pixel values of the input images.
- **device** -- Torch device.</paramsdesc><paramgroups>0</paramgroups></docstring>

Get the continuous embeddings of the input images from the VAE.



Returns: torch.Tensor


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/omnigen.md" />

### ControlNet with Stable Diffusion 3
https://huggingface.co/docs/diffusers/main/api/pipelines/controlnet_sd3.md

# ControlNet with Stable Diffusion 3

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

StableDiffusion3ControlNetPipeline is an implementation of ControlNet for Stable Diffusion 3.

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This controlnet code is mainly implemented by [The InstantX Team](https://huggingface.co/InstantX). The inpainting-related code was developed by [The Alimama Creative Team](https://huggingface.co/alimama-creative). You can find pre-trained checkpoints for SD3-ControlNet in the table below:


| ControlNet type | Developer | Link |
| -------- | ---------- | ---- |
| Canny | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Canny) |
| Depth | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Depth) |
| Pose | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Pose) |
| Tile | [The InstantX Team](https://huggingface.co/InstantX) | [Link](https://huggingface.co/InstantX/SD3-Controlnet-Tile) |
| Inpainting | [The Alimama Creative Team](https://huggingface.co/alimama-creative) | [Link](https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting) |
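
Every checkpoint in the table loads the same way; only the control image differs. The sketch below uses the Depth checkpoint as an example (the depth map path is a placeholder, and producing the depth map with an estimator of your choice is left out); the Canny example in the `__call__` documentation further below follows the same pattern.

```py
import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel
from diffusers.utils import load_image

# Sketch: swap in any checkpoint from the table above; shown here with the Depth variant.
controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Depth", torch_dtype=torch.float16)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

depth_map = load_image("depth_map.png")  # placeholder: supply your own depth map
image = pipe(
    "A photorealistic living room with warm evening light",
    control_image=depth_map,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("sd3_depth.png")
```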


> [!TIP]
> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

## StableDiffusion3ControlNetPipeline[[diffusers.StableDiffusion3ControlNetPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusion3ControlNetPipeline</name><anchor>diffusers.StableDiffusion3ControlNetPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L143</source><parameters>[{"name": "transformer", "val": ": SD3Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "text_encoder_3", "val": ": T5EncoderModel"}, {"name": "tokenizer_3", "val": ": T5TokenizerFast"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_sd3.SD3ControlNetModel, typing.List[diffusers.models.controlnets.controlnet_sd3.SD3ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet_sd3.SD3ControlNetModel], diffusers.models.controlnets.controlnet_sd3.SD3MultiControlNetModel]"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.siglip.modeling_siglip.SiglipVisionModel] = None"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.siglip.image_processing_siglip.SiglipImageProcessor] = None"}]</parameters><paramsdesc>- **transformer** ([SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant,
  with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size`
  as its dimension.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **text_encoder_3** (`T5EncoderModel`) --
  Frozen text-encoder. Stable Diffusion 3 uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_3** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **controlnet** ([SD3ControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sd3#diffusers.SD3ControlNetModel) or `List[SD3ControlNetModel]` or `SD3MultiControlNetModel`) --
  Provides additional conditioning to the `transformer` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **image_encoder** (`SiglipVisionModel`, *optional*) --
  Pre-trained Vision Model for IP Adapter.
- **feature_extractor** (`SiglipImageProcessor`, *optional*) --
  Image processor for IP Adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusion3ControlNetPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L818</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "controlnet_pooled_projections", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) --
  The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
  specified as `torch.Tensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be accepted
  as an image. The dimensions of the output image defaults to `image`'s dimensions. If height and/or
  width are passed, `image` is resized accordingly. If multiple ControlNets are specified in `init`,
  images must be passed as a list such that each element of the list can be correctly batched for input
  to a single ControlNet.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **controlnet_pooled_projections** (`torch.FloatTensor` of shape `(batch_size, projection_dim)`) --
  Embeddings projected from the embeddings of controlnet input conditions.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
  emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
  `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead
  of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int` defaults to 256) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusion3ControlNetPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusion3ControlNetPipeline
>>> from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
>>> from diffusers.utils import load_image

>>> controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16)

>>> pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet, torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> control_image = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png"
... )
>>> prompt = "A bird in space"
>>> image = pipe(
...     prompt, control_image=control_image, height=1024, width=768, controlnet_conditioning_scale=0.7
... ).images[0]
>>> image.save("sd3.png")
```

</ExampleCodeBlock>
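
The `controlnet_conditioning_scale`, `control_guidance_start`, and `control_guidance_end` arguments documented above can be combined to weaken or shorten the ControlNet's influence. A hedged sketch, continuing directly from the example above and reusing its `pipe` and `control_image`:

```py
# Apply the ControlNet only for the first 80% of the denoising steps, at reduced strength.
image = pipe(
    "A bird in space",
    control_image=control_image,
    controlnet_conditioning_scale=0.5,
    control_guidance_start=0.0,
    control_guidance_end=0.8,
).images[0]
image.save("sd3_weak_control.png")
```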







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_image</name><anchor>diffusers.StableDiffusion3ControlNetPipeline.encode_image</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L741</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "device", "val": ": device"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  Input image to be encoded.
- **device** -- (`torch.device`):
  Torch device.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The encoded image feature representation.</retdesc></docstring>
Encodes the given image into a feature representation using a pre-trained image encoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusion3ControlNetPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L364</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
  used in all text-encoders
- **device** -- (`torch.device`):
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_ip_adapter_image_embeds</name><anchor>diffusers.StableDiffusion3ControlNetPipeline.prepare_ip_adapter_image_embeds</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py#L761</source><parameters>[{"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}]</parameters><paramsdesc>- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  The input image to extract features from for IP-Adapter.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Precomputed image embeddings.
- **device** -- (`torch.device`, *optional*):
  Torch device.
- **num_images_per_prompt** (`int`, defaults to 1) --
  Number of images that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`, defaults to True) --
  Whether to use classifier free guidance or not.</paramsdesc><paramgroups>0</paramgroups></docstring>
Prepares image embeddings for use in the IP-Adapter.

Either `ip_adapter_image` or `ip_adapter_image_embeds` must be passed.




</div></div>

## StableDiffusion3ControlNetInpaintingPipeline[[diffusers.StableDiffusion3ControlNetInpaintingPipeline]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusion3ControlNetInpaintingPipeline</name><anchor>diffusers.StableDiffusion3ControlNetInpaintingPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L164</source><parameters>[{"name": "transformer", "val": ": SD3Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "text_encoder_3", "val": ": T5EncoderModel"}, {"name": "tokenizer_3", "val": ": T5TokenizerFast"}, {"name": "controlnet", "val": ": typing.Union[diffusers.models.controlnets.controlnet_sd3.SD3ControlNetModel, typing.List[diffusers.models.controlnets.controlnet_sd3.SD3ControlNetModel], typing.Tuple[diffusers.models.controlnets.controlnet_sd3.SD3ControlNetModel], diffusers.models.controlnets.controlnet_sd3.SD3MultiControlNetModel]"}, {"name": "image_encoder", "val": ": SiglipModel = None"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.siglip.image_processing_siglip.SiglipImageProcessor] = None"}]</parameters><paramsdesc>- **transformer** ([SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant,
  with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size`
  as its dimension.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **text_encoder_3** (`T5EncoderModel`) --
  Frozen text-encoder. Stable Diffusion 3 uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_3** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **controlnet** ([SD3ControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sd3#diffusers.SD3ControlNetModel) or `List[SD3ControlNetModel]` or `SD3MultiControlNetModel`) --
  Provides additional conditioning to the `transformer` during the denoising process. If you set multiple
  ControlNets as a list, the outputs from each ControlNet are added together to create one combined
  additional conditioning.
- **image_encoder** (`PreTrainedModel`, *optional*) --
  Pre-trained Vision Model for IP Adapter.
- **feature_extractor** (`BaseImageProcessor`, *optional*) --
  Image processor for IP Adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusion3ControlNetInpaintingPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L868</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "control_guidance_start", "val": ": typing.Union[float, typing.List[float]] = 0.0"}, {"name": "control_guidance_end", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "control_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "control_mask", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "controlnet_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "controlnet_pooled_projections", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2
  of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **control_guidance_start** (`float` or `List[float]`, *optional*, defaults to 0.0) --
  The percentage of total steps at which the ControlNet starts applying.
- **control_guidance_end** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The percentage of total steps at which the ControlNet stops applying.
- **control_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be inpainted (which parts of the image to
  be masked out with `control_mask` and repainted according to `prompt`). For both numpy arrays and
  pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a list of tensors,
  the expected shape should be `(B, C, H, W)`. If it is a numpy array or a list of arrays, the expected
  shape should be `(B, H, W, C)` or `(H, W, C)`.
- **control_mask** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `control_mask` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)` and
  for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`, or `(H, W)`.
- **controlnet_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
  to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
  the corresponding scale as a list.
- **controlnet_pooled_projections** (`torch.FloatTensor` of shape `(batch_size, projection_dim)`) --
  Embeddings projected from the embeddings of controlnet input conditions.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
  emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
  `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead
  of a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use with the `prompt`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusion3ControlNetInpaintingPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers.utils import load_image, check_min_version
>>> from diffusers.pipelines import StableDiffusion3ControlNetInpaintingPipeline
>>> from diffusers.models.controlnet_sd3 import SD3ControlNetModel

>>> controlnet = SD3ControlNetModel.from_pretrained(
...     "alimama-creative/SD3-Controlnet-Inpainting", use_safetensors=True, extra_conditioning_channels=1
... )
>>> pipe = StableDiffusion3ControlNetInpaintingPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers",
...     controlnet=controlnet,
...     torch_dtype=torch.float16,
... )
>>> pipe.text_encoder.to(torch.float16)
>>> pipe.controlnet.to(torch.float16)
>>> pipe.to("cuda")

>>> image = load_image(
...     "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png"
... )
>>> mask = load_image(
...     "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog_mask.png"
... )
>>> width = 1024
>>> height = 1024
>>> prompt = "A cat is sitting next to a puppy."
>>> generator = torch.Generator(device="cuda").manual_seed(24)
>>> res_image = pipe(
...     negative_prompt="deformed, distorted, disfigured, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, mutated hands and fingers, disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, NSFW",
...     prompt=prompt,
...     height=height,
...     width=width,
...     control_image=image,
...     control_mask=mask,
...     num_inference_steps=28,
...     generator=generator,
...     controlnet_conditioning_scale=0.95,
...     guidance_scale=7,
... ).images[0]
>>> res_image.save("sd3.png")
```

</ExampleCodeBlock>
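
The `callback_on_step_end` and `callback_on_step_end_tensor_inputs` arguments documented above can be combined to inspect intermediate tensors during denoising. The snippet below is a minimal sketch, not part of the original example: it reuses `pipe`, `prompt`, `image`, and `mask` from the example above and assumes the usual Diffusers callback contract of returning the (possibly modified) `callback_kwargs` dictionary.

```py
# Illustrative sketch: log the latents norm at every denoising step.
def log_latents(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latents norm = {latents.norm().item():.2f}")
    # Return the dict (unchanged here) so the pipeline can keep using the tensors.
    return callback_kwargs

res_image = pipe(
    prompt=prompt,
    control_image=image,
    control_mask=mask,
    num_inference_steps=28,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```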







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_image</name><anchor>diffusers.StableDiffusion3ControlNetInpaintingPipeline.encode_image</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L791</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "device", "val": ": device"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  Input image to be encoded.
- **device** (`torch.device`) --
  Torch device.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The encoded image feature representation.</retdesc></docstring>
Encodes the given image into a feature representation using a pre-trained image encoder.
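
As a rough usage sketch (assuming `pipe` is an already loaded `StableDiffusion3ControlNetInpaintingPipeline` with an image encoder available, and that the method returns a single feature tensor as stated above):

```py
import torch
from diffusers.utils import load_image

# Reuse the example image from the pipeline docs above.
image = load_image(
    "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png"
)
image_features = pipe.encode_image(image, device=torch.device("cuda"))
print(image_features.shape)
```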








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusion3ControlNetInpaintingPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L382</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt to be encoded.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`) --
  Torch device.
- **num_images_per_prompt** (`int`) --
  The number of images that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`) --
  Whether to use classifier-free guidance.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>
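
A minimal sketch of calling `encode_prompt` directly and reusing the results; it assumes the method returns the four tensors `(prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds)`, matching the parameters of the same names in `__call__` above, and that `pipe` is an already loaded pipeline on `"cuda"`.

```py
import torch

# Sketch only: precompute prompt embeddings once, then reuse them across calls.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="A cat is sitting next to a puppy.",
    prompt_2=None,
    prompt_3=None,
    negative_prompt="blurry, deformed",
    device=torch.device("cuda"),
)
# The tensors can then be passed to the pipeline via prompt_embeds=..., negative_prompt_embeds=...,
# pooled_prompt_embeds=..., and negative_pooled_prompt_embeds=... instead of raw text prompts.
```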





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_ip_adapter_image_embeds</name><anchor>diffusers.StableDiffusion3ControlNetInpaintingPipeline.prepare_ip_adapter_image_embeds</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py#L811</source><parameters>[{"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}]</parameters><paramsdesc>- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  The input image to extract features from for IP-Adapter.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Precomputed image embeddings.
- **device** (`torch.device`, *optional*) --
  Torch device.
- **num_images_per_prompt** (`int`, defaults to 1) --
  Number of images that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`, defaults to True) --
  Whether to use classifier-free guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>
Prepares image embeddings for use in the IP-Adapter.

Either `ip_adapter_image` or `ip_adapter_image_embeds` must be passed.
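
A hedged sketch of precomputing the IP-Adapter image embeddings once and feeding them back to `__call__` through `ip_adapter_image_embeds`. It assumes an IP-Adapter has already been loaded into `pipe` and that the method returns a single embedding tensor; the reference image simply reuses the example image from the docs above.

```py
import torch
from diffusers.utils import load_image

ip_image = load_image(
    "https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png"
)
image_embeds = pipe.prepare_ip_adapter_image_embeds(
    ip_adapter_image=ip_image,
    device=torch.device("cuda"),
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)
# The precomputed embeddings can then be reused across calls:
# pipe(..., ip_adapter_image_embeds=image_embeds)
```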




</div></div>

## StableDiffusion3PipelineOutput[[diffusers.pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or numpy array represent the denoised images of the diffusion pipeline.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_sd3.md" />

### Safe Stable Diffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_safe.md

# Safe Stable Diffusion

Safe Stable Diffusion was proposed in [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://huggingface.co/papers/2211.05105) and mitigates the inappropriate degeneration that Stable Diffusion models exhibit because they're trained on unfiltered, web-crawled datasets. For instance, Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content.

The abstract from the paper is:

*Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment.*

## Tips

Use the `safety_concept` property of [StableDiffusionPipelineSafe](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_safe#diffusers.StableDiffusionPipelineSafe) to check and edit the current safety concept:

```python
>>> from diffusers import StableDiffusionPipelineSafe

>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
>>> pipeline.safety_concept
'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty'
```
For each image generation, the active concept is also contained in `StableDiffusionSafePipelineOutput`.
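
Because the property is described above as editable, the active safety concept can also be overwritten before generation. A minimal sketch (the replacement concept string is purely illustrative):

```python
>>> pipeline.safety_concept = "an image showing violence, weapons, blood"  # illustrative concept string
>>> pipeline.safety_concept
'an image showing violence, weapons, blood'
```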

There are 4 configurations (`SafetyConfig.WEAK`, `SafetyConfig.MEDIUM`, `SafetyConfig.STRONG`, and `SafetyConfig.MAX`) that can be applied:

```python
>>> from diffusers import StableDiffusionPipelineSafe
>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"
>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX)
```

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

## StableDiffusionPipelineSafe[[diffusers.StableDiffusionPipelineSafe]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionPipelineSafe</name><anchor>diffusers.StableDiffusionPipelineSafe</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py#L32</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": SafeStableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection] = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionPipelineSafe.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py#L520</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "sld_guidance_scale", "val": ": typing.Optional[float] = 1000"}, {"name": "sld_warmup_steps", "val": ": typing.Optional[int] = 10"}, {"name": "sld_threshold", "val": ": typing.Optional[float] = 0.01"}, {"name": "sld_momentum_scale", "val": ": typing.Optional[float] = 0.3"}, {"name": "sld_mom_beta", "val": ": typing.Optional[float] = 0.4"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **sld_guidance_scale** (`float`, *optional*, defaults to 1000) --
  If `sld_guidance_scale < 1`, safety guidance is disabled.
- **sld_warmup_steps** (`int`, *optional*, defaults to 10) --
  Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than
  `sld_warmup_steps`.
- **sld_threshold** (`float`, *optional*, defaults to 0.01) --
  Threshold that separates the hyperplane between appropriate and inappropriate images.
- **sld_momentum_scale** (`float`, *optional*, defaults to 0.3) --
  Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0,
  momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than
  `sld_warmup_steps`.
- **sld_mom_beta** (`float`, *optional*, defaults to 0.4) --
  Defines how safety guidance momentum builds up. `sld_mom_beta` indicates how much of the previous
  momentum is kept. Momentum is built up during warmup for diffusion steps smaller than
  `sld_warmup_steps`.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.







<ExampleCodeBlock anchor="diffusers.StableDiffusionPipelineSafe.__call__.example">

Examples:

```py
import torch
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")
prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"
image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0]
```

</ExampleCodeBlock>
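
Beyond the `SafetyConfig` presets, the `sld_*` arguments documented above can also be passed explicitly. A minimal sketch reusing `pipeline` and `prompt` from the example above; the values are illustrative only, not recommended settings:

```py
# Illustrative sketch: tune safety guidance manually instead of using a preset.
image = pipeline(
    prompt=prompt,
    sld_guidance_scale=2000,
    sld_warmup_steps=7,
    sld_threshold=0.025,
    sld_momentum_scale=0.5,
    sld_mom_beta=0.7,
).images[0]
```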


</div></div>

## StableDiffusionSafePipelineOutput[[diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_safe/pipeline_output.py#L13</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}, {"name": "unsafe_images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray, NoneType]"}, {"name": "applied_safety_concept", "val": ": typing.Optional[str]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
  num_channels)`. PIL images or numpy array represent the denoised images of the diffusion pipeline.
- **nsfw_content_detected** (`List[bool]`) --
  List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work"
  (nsfw) content, or `None` if safety checking could not be performed.
- **unsafe_images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images that were flagged by the safety checker and may contain "not-safe-for-work"
  (nsfw) content, or `None` if no safety check was performed or no images were flagged.
- **applied_safety_concept** (`str`) --
  The safety concept that was applied for safety guidance, or `None` if safety guidance was disabled.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Safe Stable Diffusion pipelines.
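
A short sketch of inspecting the fields listed above after a call with `return_dict=True` (the default); it reuses `pipeline`, `prompt`, and `SafetyConfig` from the examples earlier in this page:

```py
out = pipeline(prompt=prompt, **SafetyConfig.MEDIUM)
print(out.applied_safety_concept)   # active safety concept, or None if safety guidance was disabled
print(out.nsfw_content_detected)    # per-image NSFW flags, or None if the check could not be performed
if out.unsafe_images:               # images withheld by the safety checker, if any were flagged
    out.unsafe_images[0].save("flagged.png")
out.images[0].save("safe.png")
```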





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput.__call__</anchor><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Call self as a function.

</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_safe.md" />

### Image variation
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/image_variation.md

# Image variation

The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by [Justin Pinkney](https://www.justinpinkney.com/) from [Lambda](https://lambdalabs.com/).

The original codebase can be found at [LambdaLabsML/lambda-diffusers](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) and additional official checkpoints for image variation can be found at [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers).

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](./overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

## StableDiffusionImageVariationPipeline[[diffusers.StableDiffusionImageVariationPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionImageVariationPipeline</name><anchor>diffusers.StableDiffusionImageVariationPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py#L44</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **image_encoder** ([CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection)) --
  Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline to generate image variations from an input image using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionImageVariationPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py#L259</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor]"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}]</parameters><paramsdesc>- **image** (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.Tensor`) --
  Image or images to guide image generation. If you provide a tensor, it needs to be compatible with
  [`CLIPImageProcessor`](https://huggingface.co/lambdalabs/sd-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json).
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.







<ExampleCodeBlock anchor="diffusers.StableDiffusionImageVariationPipeline.__call__.example">

Examples:

```py
from diffusers import StableDiffusionImageVariationPipeline
from PIL import Image
from io import BytesIO
import requests

pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", revision="v2.0"
)
pipe = pipe.to("cuda")

url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200"

response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")

out = pipe(image, num_images_per_prompt=3, guidance_scale=15)
out["images"][0].save("result.jpg")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionImageVariationPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionImageVariationPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionImageVariationPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionImageVariationPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionImageVariationPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionImageVariationPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/image_variation.md" />

### Super-resolution
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/upscale.md

# Super-resolution

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/). It is used to enhance the resolution of input images by a factor of 4.

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
>
> If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

## StableDiffusionUpscalePipeline[[diffusers.StableDiffusionUpscalePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionUpscalePipeline</name><anchor>diffusers.StableDiffusionUpscalePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py#L82</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "low_res_scheduler", "val": ": DDPMScheduler"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": typing.Optional[typing.Any] = None"}, {"name": "feature_extractor", "val": ": typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None"}, {"name": "watermarker", "val": ": typing.Optional[typing.Any] = None"}, {"name": "max_noise_level", "val": ": int = 350"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **low_res_scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of
  [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler).
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image super-resolution using Stable Diffusion 2.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
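
For example, LoRA weights can be loaded into the pipeline before upscaling. This is a minimal sketch only: `path/to/lora` is a placeholder, not a real checkpoint, and the model id matches the example further down this page.

```py
import torch
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
# Placeholder path: swap in an actual LoRA checkpoint trained for this pipeline.
pipeline.load_lora_weights("path/to/lora")
```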





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionUpscalePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py#L548</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 75"}, {"name": "guidance_scale", "val": ": float = 9.0"}, {"name": "noise_level", "val": ": int = 20"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image` or tensor representing an image batch to be upscaled.
- **num_inference_steps** (`int`, *optional*, defaults to 75) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 9.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that calls every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionUpscalePipeline.__call__.example">

Examples:
```py
>>> import requests
>>> from PIL import Image
>>> from io import BytesIO
>>> from diffusers import StableDiffusionUpscalePipeline
>>> import torch

>>> # load model and scheduler
>>> model_id = "stabilityai/stable-diffusion-x4-upscaler"
>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained(
...     model_id, variant="fp16", torch_dtype=torch.float16
... )
>>> pipeline = pipeline.to("cuda")

>>> # let's download an image
>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
>>> response = requests.get(url)
>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
>>> low_res_img = low_res_img.resize((128, 128))
>>> prompt = "a white cat"

>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
>>> upscaled_image.save("upsampled_cat.png")
```

</ExampleCodeBlock>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionUpscalePipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionUpscalePipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>
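If you need to save more memory, you can also pass an explicit `slice_size` instead of the default `"auto"`. The snippet below is a minimal sketch continuing from the example above, using the `"max"` and integer modes described in the parameter documentation:

```py
>>> # run only one slice at a time for maximum memory savings
>>> pipe.enable_attention_slicing("max")
>>> image = pipe(prompt).images[0]

>>> # or pass an integer; `attention_head_dim` must be a multiple of it
>>> pipe.enable_attention_slicing(2)
>>> image = pipe(prompt).images[0]
```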


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionUpscalePipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionUpscalePipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention
> takes precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionUpscalePipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionUpscalePipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionUpscalePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py#L221</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
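For example, a minimal sketch continuing the upscaling example above. It assumes `encode_prompt` returns a `(prompt_embeds, negative_prompt_embeds)` tuple that can be passed back to the pipeline call:

```py
>>> prompt_embeds, negative_prompt_embeds = pipeline.encode_prompt(
...     prompt="a white cat",
...     device=pipeline.device,
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="blurry, low quality",
... )
>>> upscaled_image = pipeline(
...     image=low_res_img,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
... ).images[0]
```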




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/upscale.md" />

### Inpainting
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/inpaint.md

# Inpainting

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

The Stable Diffusion model can also be applied to inpainting, which lets you edit specific parts of an image by providing a mask and a text prompt.

## Tips

It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such
as [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting). Default
text-to-image Stable Diffusion checkpoints, such as
[stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), are also compatible, but they might be less performant.

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
>
> If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

## StableDiffusionInpaintPipeline[[diffusers.StableDiffusionInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionInpaintPipeline</name><anchor>diffusers.StableDiffusionInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py#L128</source><parameters>[{"name": "vae", "val": ": typing.Union[diffusers.models.autoencoders.autoencoder_kl.AutoencoderKL, diffusers.models.autoencoders.autoencoder_asym_kl.AsymmetricAutoencoderKL]"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([`AutoencoderKL`, `AsymmetricAutoencoderKL`]) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image inpainting using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py#L880</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": Tensor = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 1.0"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be inpainted (the parts of the image to
  be masked out with `mask_image` and repainted according to `prompt`). For both numpy arrays and pytorch
  tensors, the expected value range is between `[0, 1]`. If it's a tensor or a list of tensors, the
  expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a list of arrays, the
  expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image latents as `image`, but
  if latents are passed directly they are not encoded again.
- **mask_image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask
  are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a
  single channel (luminance) before use. If it's a numpy array or pytorch tensor, it should contain one
  color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`, `(B,
  H, W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`, or
  `(H, W)`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of the margin in the crop applied to the image and mask. If `None`, no crop is applied to the
  image and `mask_image`. If `padding_mask_crop` is not `None`, it first finds a rectangular region with
  the same aspect ratio as the image that contains all of the masked area, and then expands that region by
  `padding_mask_crop`. The image and `mask_image` are then cropped to the expanded region and resized to
  the original image size for inpainting. This is useful when the masked area is small while the image is
  large and contains information irrelevant for inpainting, such as the background (see the sketch after
  the call example below).
- **strength** (`float`, *optional*, defaults to 1.0) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionInpaintPipeline.__call__.example">

Examples:

```py
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import StableDiffusionInpaintPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

>>> init_image = download_image(img_url).resize((512, 512))
>>> mask_image = download_image(mask_url).resize((512, 512))

>>> pipe = StableDiffusionInpaintPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-inpainting", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

</ExampleCodeBlock>
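When the masked region is small relative to the full image, `padding_mask_crop` restricts the work to the area around the mask, and a lower `strength` preserves more of the original content. The snippet below is a minimal sketch reusing `pipe`, `init_image`, and `mask_image` from the example above; the specific values are illustrative:

```py
>>> image = pipe(
...     prompt="Face of a yellow cat, high resolution, sitting on a park bench",
...     image=init_image,
...     mask_image=mask_image,
...     padding_mask_crop=32,  # crop to the masked region plus a 32-pixel margin before inpainting
...     strength=0.75,  # add less noise so more of the original image is preserved
... ).images[0]
```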






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionInpaintPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from
> PyTorch 2.0 or xFormers. These attention computations are already very memory efficient so you won't need
> to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious
> slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionInpaintPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionInpaintPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionInpaintPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention
> takes precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionInpaintPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionInpaintPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionInpaintPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function will use `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function will use `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionInpaintPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
locally:

<ExampleCodeBlock anchor="diffusers.StableDiffusionInpaintPipeline.load_textual_inversion.example-2">

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.StableDiffusionInpaintPipeline.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.

See [load_lora_into_text_encoder()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.
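As a minimal sketch of typical usage (the repository id, weight file name, and adapter name below are placeholders, not real artifacts):

```py
from diffusers import StableDiffusionInpaintPipeline
import torch

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# hypothetical Hub repository and weight file containing LoRA weights trained for this base model
pipe.load_lora_weights(
    "your-username/your-inpaint-lora",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="my_style",
)
```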




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.StableDiffusionInpaintPipeline.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `unet`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text
  encoder LoRA state dict because it comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training when you
  need to call this function on all processes. In this case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **unet_lora_adapter_metadata** --
  LoRA adapter metadata associated with the unet to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the UNet and text encoder.
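The sketch below follows the pattern used in the diffusers LoRA training scripts and is only illustrative: it attaches a randomly initialized LoRA adapter (in practice, the adapter you trained), extracts its state dict with `peft`, and saves it. The output directory name is a placeholder:

```py
import torch
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import convert_state_dict_to_diffusers

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-inpainting", torch_dtype=torch.float16
)

# attach a LoRA adapter to the UNet (randomly initialized here; normally this is what you train)
unet_lora_config = LoraConfig(r=4, lora_alpha=4, target_modules=["to_k", "to_q", "to_v", "to_out.0"])
pipe.unet.add_adapter(unet_lora_config)

# extract the LoRA layers and save them in the diffusers layout
unet_lora_state_dict = convert_state_dict_to_diffusers(get_peft_model_state_dict(pipe.unet))
pipe.save_lora_weights(
    save_directory="my_inpaint_lora",
    unet_lora_layers=unet_lora_state_dict,
    safe_serialization=True,
)
```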




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py#L312</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionInpaintPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py#L823</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
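A minimal usage sketch, assuming `pipe` is a loaded `StableDiffusionInpaintPipeline`; per the return description above, the output has shape `(len(w), embedding_dim)`:

```py
>>> import torch

>>> w = torch.tensor([5.0, 7.5])  # one guidance scale per embedding vector
>>> emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
>>> emb.shape
torch.Size([2, 256])
```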








</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/inpaint.md" />

### Stable Diffusion pipelines
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/overview.md

# Stable Diffusion pipelines

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.

Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs.

For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI [announcement](https://stability.ai/blog/stable-diffusion-announcement) and our own [blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work).

You can find the original codebase for Stable Diffusion v1.0 at [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) and Stable Diffusion v2.0 at [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion) as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations. Explore these organizations to find the best checkpoint for your use-case!

The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo:

<div class="flex justify-center">
    <div class="rounded-xl border border-gray-200">
    <table class="min-w-full divide-y-2 divide-gray-200 bg-white text-sm">
        <thead>
        <tr>
            <th class="px-4 py-2 font-medium text-gray-900 text-left">
            Pipeline
            </th>
            <th class="px-4 py-2 font-medium text-gray-900 text-left">
            Supported tasks
            </th>
            <th class="px-4 py-2 font-medium text-gray-900 text-left">
            🤗 Space
            </th>
        </tr>
        </thead>
        <tbody class="divide-y divide-gray-200">
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./text2img">StableDiffusion</a>
            </td>
            <td class="px-4 py-2 text-gray-700">text-to-image</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/stabilityai/stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./img2img">StableDiffusionImg2Img</a>
            </td>
            <td class="px-4 py-2 text-gray-700">image-to-image</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/huggingface/diffuse-the-rest"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./inpaint">StableDiffusionInpaint</a>
            </td>
            <td class="px-4 py-2 text-gray-700">inpainting</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./depth2img">StableDiffusionDepth2Img</a>
            </td>
            <td class="px-4 py-2 text-gray-700">depth-to-image</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/radames/stable-diffusion-depth2img"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./image_variation">StableDiffusionImageVariation</a>
            </td>
            <td class="px-4 py-2 text-gray-700">image variation</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./stable_diffusion_safe">StableDiffusionPipelineSafe</a>
            </td>
            <td class="px-4 py-2 text-gray-700">filtered text-to-image</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./stable_diffusion_2">StableDiffusion2</a>
            </td>
            <td class="px-4 py-2 text-gray-700">text-to-image, inpainting, depth-to-image, super-resolution</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/stabilityai/stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./stable_diffusion_xl">StableDiffusionXL</a>
            </td>
            <td class="px-4 py-2 text-gray-700">text-to-image, image-to-image</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/RamAnanth1/stable-diffusion-xl"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./latent_upscale">StableDiffusionLatentUpscale</a>
            </td>
            <td class="px-4 py-2 text-gray-700">super-resolution</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/huggingface-projects/stable-diffusion-latent-upscaler"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./upscale">StableDiffusionUpscale</a>
            </td>
            <td class="px-4 py-2 text-gray-700">super-resolution</td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./ldm3d_diffusion">StableDiffusionLDM3D</a>
            </td>
            <td class="px-4 py-2 text-gray-700">text-to-rgb, text-to-depth, text-to-pano</td>
            <td class="px-4 py-2"><a href="https://huggingface.co/spaces/r23/ldm3d-space"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a>
            </td>
        </tr>
        <tr>
            <td class="px-4 py-2 text-gray-700">
            <a href="./ldm3d_diffusion">StableDiffusionUpscaleLDM3D</a>
            </td>
            <td class="px-4 py-2 text-gray-700">ldm3d super-resolution</td>
        </tr>
        </tbody>
    </table>
    </div>
</div>

## Tips

To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines.

### Explore tradeoff between speed and quality

[StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) uses the [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler) by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the [EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) instead of the default:

```py
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

# or
euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
```

### Reuse pipeline components to save memory

To save memory and use the same components across multiple pipelines, use the `.components` attribute to avoid loading the weights into RAM more than once.

```py
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionInpaintPipeline,
)

text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
inpaint = StableDiffusionInpaintPipeline(**text2img.components)

# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
```

### Create web demos using `gradio`

The Stable Diffusion pipelines are automatically supported in [Gradio](https://github.com/gradio-app/gradio/), a library that makes creating beautiful and user-friendly machine learning apps on the web a breeze. First, make sure you have Gradio installed:

```sh
pip install -U gradio
```

Then, create a web demo around any Stable Diffusion-based pipeline. For example, you can create a demo for an image generation pipeline in a single line of code with Gradio's [`Interface.from_pipeline`](https://www.gradio.app/docs/interface#interface-from-pipeline) function:

```py
from diffusers import StableDiffusionPipeline
import gradio as gr

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

gr.Interface.from_pipeline(pipe).launch()
```

which opens an intuitive drag-and-drop interface in your browser:

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gradio-panda.png)

Similarly, you could create a demo for an image-to-image pipeline with:

```py
from diffusers import StableDiffusionImg2ImgPipeline
import gradio as gr


pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

gr.Interface.from_pipeline(pipe).launch()
```

By default, the web demo runs on a local server. If you'd like to share it with others, you can generate a temporary public link by setting `share=True` in `launch()`. Or, you can host your demo on [Hugging Face Spaces](https://huggingface.co/spaces) for a permanent link.
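
For example, a minimal sketch of the same text-to-image demo launched with a temporary public link:

```py
import gradio as gr
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# share=True asks Gradio to create a temporary public URL in addition to the local server
gr.Interface.from_pipeline(pipe).launch(share=True)
```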

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/overview.md" />

### Stable Diffusion 3
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_3.md

# Stable Diffusion 3

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

Stable Diffusion 3 (SD3) was proposed in [Scaling Rectified Flow Transformers for High-Resolution Image Synthesis](https://huggingface.co/papers/2403.03206) by Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Muller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach.

The abstract from the paper is:

*Diffusion models create data from noise by inverting the forward paths of data towards noise and have emerged as a powerful generative modeling technique for high-dimensional, perceptual data such as images and videos. Rectified flow is a recent generative model formulation that connects data and noise in a straight line. Despite its better theoretical properties and conceptual simplicity, it is not yet decisively established as standard practice. In this work, we improve existing noise sampling techniques for training rectified flow models by biasing them towards perceptually relevant scales. Through a large-scale study, we demonstrate the superior performance of this approach compared to established diffusion formulations for high-resolution text-to-image synthesis. Additionally, we present a novel transformer-based architecture for text-to-image generation that uses separate weights for the two modalities and enables a bidirectional flow of information between image and text tokens, improving text comprehension, typography, and human preference ratings. We demonstrate that this architecture follows predictable scaling trends and correlates lower validation loss to improved text-to-image synthesis as measured by various metrics and human evaluations.*


## Usage Example

_As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form, and accept the gate. Once you have been granted access, log in so that your system knows you've accepted the gate._

Use the command below to log in:

```bash
hf auth login
```

> [!TIP]
> The SD3 pipeline uses three text encoders to generate an image. Model offloading is necessary in order for it to run on most commodity hardware. Please use the `torch.float16` data type for additional memory savings.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```

**Note:** Stable Diffusion 3.5 can also be run using the SD3 pipeline, and all of the optimizations and techniques mentioned here apply to it as well, as shown in the example after the list below. In total there are three official models in the SD3 family:
- [`stabilityai/stable-diffusion-3-medium-diffusers`](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers)
- [`stabilityai/stable-diffusion-3.5-large`](https://huggingface.co/stabilityai/stable-diffusion-3-5-large)
- [`stabilityai/stable-diffusion-3.5-large-turbo`](https://huggingface.co/stabilityai/stable-diffusion-3-5-large-turbo)
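
Since the pipeline class is shared, switching to Stable Diffusion 3.5 only requires changing the checkpoint id. Here is a minimal sketch (the sampling settings simply mirror the SD3 Medium example above and should be tuned for your checkpoint):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Stable Diffusion 3.5 Large loads through the same pipeline class as SD3 Medium
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # the memory optimizations below apply here as well

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_5_hello_world.png")
```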

## Image Prompting with IP-Adapters

An IP-Adapter lets you prompt SD3 with images, in addition to the text prompt. This is especially useful for complex concepts that are difficult to articulate through text alone and for which you have reference images. To load and use an IP-Adapter, you need:

- `image_encoder`: Pre-trained vision model used to obtain image features, usually a CLIP image encoder.
- `feature_extractor`: Image processor that prepares the input image for the chosen `image_encoder`.
- `ip_adapter_id`: Checkpoint containing parameters of image cross attention layers and image projection. 

IP-Adapters are trained for a specific model architecture, so they also work with fine-tuned variants of the base model. You can use the `~SD3IPAdapterMixin.set_ip_adapter_scale` method to adjust how strongly the output aligns with the image prompt. The higher the value, the more closely the model follows the image prompt. A value of 0.5 is typically a good balance, ensuring the model considers both the text and image prompts equally.

```python
import torch
from PIL import Image

from diffusers import StableDiffusion3Pipeline
from transformers import SiglipVisionModel, SiglipImageProcessor

image_encoder_id = "google/siglip-so400m-patch14-384"
ip_adapter_id = "InstantX/SD3.5-Large-IP-Adapter"

feature_extractor = SiglipImageProcessor.from_pretrained(
    image_encoder_id,
    torch_dtype=torch.float16
)
image_encoder = SiglipVisionModel.from_pretrained(
    image_encoder_id,
    torch_dtype=torch.float16
).to( "cuda")

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.float16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
).to("cuda")

pipe.load_ip_adapter(ip_adapter_id)
pipe.set_ip_adapter_scale(0.6)

ref_img = Image.open("image.jpg").convert('RGB')

image = pipe(
    width=1024,
    height=1024,
    prompt="a cat",
    negative_prompt="lowres, low quality, worst quality",
    num_inference_steps=24,
    guidance_scale=5.0,
    ip_adapter_image=ref_img
).images[0]

image.save("result.jpg")
```

<div class="justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sd3_ip_adapter_example.png"/>
    <figcaption class="mt-2 text-sm text-center text-gray-500">IP-Adapter examples with prompt "a cat"</figcaption>
</div>


> [!TIP]
> Check out [IP-Adapter](../../../using-diffusers/ip_adapter) to learn more about how IP-Adapters work.


## Memory Optimizations for SD3

SD3 uses three text encoders, one of which is the very large T5-XXL model. This makes it challenging to run the model on GPUs with less than 24GB of VRAM, even when using `fp16` precision. The following section outlines a few memory optimizations in Diffusers that make it easier to run SD3 on low resource hardware.

### Running Inference with Model Offloading

The most basic memory optimization available in Diffusers lets you offload the components of the model to the CPU during inference to save memory, at the cost of a slight increase in inference latency. Model offloading only moves a model component onto the GPU when it needs to be executed, while keeping the remaining components on the CPU.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world.png")
```

### Dropping the T5 Text Encoder during Inference

Removing the memory-intensive 4.7B parameter T5-XXL text encoder during inference can significantly decrease the memory requirements for SD3 with only a slight loss in performance.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-no-T5.png")
```

### Using a Quantized Version of the T5 Text Encoder

We can leverage the `bitsandbytes` library to load and quantize the T5-XXL text encoder to 8-bit precision. This allows you to keep using all three text encoders while only slightly impacting performance.

First install the `bitsandbytes` library.

```shell
pip install bitsandbytes
```

Then load the T5-XXL model using the `BitsAndBytesConfig`.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"
text_encoder = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=quantization_config,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=text_encoder,
    device_map="balanced",
    torch_dtype=torch.float16
)

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    negative_prompt="",
    num_inference_steps=28,
    height=1024,
    width=1024,
    guidance_scale=7.0,
).images[0]

image.save("sd3_hello_world-8bit-T5.png")
```

You can find the end-to-end script [here](https://gist.github.com/sayakpaul/82acb5976509851f2db1a83456e504f1).

## Performance Optimizations for SD3

### Using Torch Compile to Speed Up Inference

Using compiled components in the SD3 pipeline can speed up inference by as much as 4X. The following code snippet demonstrates how to compile the Transformer and VAE components of the SD3 pipeline.

```python
import torch
from diffusers import StableDiffusion3Pipeline

torch.set_float32_matmul_precision("high")

torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16
).to("cuda")
pipe.set_progress_bar_config(disable=True)

pipe.transformer.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)

pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)

# Warm Up
prompt = "a photo of a cat holding a sign that says hello world"
for _ in range(3):
    _ = pipe(prompt=prompt, generator=torch.manual_seed(1))

# Run Inference
image = pipe(prompt=prompt, generator=torch.manual_seed(1)).images[0]
image.save("sd3_hello_world.png")
```

Check out the full script [here](https://gist.github.com/sayakpaul/508d89d7aad4f454900813da5d42ca97).

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have a varying impact on image quality depending on the model.

Refer to the [Quantization](../../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [StableDiffusion3Pipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline) for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SD3Transformer2DModel, StableDiffusion3Pipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="text_encoder_3",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("sd3.png")
```

## Using Long Prompts with the T5 Text Encoder

By default, the T5 Text Encoder uses a maximum sequence length of `256` for the prompt. This can be adjusted by setting `max_sequence_length` to accept fewer or more tokens. Keep in mind that longer sequences require additional resources and result in longer generation times, especially during batch inference.

```python
prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature’s body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree.  As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"

image = pipe(
    prompt=prompt,
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=4.5,
    max_sequence_length=512,
).images[0]
```

### Sending a different prompt to the T5 Text Encoder

You can send a different prompt to the CLIP Text Encoders and the T5 Text Encoder to prevent the prompt from being truncated by the CLIP Text Encoders and to improve generation.

> [!TIP]
> The prompt with the CLIP Text Encoders is still truncated to the 77 token limit.

```python
prompt = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. A river of warm, melted butter, pancake-like foliage in the background, a towering pepper mill standing in for a tree."

prompt_3 = "A whimsical and creative image depicting a hybrid creature that is a mix of a waffle and a hippopotamus, basking in a river of melted butter amidst a breakfast-themed landscape. It features the distinctive, bulky body shape of a hippo. However, instead of the usual grey skin, the creature’s body resembles a golden-brown, crispy waffle fresh off the griddle. The skin is textured with the familiar grid pattern of a waffle, each square filled with a glistening sheen of syrup. The environment combines the natural habitat of a hippo with elements of a breakfast table setting, a river of warm, melted butter, with oversized utensils or plates peeking out from the lush, pancake-like foliage in the background, a towering pepper mill standing in for a tree.  As the sun rises in this fantastical world, it casts a warm, buttery glow over the scene. The creature, content in its butter river, lets out a yawn. Nearby, a flock of birds take flight"

image = pipe(
    prompt=prompt,
    prompt_3=prompt_3,
    negative_prompt="",
    num_inference_steps=28,
    guidance_scale=4.5,
    max_sequence_length=512,
).images[0]
```

## Tiny AutoEncoder for Stable Diffusion 3

Tiny AutoEncoder for Stable Diffusion (TAESD3) is a tiny distilled version of Stable Diffusion 3's VAE by [Ollin Boer Bohan](https://github.com/madebyollin/taesd) that can decode [StableDiffusion3Pipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline) latents almost instantly.

To use with Stable Diffusion 3:

```python
import torch
from diffusers import StableDiffusion3Pipeline, AutoencoderTiny

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd3", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("cheesecake.png")
```

## Loading the original checkpoints via `from_single_file`

The `SD3Transformer2DModel` and `StableDiffusion3Pipeline` classes support loading the original checkpoints via the `from_single_file` method. This method lets you load the original single-file checkpoints released for these models.

## Loading the original checkpoints for the `SD3Transformer2DModel`

```python
from diffusers import SD3Transformer2DModel

model = SD3Transformer2DModel.from_single_file("https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium.safetensors")
```

## Loading the single checkpoint for the `StableDiffusion3Pipeline`

### Loading the single file checkpoint without T5

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors",
    torch_dtype=torch.float16,
    text_encoder_3=None
)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save('sd3-single-file.png')
```

### Loading the single file checkpoint with T5

> [!TIP]
> The following example loads a checkpoint stored in an 8-bit floating point format, which requires PyTorch 2.3 or later.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips_t5xxlfp8.safetensors",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save('sd3-single-file-t5-fp8.png')
```

### Loading the single file checkpoint for the Stable Diffusion 3.5 Transformer Model

```python
import torch
from diffusers import SD3Transformer2DModel, StableDiffusion3Pipeline

transformer = SD3Transformer2DModel.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo/blob/main/sd3.5_large.safetensors",
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
image = pipe("a cat holding a sign that says hello world").images[0]
image.save("sd35.png")
```

## StableDiffusion3Pipeline[[diffusers.StableDiffusion3Pipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusion3Pipeline</name><anchor>diffusers.StableDiffusion3Pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L148</source><parameters>[{"name": "transformer", "val": ": SD3Transformer2DModel"}, {"name": "scheduler", "val": ": FlowMatchEulerDiscreteScheduler"}, {"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "text_encoder_3", "val": ": T5EncoderModel"}, {"name": "tokenizer_3", "val": ": T5TokenizerFast"}, {"name": "image_encoder", "val": ": SiglipVisionModel = None"}, {"name": "feature_extractor", "val": ": SiglipImageProcessor = None"}]</parameters><paramsdesc>- **transformer** ([SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)) --
  Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- **scheduler** ([FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler)) --
  A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant,
  with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size`
  as its dimension.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **text_encoder_3** (`T5EncoderModel`) --
  Frozen text-encoder. Stable Diffusion 3 uses
  [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
  [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_3** (`T5TokenizerFast`) --
  Tokenizer of class
  [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
- **image_encoder** (`SiglipVisionModel`, *optional*) --
  Pre-trained Vision Model for IP Adapter.
- **feature_extractor** (`SiglipImageProcessor`, *optional*) --
  Image processor for IP Adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusion3Pipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L772</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 28"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "guidance_scale", "val": ": float = 7.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "skip_guidance_layers", "val": ": typing.List[int] = None"}, {"name": "skip_layer_guidance_scale", "val": ": float = 2.8"}, {"name": "skip_layer_guidance_stop", "val": ": float = 0.2"}, {"name": "skip_layer_guidance_start", "val": ": float = 0.01"}, {"name": "mu", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` will
  be used instead.
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` will
  be used instead.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
- **num_inference_steps** (`int`, *optional*, defaults to 28) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used instead
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used instead
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
  emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
  `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` instead of
  a plain tuple.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.
- **max_sequence_length** (`int`, defaults to 256) -- Maximum sequence length to use with the `prompt`.
- **skip_guidance_layers** (`List[int]`, *optional*) --
  A list of integers that specify layers to skip during guidance. If not provided, all layers will be
  used for guidance. If provided, the guidance will only be applied to the layers specified in the list.
  Recommended value by Stability AI for Stable Diffusion 3.5 Medium is [7, 8, 9].
- **skip_layer_guidance_scale** (`float`, *optional*) -- The scale of the guidance for the layers specified in
  `skip_guidance_layers`. The guidance will be applied to the layers specified in `skip_guidance_layers`
  with a scale of `skip_layer_guidance_scale`. The guidance will be applied to the rest of the layers
  with a scale of `1`.
- **skip_layer_guidance_stop** (`float`, *optional*) -- The step at which the guidance for the layers specified in
  `skip_guidance_layers` will stop. The guidance will be applied to the layers specified in
  `skip_guidance_layers` until the fraction specified in `skip_layer_guidance_stop`. Recommended value by
  Stability AI for Stable Diffusion 3.5 Medium is 0.2.
- **skip_layer_guidance_start** (`float`, *optional*) -- The step at which the guidance for the layers specified in
  `skip_guidance_layers` will start. The guidance will be applied to the layers specified in
  `skip_guidance_layers` from the fraction specified in `skip_layer_guidance_start`. Recommended value by
  Stability AI for Stable Diffusion 3.5 Medium is 0.01.
- **mu** (`float`, *optional*) -- `mu` value used for `dynamic_shifting`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusion3Pipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusion3Pipeline

>>> pipe = StableDiffusion3Pipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> image = pipe(prompt).images[0]
>>> image.save("sd3.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_image</name><anchor>diffusers.StableDiffusion3Pipeline.encode_image</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L696</source><parameters>[{"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]]"}, {"name": "device", "val": ": device"}]</parameters><paramsdesc>- **image** (`PipelineImageInput`) --
  Input image to be encoded.
- **device** (`torch.device`) --
  Torch device.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The encoded image feature representation.</retdesc></docstring>
Encodes the given image into a feature representation using a pre-trained image encoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusion3Pipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L344</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "prompt_3", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_3", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.FloatTensor] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "max_sequence_length", "val": ": int = 256"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in all text-encoders
- **prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
  used in all text-encoders
- **device** (`torch.device`, *optional*) --
  Torch device.
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
- **negative_prompt_3** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
  `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>
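
As a minimal sketch (not an official example), the encoder outputs can be precomputed and reused across calls. This assumes the pipeline `pipe` from the usage example above is already loaded on CUDA, and that `encode_prompt` returns the prompt, negative prompt, pooled, and negative pooled embeddings in that order:

```python
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a cat holding a sign that says hello world",
    prompt_2=None,  # falls back to `prompt` for the second CLIP encoder
    prompt_3=None,  # falls back to `prompt` for the T5 encoder
    negative_prompt="",
    device="cuda",
)

# Reuse the precomputed embeddings instead of passing raw text
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
```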





</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_ip_adapter_image_embeds</name><anchor>diffusers.StableDiffusion3Pipeline.prepare_ip_adapter_image_embeds</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L716</source><parameters>[{"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}]</parameters><paramsdesc>- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  The input image to extract features from for IP-Adapter.
- **ip_adapter_image_embeds** (`torch.Tensor`, *optional*) --
  Precomputed image embeddings.
- **device** (`torch.device`, *optional*) --
  Torch device.
- **num_images_per_prompt** (`int`, defaults to 1) --
  Number of images that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`, defaults to True) --
  Whether to use classifier free guidance or not.</paramsdesc><paramgroups>0</paramgroups></docstring>
Prepares image embeddings for use in the IP-Adapter.

Either `ip_adapter_image` or `ip_adapter_image_embeds` must be passed.
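
A minimal sketch, assuming the IP-Adapter pipeline from the image prompting example above and that the returned tensor can be passed back to the pipeline through `ip_adapter_image_embeds`:

```python
from PIL import Image

ref_img = Image.open("image.jpg").convert("RGB")

# Precompute the IP-Adapter image embeddings once...
image_embeds = pipe.prepare_ip_adapter_image_embeds(
    ip_adapter_image=ref_img,
    ip_adapter_image_embeds=None,
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# ...then reuse them across several calls instead of re-encoding the reference image each time
image = pipe(prompt="a cat", ip_adapter_image_embeds=image_embeds).images[0]
```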




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md" />

### Text-to-image
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/text2img.md

# Text-to-image

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

The Stable Diffusion model was created by researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [Runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) is capable of generating photorealistic images given any text input. It's trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.

The abstract from the paper is:

*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion.*

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
>
> If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

## StableDiffusionPipeline[[diffusers.StableDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionPipeline</name><anchor>diffusers.StableDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
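
For example, a minimal sketch of loading the pipeline and attaching LoRA weights with `load_lora_weights()` (the LoRA path below is a hypothetical placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "path/to/lora-weights" is a placeholder; point it at a Hub repo or local folder containing LoRA weights
pipe.load_lora_weights("path/to/lora-weights")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```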





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L778</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with a length equal to the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>
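
For reference, a short sketch of the three `slice_size` modes described above; the pipeline setup mirrors the example and the specific integer value (`2`) is only illustrative:

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

pipe.enable_attention_slicing("auto")  # default: compute attention in two steps
pipe.enable_attention_slicing("max")  # one slice at a time, maximum memory savings
pipe.enable_attention_slicing(2)  # attention_head_dim // 2 slices; slice_size must divide attention_head_dim
```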


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
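
For illustration, a minimal sketch of how VAE slicing might be combined with a larger batch; the prompt and batch size are arbitrary:

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the latents slice by slice instead of all at once.
pipe.enable_vae_slicing()

# A batch of four prompts now decodes without a large VAE memory spike.
images = pipe(["a photo of an astronaut riding a horse on mars"] * 4).images
```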


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.StableDiffusionPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2206</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
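
As a rough sketch, tiling can be enabled before requesting a resolution larger than the model's native size; the 1024x1024 value below is only illustrative:

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode (and encode) the latents tile by tile.
pipe.enable_vae_tiling()

# A larger image that would otherwise strain VAE memory.
image = pipe("a photo of an astronaut riding a horse on mars", height=1024, width=1024).images[0]
```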


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.StableDiffusionPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2220</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the pipeline's `text_encoder` is used.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the pipeline's `tokenizer` is used.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
locally:

<ExampleCodeBlock anchor="diffusers.StableDiffusionPipeline.load_textual_inversion.example-2">

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_single_file</name><anchor>diffusers.StableDiffusionPipeline.from_single_file</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file.py#L271</source><parameters>[{"name": "pretrained_model_link_or_path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_link_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:
  - A link to the `.ckpt` file (for example
    `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub.
  - A path to a *file* containing all pipeline weights.
- **torch_dtype** (`str` or `torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **original_config_file** (`str`, *optional*) --
  The path to the original config file that was used to train the model. If not provided, the config file
  will be inferred from the checkpoint file.
- **config** (`str`, *optional*) --
  Can be either:
  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing the pipeline
    component configs in Diffusers format.
- **disable_mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to disable mmap when loading a Safetensors model. This option can perform better when the model
  is on a network mount or hard drive.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipelines `__init__` method. See example
  below for more information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) from pretrained pipeline weights saved in the `.ckpt` or `.safetensors`
format. The pipeline is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.StableDiffusionPipeline.from_single_file.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )

>>> # Load pipeline from a local file
>>> # (assumes the checkpoint was downloaded to ./v1-5-pruned-emaonly.ckpt)
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")

>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
...     torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.StableDiffusionPipeline.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.

See [load_lora_into_text_encoder()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.
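
A minimal sketch of loading LoRA weights into the pipeline; the repository id and weight file name below are placeholders for a LoRA checkpoint trained for Stable Diffusion 1.5:

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Placeholder repository id and weight name; substitute a real SD 1.5 LoRA checkpoint.
pipe.load_lora_weights("your-username/your-sd15-lora", weight_name="pytorch_lora_weights.safetensors")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```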




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.StableDiffusionPipeline.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `unet`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text
  encoder LoRA state dict because it comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training and you
  need to call this function on all processes. In this case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **unet_lora_adapter_metadata** --
  LoRA adapter metadata associated with the unet to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the UNet and text encoder.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L332</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
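
A minimal sketch of pre-computing embeddings with `encode_prompt` and reusing them in the pipeline call; it assumes classifier-free guidance is enabled (the default `guidance_scale > 1`) and the negative prompt is only illustrative:

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode once, then reuse the embeddings across multiple calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a photo of an astronaut riding a horse on mars",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds).images[0]
```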




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L717</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Guidance scale values used to generate the embedding vectors that subsequently enrich the timestep
  embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.
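
For reference, a short sketch of accessing the output fields; when the safety checker is disabled, `nsfw_content_detected` may be `None`:

```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

output = pipe("a photo of an astronaut riding a horse on mars")
print(len(output.images))  # list of PIL images, one per generated image
print(output.nsfw_content_detected)  # list of bools, or None if safety checking was skipped
```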




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/text2img.md" />

### K-Diffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/k_diffusion.md

# K-Diffusion

[k-diffusion](https://github.com/crowsonkb/k-diffusion) is a popular library created by [Katherine Crowson](https://github.com/crowsonkb/). We provide `StableDiffusionKDiffusionPipeline` and `StableDiffusionXLKDiffusionPipeline` that allow you to run Stable Diffusion with samplers from k-diffusion.

Note that most of the samplers from k-diffusion are implemented in Diffusers, and we recommend using the existing schedulers instead. You can find a mapping between k-diffusion samplers and schedulers in Diffusers [here](https://huggingface.co/docs/diffusers/api/schedulers/overview).
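
A minimal sketch of selecting a k-diffusion sampler by name; it assumes the `k-diffusion` package is installed, and the sampler name and step count below are only illustrative:

```py
import torch
from diffusers import StableDiffusionKDiffusionPipeline

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pick a sampler from k-diffusion by its function name.
pipe.set_scheduler("sample_dpmpp_2m")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
```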


## StableDiffusionKDiffusionPipeline[[diffusers.StableDiffusionKDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionKDiffusionPipeline</name><anchor>diffusers.StableDiffusionKDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_k_diffusion.py#L66</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": typing.Union[transformers.models.clip.tokenization_clip.CLIPTokenizer, transformers.models.clip.tokenization_clip_fast.CLIPTokenizerFast]"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please, refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  details.
- **feature_extractor** (`CLIPImageProcessor`) --
  Model that extracts features from generated images to be used as inputs for the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights

> [!WARNING]
> This is an experimental pipeline and is likely to change in the future.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionKDiffusionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_k_diffusion.py#L202</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionXLKDiffusionPipeline[[diffusers.StableDiffusionXLKDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLKDiffusionPipeline</name><anchor>diffusers.StableDiffusionXLKDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L90</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLKDiffusionPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L206</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/k_diffusion.md" />

### Latent upscaler
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/latent_upscale.md

# Latent upscaler

The Stable Diffusion latent upscaler model was created by [Katherine Crowson](https://github.com/crowsonkb/k-diffusion) in collaboration with [Stability AI](https://stability.ai/). It is used to enhance the output image resolution by a factor of 2 (see this demo [notebook](https://colab.research.google.com/drive/1o1qYJcFeywzCIdkfKJy7cTpgZTCM2EI4) for a demonstration of the original implementation).

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
>
> If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

## StableDiffusionLatentUpscalePipeline[[diffusers.StableDiffusionLatentUpscalePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionLatentUpscalePipeline</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py#L84</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": EulerDiscreteScheduler"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A [EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) to be used in combination with `unet` to denoise the encoded image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py#L396</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "num_inference_steps", "val": ": int = 75"}, {"name": "guidance_scale", "val": ": float = 9.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`) --
  The prompt or prompts to guide image upscaling.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image` or tensor representing an image batch to be upscaled. If it's a tensor, it can be either a
  latent output from a Stable Diffusion model or an image tensor in the range `[-1, 1]`. It is considered
  a `latent` if `image.shape[1]` is `4`; otherwise, it is considered to be an image representation and
  encoded using this pipeline's `vae` encoder.
- **num_inference_steps** (`int`, *optional*, defaults to 75) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 9.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionLatentUpscalePipeline.__call__.example">

Examples:
```py
>>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline
>>> import torch


>>> pipeline = StableDiffusionPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
... )
>>> pipeline.to("cuda")

>>> model_id = "stabilityai/sd-x2-latent-upscaler"
>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
>>> upscaler.to("cuda")

>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic"
>>> generator = torch.manual_seed(33)

>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images

>>> with torch.no_grad():
...     image = pipeline.decode_latents(low_res_latents)
>>> image = pipeline.numpy_to_pil(image)[0]

>>> image.save("../images/a1.png")

>>> upscaled_image = upscaler(
...     prompt=prompt,
...     image=low_res_latents,
...     num_inference_steps=20,
...     guidance_scale=0,
...     generator=generator,
... ).images[0]

>>> upscaled_image.save("../images/a2.png")
```

</ExampleCodeBlock>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_sequential_cpu_offload</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.enable_sequential_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1266</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters><paramsdesc>- **gpu_id** (`int`, *optional*) --
  The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- **device** (`torch.Device` or `str`, *optional*, defaults to None) --
  The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will
  automatically detect the available accelerator and use it.

Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state
dicts of all `torch.nn.Module` components (except those in `self._exclude_from_cpu_offload`) are saved to CPU
and then moved to `torch.device('meta')` and loaded to the accelerator only when their specific submodule has its
`forward` method called. Offloading happens on a submodule basis. Memory savings are higher than with
`enable_model_cpu_offload`, but performance is lower.
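
A minimal sketch of enabling sequential offloading for the upscaler; note that the pipeline is not moved to the accelerator manually, since offloading handles device placement:

```py
import torch
from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
)

# Submodules are moved to the accelerator only while their forward pass runs.
upscaler.enable_sequential_cpu_offload()
```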




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionLatentUpscalePipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionLatentUpscalePipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionLatentUpscalePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py#L166</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `list(int)`) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`) --
  The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
  if `guidance_scale` is less than `1`).
- **prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.md" />

### Image-to-image
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/img2img.md

# Image-to-image

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images.

The [StableDiffusionImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/img2img#diffusers.StableDiffusionImg2ImgPipeline) uses the diffusion-denoising mechanism proposed in [SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations](https://huggingface.co/papers/2108.01073) by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon.

The abstract from the paper is:

*Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing.*

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

## StableDiffusionImg2ImgPipeline[[diffusers.StableDiffusionImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionImg2ImgPipeline</name><anchor>diffusers.StableDiffusionImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L182</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image-to-image generation using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
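
As a minimal sketch of the loaders above, the pipeline can additionally be conditioned on a reference image through an IP-Adapter (the `h94/IP-Adapter` checkpoint, subfolder, and weight name follow the common IP-Adapter example; adjust them to your setup):

```py
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load an IP-Adapter so generation is guided by a reference image in addition to the prompt
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
)

image = pipe(
    prompt="A fantasy landscape, trending on artstation",
    image=init_image,
    ip_adapter_image=init_image,  # reuse the init image as the style reference for brevity
    strength=0.75,
).images[0]
```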





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L858</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": int = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both
  numpy array and pytorch tensor, the expected value range is between `[0, 1]`. If it's a tensor or a list
  of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a
  list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image
  latents as `image`, but if latents are passed directly they are not encoded again.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from io import BytesIO

>>> from diffusers import StableDiffusionImg2ImgPipeline

>>> device = "cuda"
>>> model_id_or_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))

>>> prompt = "A fantasy landscape, trending on artstation"

>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
>>> images[0].save("fantasy_landscape.png")
```

</ExampleCodeBlock>
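
As a rough sketch of how the `strength` argument described above interacts with `num_inference_steps`: when `strength < 1` the pipeline skips the earliest (noisiest) timesteps, so only part of the schedule actually runs. This mirrors the typical `get_timesteps` logic and is shown for illustration only:

```py
def effective_steps(num_inference_steps: int, strength: float) -> int:
    # Only roughly `num_inference_steps * strength` denoising steps actually execute
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start


print(effective_steps(50, 0.75))  # 37
print(effective_steps(50, 1.0))  # 50 -- the input image is fully re-noised
```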







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!



<ExampleCodeBlock anchor="diffusers.StableDiffusionImg2ImgPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionImg2ImgPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function will use `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function will use `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionImg2ImgPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
locally:

<ExampleCodeBlock anchor="diffusers.StableDiffusionImg2ImgPipeline.load_textual_inversion.example-2">

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_single_file</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.from_single_file</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file.py#L271</source><parameters>[{"name": "pretrained_model_link_or_path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_link_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:
  - A link to the `.ckpt` file (for example
    `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub.
  - A path to a *file* containing all pipeline weights.
- **torch_dtype** (`str` or `torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **original_config_file** (`str`, *optional*) --
  The path to the original config file that was used to train the model. If not provided, the config file
  will be inferred from the checkpoint file.
- **config** (`str`, *optional*) --
  Can be either:
  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing the pipeline
    component configs in Diffusers format.
- **disable_mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to disable mmap when loading a Safetensors model. This option can perform better when the model
  is on a network mount or hard drive.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipeline's `__init__` method. See example
  below for more information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) from pretrained pipeline weights saved in the `.ckpt` or `.safetensors`
format. The pipeline is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.StableDiffusionImg2ImgPipeline.from_single_file.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )

>>> # Load pipeline from a local file
>>> # (assumes the checkpoint was downloaded to ./v1-5-pruned-emaonly.ckpt)
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")

>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
...     torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>

Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.

See [load_lora_into_text_encoder()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.
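
A minimal usage sketch (the repo id `my-user/my-sd15-lora` is a hypothetical placeholder for any SD 1.5-compatible LoRA checkpoint):

```py
import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "my-user/my-sd15-lora" is a hypothetical repo id; point this at your own LoRA weights
pipe.load_lora_weights("my-user/my-sd15-lora", adapter_name="style")
```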




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `unet`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text
  encoder LoRA state dict because it comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training when you
  need to call this function on all processes. In this case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **unet_lora_adapter_metadata** --
  LoRA adapter metadata associated with the unet to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the UNet and text encoder.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L358</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
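
A minimal sketch of precomputing embeddings with `encode_prompt` and reusing them in the pipeline call (assuming the method returns a `(prompt_embeds, negative_prompt_embeds)` tuple, as in recent releases):

```py
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode the prompt once; the embeddings can be reused across calls instead of re-encoding
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A fantasy landscape, trending on artstation",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=init_image,
    strength=0.75,
).images[0]
```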




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionImg2ImgPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L801</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
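
A minimal sketch of the sinusoidal embedding computed here, following the referenced VDM code (shown for illustration only; `w` is a batch of guidance scale values):

```py
import torch


def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512, dtype=torch.float32) -> torch.Tensor:
    # Scale the guidance values, then build a standard sinusoidal embedding over half the dimensions
    w = w * 1000.0
    half_dim = embedding_dim // 2
    emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
    emb = w.to(dtype)[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:
        emb = torch.nn.functional.pad(emb, (0, 1))  # zero-pad odd embedding dimensions
    return emb  # shape: (len(w), embedding_dim)


emb = guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=256)
```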








</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/img2img.md" />

### Stable Diffusion 2
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_2.md

# Stable Diffusion 2

Stable Diffusion 2 is a text-to-image _latent diffusion_ model built upon the work of the original [Stable Diffusion](https://stability.ai/blog/stable-diffusion-public-release), and it was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/).

*The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.
These models are trained on an aesthetic subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/) created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using [LAION’s NSFW filter](https://openreview.net/forum?id=M3Y74vmsMcY).*

For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official [announcement post](https://stability.ai/blog/stable-diffusion-v2-release).

The architecture of Stable Diffusion 2 is more or less identical to the original [Stable Diffusion model](./text2img), so check out its API documentation for how to use Stable Diffusion 2. We recommend using the [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps.

Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image:

| Task                    | Repository                                                                                                    |
|-------------------------|---------------------------------------------------------------------------------------------------------------|
| text-to-image (512x512) | [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base)             |
| text-to-image (768x768) | [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)                       |
| inpainting              | [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) |
| super-resolution        | [stabilityai/stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)  |
| depth-to-image          | [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth)           |

Here are some examples for how to use Stable Diffusion 2 for each task:

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
>
> If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

## Text-to-image

```py
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

repo_id = "stabilityai/stable-diffusion-2-base"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16")

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```

## Inpainting

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import load_image, make_image_grid

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

repo_id = "stabilityai/stable-diffusion-2-inpainting"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16")

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

## Super-resolution

```py
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image, make_image_grid
import torch

# load model and scheduler
model_id = "stabilityai/stable-diffusion-x4-upscaler"
pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

# let's download an image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
low_res_img = load_image(url)
low_res_img = low_res_img.resize((128, 128))
prompt = "a white cat"
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2)
```

## Depth-to-image

```py
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image, make_image_grid

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")


url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = load_image(url)
prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_2.md" />

### SDXL Turbo
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/sdxl_turbo.md

# SDXL Turbo

Stable Diffusion XL (SDXL) Turbo was proposed in [Adversarial Diffusion Distillation](https://stability.ai/research/adversarial-diffusion-distillation) by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach.

The abstract from the paper is:

*We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models.*

## Tips

- SDXL Turbo uses the exact same architecture as [SDXL](./stable_diffusion_xl), which means it also has the same API. Please refer to the [SDXL](./stable_diffusion_xl) API reference for more details.
- SDXL Turbo should disable classifier-free guidance by setting `guidance_scale=0.0`, as shown in the sketch below.
- SDXL Turbo should use `timestep_spacing='trailing'` for the scheduler and between 1 and 4 inference steps.
- SDXL Turbo has been trained to generate images of size 512x512.
- SDXL Turbo is open-access, but not open-source, meaning that you may have to buy a model license to use it for commercial applications. Make sure to read the [official model card](https://huggingface.co/stabilityai/sdxl-turbo) to learn more.
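
Putting these tips together, here is a minimal text-to-image sketch. It assumes the `stabilityai/sdxl-turbo` checkpoint and a CUDA device are available; see the SDXL Turbo guide linked in the tip below for more complete usage examples.

```py
import torch
from diffusers import AutoPipelineForText2Image

# load SDXL Turbo in half precision
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe"
# guidance is disabled (guidance_scale=0.0) and a single denoising step is used, per the tips above
image = pipe(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0]
image
```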

> [!TIP]
> To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the [SDXL Turbo](../../../using-diffusers/sdxl_turbo) guide.
>
> Check out the [Stability AI](https://huggingface.co/stabilityai) Hub organization for the official base and refiner model checkpoints!


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/sdxl_turbo.md" />

### Stable Video Diffusion
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/svd.md

# Stable Video Diffusion

Stable Video Diffusion was proposed in [Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets](https://hf.co/papers/2311.15127) by Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, Robin Rombach.

The abstract from the paper is:

*We present Stable Video Diffusion - a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D-prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at this https URL.*

> [!TIP]
> To learn how to use Stable Video Diffusion, take a look at the [Stable Video Diffusion](../../../using-diffusers/svd) guide.
>
> <br>
>
> Check out the [Stability AI](https://huggingface.co/stabilityai) Hub organization for the [base](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid) and [extended frame](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt) checkpoints!

## Tips

Video generation is memory-intensive, and one way to reduce memory usage is to call `enable_forward_chunking` on the pipeline's UNet so the entire feedforward layer doesn't run all at once. Breaking it up into chunks in a loop is more memory-efficient, as shown in the sketch below.
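
Here is a minimal sketch of enabling forward chunking, assuming the `stabilityai/stable-video-diffusion-img2vid-xt` checkpoint and an example conditioning image (any still image works):

```py
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
# offload sub-models to the CPU when they are not in use
pipe.enable_model_cpu_offload()
# run the UNet feedforward layers in chunks instead of all at once to save memory
pipe.unet.enable_forward_chunking()

# conditioning image (replace with your own still image)
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/svd/rocket.png"
)
frames = pipe(image, decode_chunk_size=2, generator=torch.manual_seed(42)).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```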

Check out the [Text or image-to-video](../../../using-diffusers/text-img2vid) guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage.

## StableVideoDiffusionPipeline[[diffusers.StableVideoDiffusionPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableVideoDiffusionPipeline</name><anchor>diffusers.StableVideoDiffusionPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py#L147</source><parameters>[{"name": "vae", "val": ": AutoencoderKLTemporalDecoder"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "unet", "val": ": UNetSpatioTemporalConditionModel"}, {"name": "scheduler", "val": ": EulerDiscreteScheduler"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}]</parameters><paramsdesc>- **vae** (`AutoencoderKLTemporalDecoder`) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **image_encoder** ([CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection)) --
  Frozen CLIP image-encoder
  ([laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K)).
- **unet** (`UNetSpatioTemporalConditionModel`) --
  A `UNetSpatioTemporalConditionModel` to denoise the encoded image latents.
- **scheduler** ([EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline to generate video from an input image using Stable Video Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).




</div>

## StableVideoDiffusionPipelineOutput[[diffusers.pipelines.stable_video_diffusion.StableVideoDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_video_diffusion.StableVideoDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_video_diffusion.StableVideoDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py#L134</source><parameters>[{"name": "frames", "val": ": typing.Union[typing.List[typing.List[PIL.Image.Image]], numpy.ndarray, torch.Tensor]"}]</parameters><paramsdesc>- **frames** (`List[List[PIL.Image.Image]]`, `np.ndarray`, or `torch.Tensor`) --
  List of denoised PIL images of length `batch_size` or numpy array or torch tensor of shape `(batch_size,
  num_frames, height, width, num_channels)`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Video Diffusion pipeline.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/svd.md" />

### GLIGEN (Grounded Language-to-Image Generation)
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/gligen.md

# GLIGEN (Grounded Language-to-Image Generation)

The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN). The [StableDiffusionGLIGENPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENPipeline) and [StableDiffusionGLIGENTextImagePipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENTextImagePipeline) can generate photorealistic images conditioned on grounding inputs. [StableDiffusionGLIGENPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENPipeline) is conditioned on text and bounding boxes. With [StableDiffusionGLIGENTextImagePipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENTextImagePipeline), if an input image is given, objects described by text are inserted into it at the regions defined by the bounding boxes; otherwise, an image described by the caption/prompt is generated and objects described by text are inserted at the regions defined by the bounding boxes. It's trained on the COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs.

The abstract from the [paper](https://huggingface.co/papers/2301.07093) is:

*Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.*

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently!
>
> If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organization!

[StableDiffusionGLIGENPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENPipeline) was contributed by [Nikhil Gajendrakumar](https://github.com/nikhil-masterful) and [StableDiffusionGLIGENTextImagePipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENTextImagePipeline) was contributed by [Nguyễn Công Tú Anh](https://github.com/tuanh123789).
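
The bounding boxes passed to these pipelines via `gligen_boxes` are normalized `[xmin, ymin, xmax, ymax]` coordinates in the `[0, 1]` range (see the parameter documentation below). As a small sketch, a hypothetical helper (not part of Diffusers) for converting pixel-space boxes might look like this:

```py
# Hypothetical helper (not part of Diffusers): convert pixel-space boxes to the
# normalized [xmin, ymin, xmax, ymax] format expected by `gligen_boxes`.
def normalize_boxes(pixel_boxes, image_width, image_height):
    normalized = []
    for xmin, ymin, xmax, ymax in pixel_boxes:
        normalized.append(
            [xmin / image_width, ymin / image_height, xmax / image_width, ymax / image_height]
        )
    return normalized

# e.g. a box from (256, 312) to (489, 459) in a 1024x640 image
print(normalize_boxes([(256, 312, 489, 459)], 1024, 640))
# [[0.25, 0.4875, 0.4775390625, 0.7171875]]
```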

## StableDiffusionGLIGENPipeline[[diffusers.StableDiffusionGLIGENPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionGLIGENPipeline</name><anchor>diffusers.StableDiffusionGLIGENPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L111</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionGLIGENPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L539</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "gligen_scheduled_sampling_beta", "val": ": float = 0.3"}, {"name": "gligen_phrases", "val": ": typing.List[str] = None"}, {"name": "gligen_boxes", "val": ": typing.List[typing.List[float]] = None"}, {"name": "gligen_inpaint_image", "val": ": typing.Optional[PIL.Image.Image] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **gligen_phrases** (`List[str]`) --
  The phrases to guide what to include in each of the regions defined by the corresponding
  `gligen_boxes`. There should only be one phrase per bounding box.
- **gligen_boxes** (`List[List[float]]`) --
  The bounding boxes that identify rectangular regions of the image that are going to be filled with the
  content described by the corresponding `gligen_phrases`. Each rectangular box is defined as a
  `List[float]` of 4 elements `[xmin, ymin, xmax, ymax]` where each value is between [0,1].
- **gligen_inpaint_image** (`PIL.Image.Image`, *optional*) --
  The input image, if provided, is inpainted with objects described by the `gligen_boxes` and
  `gligen_phrases`. Otherwise, it is treated as a generation task on a blank input image.
- **gligen_scheduled_sampling_beta** (`float`, defaults to 0.3) --
  Scheduled Sampling factor from [GLIGEN: Open-Set Grounded Text-to-Image
  Generation](https://huggingface.co/papers/2301.07093). Scheduled Sampling factor is only varied for
  scheduled sampling during inference for improved quality and controllability.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionGLIGENPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionGLIGENPipeline
>>> from diffusers.utils import load_image

>>> # Insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
...     "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> input_image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a birthday cake"
>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]]
>>> phrases = ["a birthday cake"]

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg")

>>> # Generate an image described by the prompt and
>>> # insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
...     "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage"
>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"]

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-1-4-generation-text-box.jpg")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionGLIGENPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionGLIGENPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.StableDiffusionGLIGENPipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2206</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.StableDiffusionGLIGENPipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2220</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.StableDiffusionGLIGENPipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1150</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters><paramsdesc>- **gpu_id** (`int`, *optional*) --
  The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- **device** (`torch.Device` or `str`, *optional*, defaults to None) --
  The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will
  automatically detect the available accelerator and use it.</paramsdesc><paramgroups>0</paramgroups></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its
`forward` method is called, and the model remains on the accelerator until the next model runs. Memory savings are
lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution
of the `unet`.




</div>
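
As a brief sketch of combining these memory-saving options (assuming the `masterful/gligen-1-4-generation-text-box` checkpoint used in the example above):

```py
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
)
# offload whole sub-models to the CPU and move each one to the accelerator only when it runs
pipe.enable_model_cpu_offload()
# decode latents in slices to reduce peak VAE memory
pipe.enable_vae_slicing()
```
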
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_latents</name><anchor>diffusers.StableDiffusionGLIGENPipeline.prepare_latents</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L490</source><parameters>[{"name": "batch_size", "val": ""}, {"name": "num_channels_latents", "val": ""}, {"name": "height", "val": ""}, {"name": "width", "val": ""}, {"name": "dtype", "val": ""}, {"name": "device", "val": ""}, {"name": "generator", "val": ""}, {"name": "latents", "val": " = None"}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_fuser</name><anchor>diffusers.StableDiffusionGLIGENPipeline.enable_fuser</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L512</source><parameters>[{"name": "enabled", "val": " = True"}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionGLIGENPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L220</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionGLIGENTextImagePipeline[[diffusers.StableDiffusionGLIGENTextImagePipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionGLIGENTextImagePipeline</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L163</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "processor", "val": ": CLIPProcessor"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection"}, {"name": "image_project", "val": ": CLIPImageProjection"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **processor** ([CLIPProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPProcessor)) --
  A `CLIPProcessor` to process the reference image.
- **image_encoder** ([CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection)) --
  Frozen image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **image_project** (`CLIPImageProjection`) --
  A `CLIPImageProjection` to project the image embedding into the phrase embedding space.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  more details about a model's potential harms.
- **feature_extractor** ([CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor)) --
  A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L714</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "gligen_scheduled_sampling_beta", "val": ": float = 0.3"}, {"name": "gligen_phrases", "val": ": typing.List[str] = None"}, {"name": "gligen_images", "val": ": typing.List[PIL.Image.Image] = None"}, {"name": "input_phrases_mask", "val": ": typing.Union[int, typing.List[int]] = None"}, {"name": "input_images_mask", "val": ": typing.Union[int, typing.List[int]] = None"}, {"name": "gligen_boxes", "val": ": typing.List[typing.List[float]] = None"}, {"name": "gligen_inpaint_image", "val": ": typing.Optional[PIL.Image.Image] = None"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "gligen_normalize_constant", "val": ": float = 28.7"}, {"name": "clip_skip", "val": ": int = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **gligen_phrases** (`List[str]`) --
  The phrases to guide what to include in each of the regions defined by the corresponding
  `gligen_boxes`. There should only be one phrase per bounding box.
- **gligen_images** (`List[PIL.Image.Image]`) --
  The images to guide what to include in each of the regions defined by the corresponding `gligen_boxes`.
  There should only be one image per bounding box.
- **input_phrases_mask** (`int` or `List[int]`) --
  Mask for the phrase inputs; set `0` for a placeholder phrase that should be ignored and `1` for a phrase
  that should be used.
- **input_images_mask** (`int` or `List[int]`) --
  Mask for the image inputs; set `0` for a placeholder image that should be ignored and `1` for an image
  that should be used.
- **gligen_boxes** (`List[List[float]]`) --
  The bounding boxes that identify rectangular regions of the image that are going to be filled with the
  content described by the corresponding `gligen_phrases`. Each rectangular box is defined as a
  `List[float]` of 4 elements `[xmin, ymin, xmax, ymax]` where each value is between [0,1].
- **gligen_inpaint_image** (`PIL.Image.Image`, *optional*) --
  The input image, if provided, is inpainted with objects described by the `gligen_boxes` and
  `gligen_phrases`. Otherwise, it is treated as a generation task on a blank input image.
- **gligen_scheduled_sampling_beta** (`float`, defaults to 0.3) --
  Scheduled Sampling factor from [GLIGEN: Open-Set Grounded Text-to-Image
  Generation](https://huggingface.co/papers/2301.07093). Scheduled Sampling factor is only varied for
  scheduled sampling during inference for improved quality and controllability.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that is called every `callback_steps` steps during inference. The function is called with the
  following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function is called. If not specified, the callback is called at
  every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **gligen_normalize_constant** (`float`, *optional*, defaults to 28.7) --
  The normalize value of the image embedding.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionGLIGENTextImagePipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionGLIGENTextImagePipeline
>>> from diffusers.utils import load_image

>>> # Insert objects described by image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> input_image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a backpack"
>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]]
>>> phrases = None
>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_images=[gligen_image],
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-inpainting-text-image-box.jpg")

>>> # Generate an image described by the prompt and
>>> # insert objects described by text and image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a flower sitting on the beach"
>>> boxes = [[0.0, 0.09, 0.53, 0.76]]
>>> phrases = ["flower"]
>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_images=[gligen_image],
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-generation-text-image-box.jpg")

>>> # Generate an image described by the prompt and
>>> # transfer style described by image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a dragon flying on the sky"
>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]]  # Set `[0.0, 1.0, 0.0, 1.0]` for the style

>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
... )

>>> gligen_placeholder = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=[
...         "dragon",
...         "placeholder",
...     ],  # Can use any text instead of `placeholder` token, because we will use mask here
...     gligen_images=[
...         gligen_placeholder,
...         gligen_image,
...     ],  # Can use any image in gligen_placeholder, because we will use mask here
...     input_phrases_mask=[1, 0],  # Set 0 for the placeholder token
...     input_images_mask=[0, 1],  # Set 0 for the placeholder image
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_tiling</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.enable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2206</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_tiling</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.disable_vae_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2220</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_model_cpu_offload</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.enable_model_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1150</source><parameters>[{"name": "gpu_id", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[torch.device, str] = None"}]</parameters><paramsdesc>- **gpu_id** (`int`, *optional*) --
  The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- **device** (`torch.Device` or `str`, *optional*, defaults to None) --
  The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will
  automatically detect the available accelerator and use it.</paramsdesc><paramgroups>0</paramgroups></docstring>

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its
`forward` method is called, and the model remains on the accelerator until the next model runs. Memory savings are
lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution
of the `unet`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_latents</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.prepare_latents</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L528</source><parameters>[{"name": "batch_size", "val": ""}, {"name": "num_channels_latents", "val": ""}, {"name": "height", "val": ""}, {"name": "width", "val": ""}, {"name": "dtype", "val": ""}, {"name": "device", "val": ""}, {"name": "generator", "val": ""}, {"name": "latents", "val": " = None"}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_fuser</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.enable_fuser</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L550</source><parameters>[{"name": "enabled", "val": " = True"}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>complete_mask</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.complete_mask</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L587</source><parameters>[{"name": "has_mask", "val": ""}, {"name": "max_objs", "val": ""}, {"name": "device", "val": ""}]</parameters></docstring>

Based on the input mask value (`0` or `1`) for each phrase and image, mask the features corresponding to the
phrases and images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>crop</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.crop</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L567</source><parameters>[{"name": "im", "val": ""}, {"name": "new_width", "val": ""}, {"name": "new_height", "val": ""}]</parameters></docstring>

Crop the input image to the specified dimensions.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>draw_inpaint_mask_from_boxes</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.draw_inpaint_mask_from_boxes</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L555</source><parameters>[{"name": "boxes", "val": ""}, {"name": "size", "val": ""}]</parameters></docstring>

Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided
boxes to mark regions that need to be inpainted.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L251</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_clip_feature</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.get_clip_feature</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L603</source><parameters>[{"name": "input", "val": ""}, {"name": "normalize_constant", "val": ""}, {"name": "device", "val": ""}, {"name": "is_image", "val": " = False"}]</parameters></docstring>

Get the image and phrase embeddings using a pretrained CLIP model. The image embedding is projected into the
phrase embedding space.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_cross_attention_kwargs_with_grounded</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.get_cross_attention_kwargs_with_grounded</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L627</source><parameters>[{"name": "hidden_size", "val": ""}, {"name": "gligen_phrases", "val": ""}, {"name": "gligen_images", "val": ""}, {"name": "gligen_boxes", "val": ""}, {"name": "input_phrases_mask", "val": ""}, {"name": "input_images_mask", "val": ""}, {"name": "repeat_batch", "val": ""}, {"name": "normalize_constant", "val": ""}, {"name": "max_objs", "val": ""}, {"name": "device", "val": ""}]</parameters></docstring>

Prepare the cross-attention kwargs containing information about the grounded input (boxes, masks, image
embeddings, phrase embeddings).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_cross_attention_kwargs_without_grounded</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.get_cross_attention_kwargs_without_grounded</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L691</source><parameters>[{"name": "hidden_size", "val": ""}, {"name": "repeat_batch", "val": ""}, {"name": "max_objs", "val": ""}, {"name": "device", "val": ""}]</parameters></docstring>

Prepare the cross-attention kwargs without information about the grounded input; the boxes, masks, image
embeddings, and phrase embeddings are all zero tensors.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>target_size_center_crop</name><anchor>diffusers.StableDiffusionGLIGENTextImagePipeline.target_size_center_crop</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L578</source><parameters>[{"name": "im", "val": ""}, {"name": "new_hw", "val": ""}]</parameters></docstring>

Crop and resize the image to the target size while keeping the center.


</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/gligen.md" />

### T2I-Adapter
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/adapter.md

# T2I-Adapter

[T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.08453) by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie.

Using the pretrained models, we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the control image and fills in the details.
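
As a rough sketch of this workflow, assuming a depth T2I-Adapter checkpoint such as `TencentARC/t2iadapter_depth_sd15v2` and a local depth map (both are assumptions, not part of this page):

```py
>>> import torch
>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
>>> from diffusers.utils import load_image

>>> depth_image = load_image("path/to/depth_map.png")  # placeholder path (assumption)
>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16)
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
... ).to("cuda")
>>> image = pipe("a modern living room", image=depth_image).images[0]
```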

The abstract of the paper is the following:

*The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to "dig out" the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications.*

This model was contributed by the community contributor [HimariO](https://github.com/HimariO) ❤️.

## StableDiffusionAdapterPipeline[[diffusers.StableDiffusionAdapterPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionAdapterPipeline</name><anchor>diffusers.StableDiffusionAdapterPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py#L191</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "adapter", "val": ": typing.Union[diffusers.models.adapter.T2IAdapter, diffusers.models.adapter.MultiAdapter, typing.List[diffusers.models.adapter.T2IAdapter]]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters><paramsdesc>- **adapter** (`T2IAdapter` or `MultiAdapter` or `List[T2IAdapter]`) --
  Provides additional conditioning to the unet during the denoising process. If you set multiple Adapters as a
  list, the outputs from each Adapter are added together to create one combined additional conditioning.
- **adapter_weights** (`List[float]`, *optional*, defaults to None) --
  List of floats representing the weight by which each adapter's output will be multiplied before the outputs
  are added together.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please, refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  details.
- **feature_extractor** (`CLIPImageProcessor`) --
  Model that extracts features from generated images to be used as inputs for the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion augmented with [T2I-Adapter](https://huggingface.co/papers/2302.08453).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
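
Since the outputs of multiple adapters are summed, a minimal sketch of combining two adapters through `MultiAdapter` could look like the following (the keypose/depth checkpoint names and the image paths are illustrative assumptions):

```py
>>> import torch
>>> from diffusers import MultiAdapter, StableDiffusionAdapterPipeline, T2IAdapter
>>> from diffusers.utils import load_image

>>> keypose_image = load_image("path/to/keypose.png")  # placeholder path (assumption)
>>> depth_image = load_image("path/to/depth.png")  # placeholder path (assumption)

>>> # Checkpoint names are assumptions; substitute any compatible SD 1.x adapters.
>>> adapters = MultiAdapter(
...     [
...         T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1", torch_dtype=torch.float16),
...         T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1", torch_dtype=torch.float16),
...     ]
... )
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", adapter=adapters, torch_dtype=torch.float16
... ).to("cuda")

>>> # One control image and one conditioning scale per adapter.
>>> image = pipe(
...     "a person standing in a living room",
...     image=[keypose_image, depth_image],
...     adapter_conditioning_scale=[0.8, 0.8],
... ).images[0]
```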





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionAdapterPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py#L689</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[torch.Tensor, PIL.Image.Image, typing.List[PIL.Image.Image]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "adapter_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[List[PIL.Image.Image]]`) --
  The Adapter input condition. Adapter uses this input condition to generate guidance for the UNet. If the
  type is specified as `torch.Tensor`, it is passed to Adapter as is. `PIL.Image.Image` can also be
  accepted as an image. The control image is automatically resized to fit the output image.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput` instead
  of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttnProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **adapter_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the adapter are multiplied by `adapter_conditioning_scale` before they are added to the
  residual in the original unet. If multiple adapters are specified in init, you can set the
  corresponding scale as a list.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images, and the second
element is a list of `bool`s denoting whether the corresponding generated image likely represents
"not-safe-for-work" (nsfw) content, according to the `safety_checker`.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionAdapterPipeline.__call__.example">

Examples:
```py
>>> from PIL import Image
>>> from diffusers.utils import load_image
>>> import torch
>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

>>> image = load_image(
...     "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png"
... )

>>> color_palette = image.resize((8, 8))
>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST)

>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16)
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4",
...     adapter=adapter,
...     torch_dtype=torch.float16,
... )

>>> pipe.to("cuda")

>>> out_image = pipe(
...     "At night, glowing cubes in front of the beach",
...     image=color_palette,
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionAdapterPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!



<ExampleCodeBlock anchor="diffusers.StableDiffusionAdapterPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionAdapterPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionAdapterPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
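
A minimal sketch of toggling VAE slicing (the model id mirrors the attention-slicing example above; the batch contents are illustrative):

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")
>>> pipe.enable_vae_slicing()  # decode latents slice by slice to lower peak memory
>>> images = pipe(["a photo of an astronaut riding a horse on mars"] * 4).images  # larger batches fit more easily
>>> pipe.disable_vae_slicing()  # return to single-pass decoding
```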


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionAdapterPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionAdapterPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionAdapterPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for Flash Attention not accepting the VAE's attention shape
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionAdapterPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionAdapterPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py#L311</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
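
A minimal sketch of pre-computing the embeddings and reusing them in `__call__`; the two-tensor return value mirrors the standard Stable Diffusion pipelines and is an assumption here:

```py
>>> import torch
>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
>>> from diffusers.utils import load_image

>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16)
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
... ).to("cuda")
>>> control = load_image(
...     "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png"
... )

>>> # Encode once, then reuse the embeddings across calls.
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     "At night, glowing cubes in front of the beach",
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="low quality",
... )
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     image=control,
... ).images[0]
```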




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionAdapterPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py#L648</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Guidance scale values used to generate embedding vectors that subsequently enrich the timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
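
A small shape-check sketch of the returned embedding (the checkpoint names mirror the examples above and are used here only so the snippet is self-contained):

```py
>>> import torch
>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16)
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
... )
>>> w = torch.tensor([1.0, 5.0, 7.5])  # one guidance scale per sample in the batch
>>> emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
>>> emb.shape  # (len(w), embedding_dim)
torch.Size([3, 256])
```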








</div></div>

## StableDiffusionXLAdapterPipeline[[diffusers.StableDiffusionXLAdapterPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLAdapterPipeline</name><anchor>diffusers.StableDiffusionXLAdapterPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L216</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "adapter", "val": ": typing.Union[diffusers.models.adapter.T2IAdapter, diffusers.models.adapter.MultiAdapter, typing.List[diffusers.models.adapter.T2IAdapter]]"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}]</parameters><paramsdesc>- **adapter** (`T2IAdapter` or `MultiAdapter` or `List[T2IAdapter]`) --
  Provides additional conditioning to the unet during the denoising process. If you set multiple Adapters as a
  list, the outputs from each Adapter are added together to create one combined additional conditioning.
- **adapter_weights** (`List[float]`, *optional*, defaults to None) --
  List of floats representing the weight by which each adapter's output will be multiplied before the outputs
  are added together.
- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **safety_checker** (`StableDiffusionSafetyChecker`) --
  Classification module that estimates whether generated images could be considered offensive or harmful.
  Please, refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for
  details.
- **feature_extractor** (`CLIPImageProcessor`) --
  Model that extracts features from generated images to be used as inputs for the `safety_checker`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion augmented with [T2I-Adapter](https://huggingface.co/papers/2302.08453).

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods (see the sketch after this list):
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
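
A brief sketch of these loaders on the adapter pipeline; the adapter setup mirrors the `__call__` example below, and the LoRA repository id is a placeholder assumption:

```py
>>> import torch
>>> from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

>>> adapter = T2IAdapter.from_pretrained(
...     "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
... )
>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
... ).to("cuda")

>>> # The repository id below is a placeholder; point it at any SDXL-compatible LoRA.
>>> pipe.load_lora_weights("path/or/repo-id-of-sdxl-lora")
>>> pipe.unload_lora_weights()
```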





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L868</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "callback", "val": ": typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None"}, {"name": "callback_steps", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "adapter_conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "adapter_conditioning_factor", "val": ": float = 1.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor`, `PIL.Image.Image`, `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[List[PIL.Image.Image]]`) --
  The Adapter input condition. Adapter uses this input condition to generate guidance for the UNet. If the
  type is specified as `torch.Tensor`, it is passed to Adapter as is. `PIL.Image.Image` can also be
  accepted as an image. The control image is automatically resized to fit the output image.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
  the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput`
  instead of a plain tuple.
- **callback** (`Callable`, *optional*) --
  A function that will be called every `callback_steps` steps during inference. The function will be
  called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) --
  The frequency at which the `callback` function will be called. If not specified, the callback will be
  called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). `guidance_rescale` is defined as `φ` in equation 16 of
  the paper. Guidance rescale factor should fix overexposure when using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **adapter_conditioning_scale** (`float` or `List[float]`, *optional*, defaults to 1.0) --
  The outputs of the adapter are multiplied by `adapter_conditioning_scale` before they are added to the
  residual in the original unet. If multiple adapters are specified in init, you can set the
  corresponding scale as a list.
- **adapter_conditioning_factor** (`float`, *optional*, defaults to 1.0) --
  The fraction of timesteps for which adapter should be applied. If `adapter_conditioning_factor` is
  `0.0`, adapter is not applied at all. If `adapter_conditioning_factor` is `1.0`, adapter is applied for
  all timesteps. If `adapter_conditioning_factor` is `0.5`, adapter is applied for half of the timesteps.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLAdapterPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler
>>> from diffusers.utils import load_image

>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L")

>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0"

>>> adapter = T2IAdapter.from_pretrained(
...     "Adapter/t2iadapter",
...     subfolder="sketch_sdxl_1.0",
...     torch_dtype=torch.float16,
...     adapter_type="full_adapter_xl",
... )
>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
...     model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler
... ).to("cuda")

>>> generator = torch.manual_seed(42)
>>> sketch_image_out = pipe(
...     prompt="a photo of a dog in real world, high quality",
...     negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
...     image=sketch_image,
...     generator=generator,
...     guidance_scale=7.5,
... ).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLAdapterPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_vae_slicing</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.enable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2180</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
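
A minimal usage sketch, mirroring the attention slicing example above (the checkpoint, prompt, and batch size are illustrative):

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
... ).to("cuda")

>>> pipe.enable_vae_slicing()
>>> # with slicing enabled, the VAE decodes this batch of 4 latents one image at a time
>>> images = pipe(["a photo of an astronaut riding a horse on mars"] * 4).images
```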


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_vae_slicing</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.disable_vae_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2193</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLAdapterPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L314</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
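
For example, the prompt can be encoded once and the resulting embeddings reused across several calls. The sketch below reuses `pipe` and `sketch_image` from the `__call__` example at the top of this page; for SDXL pipelines the method returns the prompt, negative prompt, pooled, and negative pooled embeddings in that order:

```py
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="a photo of a dog in real world, high quality",
...     negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
... )

>>> # pass the precomputed embeddings instead of raw text
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     image=sketch_image,
... ).images[0]
```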




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLAdapterPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L827</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
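
A rough sketch of what this helper produces, assuming `pipe` is a loaded `StableDiffusionXLAdapterPipeline` (the guidance scale values are arbitrary; the embedding is only consumed internally when the UNet was trained with a `time_cond_proj_dim`, for example guidance-distilled models):

```py
>>> import torch

>>> w = torch.tensor([7.5, 5.0])  # one guidance scale value per prompt in the batch
>>> emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
>>> emb.shape  # torch.Size([2, 256])
```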








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/adapter.md" />

### Text-to-(RGB, depth)
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/ldm3d_diffusion.md

# Text-to-(RGB, depth)

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

LDM3D was proposed in [LDM3D: Latent Diffusion Model for 3D](https://huggingface.co/papers/2305.10853) by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. Unlike existing text-to-image diffusion models such as [Stable Diffusion](./overview), which only generate an image, LDM3D generates both an image and a depth map from a given text prompt. With almost the same number of parameters, LDM3D learns a latent space that can compress both the RGB images and the depth maps.

Two checkpoints are available for use:
- [ldm3d-original](https://huggingface.co/Intel/ldm3d). The original checkpoint used in the [paper](https://huggingface.co/papers/2305.10853)
- [ldm3d-4c](https://huggingface.co/Intel/ldm3d-4c). The new version of LDM3D, using 4-channel inputs instead of 6-channel inputs and finetuned on higher-resolution images.


The abstract from the paper is:

*This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at [this url](https://t.ly/tdi2).*

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

## StableDiffusionLDM3DPipeline[[diffusers.StableDiffusionLDM3DPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionLDM3DPipeline</name><anchor>diffusers.StableDiffusionLDM3DPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_ldm3d/pipeline_stable_diffusion_ldm3d.py#L180</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "safety_checker", "val": ": StableDiffusionSafetyChecker"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor"}, {"name": "image_encoder", "val": ": typing.Optional[transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection]"}, {"name": "requires_safety_checker", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionLDM3DPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_ldm3d/pipeline_stable_diffusion_ldm3d.py#L747</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 49"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 49) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number
  of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionLDM3DPipeline.__call__.example">

Examples:
```python
>>> from diffusers import StableDiffusionLDM3DPipeline

>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> output = pipe(prompt)
>>> rgb_image, depth_image = output.rgb, output.depth
>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg")
>>> depth_image[0].save("astronaut_ldm3d_depth.png")
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionLDM3DPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_ldm3d/pipeline_stable_diffusion_ldm3d.py#L307</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
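
A minimal sketch of precomputing embeddings with this method and reusing them, assuming `pipe` is the `StableDiffusionLDM3DPipeline` loaded in the `__call__` example above (the prompts are illustrative):

```py
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     "a photo of an astronaut riding a horse on mars",
...     device=pipe.device,
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="low quality, blurry",
... )

>>> # raw text can be replaced by the precomputed embeddings
>>> output = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds)
>>> rgb_image, depth_image = output.rgb, output.depth
```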




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionLDM3DPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_ldm3d/pipeline_stable_diffusion_ldm3d.py#L686</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

## LDM3DPipelineOutput[[diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_ldm3d/pipeline_stable_diffusion_ldm3d.py#L159</source><parameters>[{"name": "rgb", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "depth", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **rgb** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **depth** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL depth maps of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion LDM3D pipelines.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput.__call__</anchor><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Call self as a function.

</div></div>

# Upscaler

[LDM3D-VR](https://huggingface.co/papers/2311.03226) is an extended version of LDM3D.

The abstract from the paper is:
*Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods.*

Two checkpoints are available for use:
- [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano). This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used.
- [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr). This checkpoint enables the upscaling of RGB and depth images. It can be used in a cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline community pipeline.
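
A minimal sketch of loading the panoramic checkpoint with the StableDiffusionLDM3DPipeline documented above (the prompt and output resolution are illustrative):

```py
>>> import torch
>>> from diffusers import StableDiffusionLDM3DPipeline

>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-pano", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> prompt = "360 view of a large bedroom"
>>> output = pipe(prompt, width=1024, height=512)
>>> output.rgb[0].save("bedroom_pano_rgb.jpg")
>>> output.depth[0].save("bedroom_pano_depth.png")
```

The ldm3d-sr checkpoint is used through the community StableDiffusionUpscaleLDM3DPipeline; refer to the community pipelines documentation for how to load custom pipelines.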



<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md" />

### Depth-to-image
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/depth2img.md

# Depth-to-image

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
</div>

The Stable Diffusion model can also infer depth based on an image using [MiDaS](https://github.com/isl-org/MiDaS). This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a `depth_map` to preserve the image structure.

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
>
> If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

## StableDiffusionDepth2ImgPipeline[[diffusers.StableDiffusionDepth2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionDepth2ImgPipeline</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py#L92</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "depth_estimator", "val": ": DPTForDepthEstimation"}, {"name": "feature_extractor", "val": ": DPTImageProcessor"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel)) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer)) --
  A `CLIPTokenizer` to tokenize text.
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) --
  A `UNet2DConditionModel` to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for saving LoRA weights





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py#L634</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "depth_map", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "strength", "val": ": float = 0.8"}, {"name": "num_inference_steps", "val": ": typing.Optional[int] = 50"}, {"name": "guidance_scale", "val": ": typing.Optional[float] = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": typing.Optional[float] = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **image** (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) --
  `Image` or tensor representing an image batch to be used as the starting point. Can accept image
  latents as `image` only if `depth_map` is not `None`.
- **depth_map** (`torch.Tensor`, *optional*) --
  Depth prediction to be used as additional conditioning for the image generation process. If not
  defined, it automatically predicts the depth with `self.depth_estimator`.
- **strength** (`float`, *optional*, defaults to 0.8) --
  Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a
  starting point and more noise is added the higher the `strength`. The number of denoising steps depends
  on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising
  process runs for the full number of iterations specified in `num_inference_steps`. A value of 1
  essentially ignores `image`.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference. This parameter is modulated by `strength`.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only
  applies to the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
  generation deterministic.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
  provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
  not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined in
  [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, *optional*) --
  A function that is called at the end of each denoising step during inference. The function is called
  with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
  callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
  `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>[StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`</rettype><retdesc>If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images.</retdesc></docstring>

The call function to the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionDepth2ImgPipeline.__call__.example">

Examples:

```py
>>> import torch
>>> import requests
>>> from PIL import Image

>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth",
...     torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")


>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)
>>> prompt = "two tigers"
>>> n_prompt = "bad, deformed, ugly, bad anatomy"
>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
```

</ExampleCodeBlock>






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_attention_slicing</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.enable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1978</source><parameters>[{"name": "slice_size", "val": ": typing.Union[int, str, NoneType] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int`, *optional*, defaults to `"auto"`) --
  When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
  `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor
in slices to compute attention in several steps. For more than one attention head, the computation is performed
sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

> [!WARNING]
> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch
> 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable
> this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs!



<ExampleCodeBlock anchor="diffusers.StableDiffusionDepth2ImgPipeline.enable_attention_slicing.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_attention_slicing</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.disable_attention_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L2015</source><parameters>[]</parameters></docstring>

Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is
computed in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1921</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this
option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed
up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.StableDiffusionDepth2ImgPipeline.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1952</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function will use `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function will use `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.StableDiffusionDepth2ImgPipeline.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector
<ExampleCodeBlock anchor="diffusers.StableDiffusionDepth2ImgPipeline.load_textual_inversion.example-2">

locally:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.

See [load_lora_into_text_encoder()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.
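
A minimal sketch of loading a LoRA into this pipeline (the repository id, weight file name, and adapter name below are illustrative placeholders, not real checkpoints):

```py
>>> import torch
>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
... ).to("cuda")

>>> # hypothetical repository and file; replace with a LoRA trained for this base model
>>> pipe.load_lora_weights(
...     "your-username/your-depth2img-lora",
...     weight_name="pytorch_lora_weights.safetensors",
...     adapter_name="style",
... )
```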




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `unet`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text
  encoder LoRA state dict because it comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training and you
  need to call this function on all processes. In this case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **unet_lora_adapter_metadata** --
  LoRA adapter metadata associated with the unet to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the UNet and text encoder.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionDepth2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py#L204</source><parameters>[{"name": "prompt", "val": ""}, {"name": "device", "val": ""}, {"name": "num_images_per_prompt", "val": ""}, {"name": "do_classifier_free_guidance", "val": ""}, {"name": "negative_prompt", "val": " = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div></div>

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</name><anchor>diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L11</source><parameters>[{"name": "images", "val": ": typing.Union[typing.List[PIL.Image.Image], numpy.ndarray]"}, {"name": "nsfw_content_detected", "val": ": typing.Optional[typing.List[bool]]"}]</parameters><paramsdesc>- **images** (`List[PIL.Image.Image]` or `np.ndarray`) --
  List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
  num_channels)`.
- **nsfw_content_detected** (`List[bool]`) --
  List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
  `None` if safety checking could not be performed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for Stable Diffusion pipelines.
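
For illustration, a minimal sketch of accessing both fields on the output of a Stable Diffusion pipeline (the checkpoint name is only an example):

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")

>>> output = pipe("a photo of an astronaut riding a horse on mars")
>>> image = output.images[0]  # list of PIL images by default
>>> flagged = output.nsfw_content_detected  # list of bools, or None if the safety checker is disabled
```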




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/depth2img.md" />

### Stable Diffusion XL
https://huggingface.co/docs/diffusers/main/api/pipelines/stable_diffusion/stable_diffusion_xl.md

# Stable Diffusion XL

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white%22">
</div>

Stable Diffusion XL (SDXL) was proposed in [SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis](https://huggingface.co/papers/2307.01952) by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach.

The abstract from the paper is:

*We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators.*

## Tips

- Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce [visual artifacts](https://github.com/huggingface/diffusers/issues/5433) because the solver becomes numerically unstable. To fix this issue, take a look at this [PR](https://github.com/huggingface/diffusers/pull/5541) which recommends for ODE/SDE solvers (see the scheduler sketch after this list):
	- set `use_karras_sigmas=True` or `use_lu_lambdas=True` to improve image quality
	- set `euler_at_final=True` if you're using a solver with uniform step sizes (DPM++2M or DPM++2M SDE)
- Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren't as good. Anything below 512x512 is not recommended and likely won't work well for default checkpoints like [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
- SDXL accepts a separate prompt for each of the two text encoders it was trained on. You can even pass different parts of the same prompt to each text encoder.
- SDXL output images can be improved by making use of a refiner model in an image-to-image setting.
- SDXL offers `negative_original_size`, `negative_crops_coords_top_left`, and `negative_target_size` to negatively condition the model on image resolution and cropping parameters.
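
The scheduler settings above can be applied by recreating the scheduler from the pipeline's existing config, and the dual-prompt tip simply means passing both `prompt` and `prompt_2`. A minimal sketch (the prompts are illustrative):

```py
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

>>> pipe = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... ).to("cuda")

>>> # rebuild the DPM++ 2M scheduler with Karras sigmas to avoid artifacts at low step counts
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(
...     pipe.scheduler.config, use_karras_sigmas=True, euler_at_final=True
... )

>>> # SDXL accepts a separate prompt for each text encoder via `prompt` and `prompt_2`
>>> image = pipe(
...     prompt="a photo of an astronaut riding a horse on mars",
...     prompt_2="cinematic lighting, highly detailed",
...     num_inference_steps=25,
... ).images[0]
```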

> [!TIP]
> To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the [Stable Diffusion XL](../../../using-diffusers/sdxl) guide.
>
> Check out the [Stability AI](https://huggingface.co/stabilityai) Hub organization for the official base and refiner model checkpoints!

## StableDiffusionXLPipeline[[diffusers.StableDiffusionXLPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLPipeline</name><anchor>diffusers.StableDiffusionXLPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L175</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-to-image generation using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods (a short loading sketch follows the list):
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters
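
As a short loading sketch for the methods listed above (the LoRA repository and file names below are placeholders; the IP-Adapter line follows the layout of the `h94/IP-Adapter` Hub repository):

```py
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline

>>> pipe = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... ).to("cuda")

>>> # load LoRA weights from a Hub repository (repository and file names are placeholders)
>>> pipe.load_lora_weights("some-user/some-sdxl-lora", weight_name="pytorch_lora_weights.safetensors")

>>> # load an IP-Adapter checkpoint trained for SDXL
>>> pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
```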





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L836</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise as determined by the discrete timesteps selected by the
  scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
  "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` instead
  of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891) `guidance_scale` is defined as `φ` in equation 16. of
  [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline

>>> pipe = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt).images[0]
```

</ExampleCodeBlock>







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L288</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
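
If you want to reuse text embeddings across several generations, `encode_prompt` can be called directly and its four outputs passed back to the pipeline. A minimal sketch (prompts are illustrative):

```py
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline

>>> pipe = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... ).to("cuda")

>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="a photo of an astronaut riding a horse on mars",
...     negative_prompt="blurry, low quality",
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )

>>> # reuse the pre-computed embeddings across several seeds without re-encoding the prompt
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
... ).images[0]
```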




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L771</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
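
A rough sketch of what this helper returns, assuming `pipe` is the `StableDiffusionXLPipeline` created in the earlier examples; the result has shape `(len(w), embedding_dim)`:

```py
>>> import torch

>>> # sinusoidal guidance-scale embedding, following the VDM reference implementation linked above
>>> w = torch.tensor([5.0, 7.5])
>>> emb = pipe.get_guidance_scale_embedding(w, embedding_dim=256)
>>> emb.shape
torch.Size([2, 256])
```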








</div></div>

## StableDiffusionXLImg2ImgPipeline[[diffusers.StableDiffusionXLImg2ImgPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLImg2ImgPipeline</name><anchor>diffusers.StableDiffusionXLImg2ImgPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L192</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the
  config of `stabilityai/stable-diffusion-xl-refiner-1-0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for image-to-image generation using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLImg2ImgPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L986</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "strength", "val": ": float = 0.3"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 5.0"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": "callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`torch.Tensor` or `PIL.Image.Image` or `np.ndarray` or `List[torch.Tensor]` or `List[PIL.Image.Image]` or `List[np.ndarray]`) --
  The image(s) to modify with the pipeline.
- **strength** (`float`, *optional*, defaults to 0.3) --
  Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
  will be used as a starting point, adding more noise to it the larger the `strength`. The number of
  denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
  be maximum and the denoising process will run for the full number of iterations specified in
  `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. Note that when
  `denoising_start` is specified, the value of `strength` will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refine Image
  Quality**](https://huggingface.co/docs/diffusers/using-diffusers/sdxl#refine-image-quality).
- **guidance_scale** (`float`, *optional*, defaults to 5.0) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. A higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) -- Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list whose length equals the number of
  IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **guidance_rescale** (`float`, *optional*, defaults to 0.0) --
  Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891) `guidance_scale` is defined as `φ` in equation 16. of
  [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when
  using zero terminal SNR.
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLImg2ImgPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionXLImg2ImgPipeline
>>> from diffusers.utils import load_image

>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"

>>> init_image = load_image(url).convert("RGB")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, image=init_image).images[0]
```

</ExampleCodeBlock>
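
To illustrate how `denoising_end` (on the base pipeline) and `denoising_start` (on this pipeline) combine in the "Mixture of Denoisers" setup referenced in the parameter descriptions above, a minimal sketch:

```py
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

>>> base = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... ).to("cuda")
>>> refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0",
...     text_encoder_2=base.text_encoder_2,
...     vae=base.vae,
...     torch_dtype=torch.float16,
... ).to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"

>>> # the base model handles the first 80% of the denoising steps and returns latents
>>> latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images

>>> # the refiner picks up at the same fraction and denoises the final 20%
>>> image = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]
```

Sharing `text_encoder_2` and `vae` between the two pipelines keeps memory usage down, since the refiner only adds its own UNet.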







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLImg2ImgPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L305</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.
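
The snippet below is a hedged sketch (not part of the pipeline's docstring) showing how the four embeddings returned by `encode_prompt` can be fed back into `__call__` so the prompt only needs to be encoded once; it assumes `pipe` and `init_image` from the example above are already defined.

```py
# Hedged sketch: pre-compute the SDXL embeddings once and reuse them across calls.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of an astronaut riding a horse on mars",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=init_image,
).images[0]
```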




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLImg2ImgPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py#L917</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298
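
As a rough usage sketch (assumption: the loaded UNet was trained with a guidance embedding, i.e. `pipe.unet.config.time_cond_proj_dim` is not `None`, as in LCM-style distilled checkpoints), the resulting embedding is typically passed to the UNet as a timestep condition:

```py
# Hedged sketch: only meaningful for guidance-distilled UNets (an assumption, not the general SDXL refiner).
import torch

guidance_scale_tensor = torch.tensor([7.5])  # one guidance scale per prompt in the batch
timestep_cond = pipe.get_guidance_scale_embedding(
    guidance_scale_tensor, embedding_dim=pipe.unet.config.time_cond_proj_dim
).to(device=pipe.device, dtype=pipe.unet.dtype)
# `timestep_cond` has shape (batch_size, time_cond_proj_dim) and enriches the timestep embedding.
```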








</div></div>

## StableDiffusionXLInpaintPipeline[[diffusers.StableDiffusionXLInpaintPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableDiffusionXLInpaintPipeline</name><anchor>diffusers.StableDiffusionXLInpaintPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py#L219</source><parameters>[{"name": "vae", "val": ": AutoencoderKL"}, {"name": "text_encoder", "val": ": CLIPTextModel"}, {"name": "text_encoder_2", "val": ": CLIPTextModelWithProjection"}, {"name": "tokenizer", "val": ": CLIPTokenizer"}, {"name": "tokenizer_2", "val": ": CLIPTokenizer"}, {"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "scheduler", "val": ": KarrasDiffusionSchedulers"}, {"name": "image_encoder", "val": ": CLIPVisionModelWithProjection = None"}, {"name": "feature_extractor", "val": ": CLIPImageProcessor = None"}, {"name": "requires_aesthetics_score", "val": ": bool = False"}, {"name": "force_zeros_for_empty_prompt", "val": ": bool = True"}, {"name": "add_watermarker", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **vae** ([AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) --
  Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- **text_encoder** (`CLIPTextModel`) --
  Frozen text-encoder. Stable Diffusion XL uses the text portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
  the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- **text_encoder_2** (`CLIPTextModelWithProjection`) --
  Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
  [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
  specifically the
  [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
  variant.
- **tokenizer** (`CLIPTokenizer`) --
  Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **tokenizer_2** (`CLIPTokenizer`) --
  Second Tokenizer of class
  [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- **unet** ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) -- Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** ([SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin)) --
  A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
  [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler).
- **requires_aesthetics_score** (`bool`, *optional*, defaults to `False`) --
  Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the config
  of `stabilityai/stable-diffusion-xl-refiner-1-0`.
- **force_zeros_for_empty_prompt** (`bool`, *optional*, defaults to `True`) --
  Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
  `stabilityai/stable-diffusion-xl-base-1-0`.
- **add_watermarker** (`bool`, *optional*) --
  Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
  watermark output images. If not defined, it will default to True if the package is installed, otherwise no
  watermarker will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Pipeline for text-guided image inpainting using Stable Diffusion XL.

This model inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

The pipeline also inherits the following loading methods:
- [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) for loading textual inversion embeddings
- [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) for loading `.ckpt` files
- [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights) for loading LoRA weights
- [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights) for saving LoRA weights
- [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter) for loading IP Adapters





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>__call__</name><anchor>diffusers.StableDiffusionXLInpaintPipeline.__call__</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py#L1091</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "mask_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None"}, {"name": "masked_image_latents", "val": ": Tensor = None"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}, {"name": "padding_mask_crop", "val": ": typing.Optional[int] = None"}, {"name": "strength", "val": ": float = 0.9999"}, {"name": "num_inference_steps", "val": ": int = 50"}, {"name": "timesteps", "val": ": typing.List[int] = None"}, {"name": "sigmas", "val": ": typing.List[float] = None"}, {"name": "denoising_start", "val": ": typing.Optional[float] = None"}, {"name": "denoising_end", "val": ": typing.Optional[float] = None"}, {"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "negative_prompt", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "negative_prompt_2", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "num_images_per_prompt", "val": ": typing.Optional[int] = 1"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "generator", "val": ": typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None"}, {"name": "latents", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "ip_adapter_image", "val": ": typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None"}, {"name": "ip_adapter_image_embeds", "val": ": typing.Optional[typing.List[torch.Tensor]] = None"}, {"name": "output_type", "val": ": typing.Optional[str] = 'pil'"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "original_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "target_size", "val": ": typing.Tuple[int, int] = None"}, {"name": "negative_original_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "negative_crops_coords_top_left", "val": ": typing.Tuple[int, int] = (0, 0)"}, {"name": "negative_target_size", "val": ": typing.Optional[typing.Tuple[int, int]] = None"}, {"name": "aesthetic_score", "val": ": float = 6.0"}, {"name": "negative_aesthetic_score", "val": ": float = 2.5"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}, {"name": 
"callback_on_step_end", "val": ": typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None"}, {"name": "callback_on_step_end_tensor_inputs", "val": ": typing.List[str] = ['latents']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
  instead.
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
  be masked out with `mask_image` and repainted according to `prompt`.
- **mask_image** (`PIL.Image.Image`) --
  `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
  repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
  to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
  instead of 3, so the expected shape would be `(B, H, W, 1)`.
- **height** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The height in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **width** (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor) --
  The width in pixels of the generated image. This is set to 1024 by default for the best results.
  Anything below 512 pixels won't work well for
  [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  and checkpoints that are not specifically fine-tuned on low resolutions.
- **padding_mask_crop** (`int`, *optional*, defaults to `None`) --
  The size of margin in the crop to be applied to the image and masking. If `None`, no crop is applied to
  image and mask_image. If `padding_mask_crop` is not `None`, it will first find a rectangular region
  with the same aspect ratio as the image that contains all masked areas, and then expand that area based
  on `padding_mask_crop`. The image and mask_image will then be cropped based on the expanded area before
  resizing to the original image size for inpainting. This is useful when the masked area is small while
  the image is large and contains information irrelevant for inpainting, such as background.
- **strength** (`float`, *optional*, defaults to 0.9999) --
  Conceptually, indicates how much to transform the masked portion of the reference `image`. Must be
  between 0 and 1. `image` will be used as a starting point, adding more noise to it the larger the
  `strength`. The number of denoising steps depends on the amount of noise initially added. When
  `strength` is 1, added noise will be maximum and the denoising process will run for the full number of
  iterations specified in `num_inference_steps`. A value of 1, therefore, essentially ignores the masked
  portion of the reference `image`. Note that when `denoising_start` is specified, the value of `strength`
  will be ignored.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
  in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
  passed will be used. Must be in descending order.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
  their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
  will be used.
- **denoising_start** (`float`, *optional*) --
  When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be
  bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and
  it is assumed that the passed `image` is a partly denoised image. Note that when this is specified,
  strength will be ignored. The `denoising_start` parameter is particularly beneficial when this pipeline
  is integrated into a "Mixture of Denoisers" multi-pipeline setup, as detailed in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **denoising_end** (`float`, *optional*) --
  When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
  completed before it is intentionally prematurely terminated. As a result, the returned sample will
  still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be
  denoised by a successor pipeline that has `denoising_start` set to 0.8 so that it only denoises the
  final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline
  forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
  Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output).
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  Guidance scale as defined in [Classifier-Free Diffusion
  Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
  of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
  `guidance_scale > 1`. Higher guidance scale encourages the model to generate images that are closely
  linked to the text `prompt`, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **ip_adapter_image** (`PipelineImageInput`, *optional*) --
  Optional image input to work with IP Adapters.
- **ip_adapter_image_embeds** (`List[torch.Tensor]`, *optional*) --
  Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of
  IP-adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should
  contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not
  provided, embeddings are computed from the `ip_adapter_image` input argument.
- **num_images_per_prompt** (`int`, *optional*, defaults to 1) --
  The number of images to generate per prompt.
- **eta** (`float`, *optional*, defaults to 0.0) --
  Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only
  applies to [schedulers.DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler), will be ignored for others.
- **generator** (`torch.Generator`, *optional*) --
  One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
  to make generation deterministic.
- **latents** (`torch.Tensor`, *optional*) --
  Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
  generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
  tensor will be generated by sampling using the supplied random `generator`.
- **output_type** (`str`, *optional*, defaults to `"pil"`) --
  The output format of the generated image. Choose between
  [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) instead of a
  plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
  `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
  explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
  `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
  `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  For most cases, `target_size` should be set to the desired height and width of the generated image. If
  not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
  section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_original_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a specific image resolution. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_crops_coords_top_left** (`Tuple[int]`, *optional*, defaults to (0, 0)) --
  To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
  micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **negative_target_size** (`Tuple[int]`, *optional*, defaults to (1024, 1024)) --
  To negatively condition the generation process based on a target image resolution. It should be the same
  as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
  information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- **aesthetic_score** (`float`, *optional*, defaults to 6.0) --
  Used to simulate an aesthetic score of the generated image by influencing the positive text condition.
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **negative_aesthetic_score** (`float`, *optional*, defaults to 2.5) --
  Part of SDXL's micro-conditioning as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). Can be used to
  simulate an aesthetic score of the generated image by influencing the negative text condition.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.
- **callback_on_step_end** (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*) --
  A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of
  each denoising step during inference with the following arguments: `callback_on_step_end(self:
  DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a
  list of all tensors as specified by `callback_on_step_end_tensor_inputs`.
- **callback_on_step_end_tensor_inputs** (`List`, *optional*) --
  The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
  will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
  `._callback_tensor_inputs` attribute of your pipeline class.</paramsdesc><paramgroups>0</paramgroups><rettype>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` or `tuple`</rettype><retdesc>`~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput` if `return_dict` is True, otherwise a
`tuple`. When returning a tuple, the first element is a list with the generated images.</retdesc></docstring>

Function invoked when calling the pipeline for generation.



<ExampleCodeBlock anchor="diffusers.StableDiffusionXLInpaintPipeline.__call__.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionXLInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     torch_dtype=torch.float16,
...     variant="fp16",
...     use_safetensors=True,
... )
>>> pipe.to("cuda")

>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

>>> init_image = load_image(img_url).convert("RGB")
>>> mask_image = load_image(mask_url).convert("RGB")

>>> prompt = "A majestic tiger sitting on a bench"
>>> image = pipe(
...     prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80
... ).images[0]
```

</ExampleCodeBlock>
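
Beyond the basic example, `callback_on_step_end` can be used to inspect or modify tensors between denoising steps. The sketch below is not from the official docs; it logs the latents after every step and reuses `pipe`, `init_image`, and `mask_image` from the example above.

```py
# Hedged sketch: a step-end callback that inspects the latents after each denoising step.
def on_step_end(pipeline, step_index, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]  # available because "latents" is requested below
    print(f"step {step_index} (t={int(timestep)}): latent mean {latents.mean().item():.4f}")
    return callback_kwargs  # returned tensors replace the pipeline's values for the next step


image = pipe(
    prompt="A majestic tiger sitting on a bench",
    image=init_image,
    mask_image=mask_image,
    callback_on_step_end=on_step_end,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```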







</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>encode_prompt</name><anchor>diffusers.StableDiffusionXLInpaintPipeline.encode_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py#L409</source><parameters>[{"name": "prompt", "val": ": str"}, {"name": "prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "device", "val": ": typing.Optional[torch.device] = None"}, {"name": "num_images_per_prompt", "val": ": int = 1"}, {"name": "do_classifier_free_guidance", "val": ": bool = True"}, {"name": "negative_prompt", "val": ": typing.Optional[str] = None"}, {"name": "negative_prompt_2", "val": ": typing.Optional[str] = None"}, {"name": "prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "negative_pooled_prompt_embeds", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "lora_scale", "val": ": typing.Optional[float] = None"}, {"name": "clip_skip", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **prompt** (`str` or `List[str]`, *optional*) --
  prompt to be encoded
- **prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
  used in both text-encoders
- **device** (`torch.device`) --
  torch device
- **num_images_per_prompt** (`int`) --
  number of images that should be generated per prompt
- **do_classifier_free_guidance** (`bool`) --
  whether to use classifier free guidance or not
- **negative_prompt** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation. If not defined, one has to pass
  `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
  less than `1`).
- **negative_prompt_2** (`str` or `List[str]`, *optional*) --
  The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
  `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- **prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
  provided, text embeddings will be generated from `prompt` input argument.
- **negative_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
  argument.
- **pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
  If not provided, pooled text embeddings will be generated from `prompt` input argument.
- **negative_pooled_prompt_embeds** (`torch.Tensor`, *optional*) --
  Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
  weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
  input argument.
- **lora_scale** (`float`, *optional*) --
  A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (`int`, *optional*) --
  Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
  the output of the pre-final layer will be used for computing the prompt embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

Encodes the prompt into text encoder hidden states.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_guidance_scale_embedding</name><anchor>diffusers.StableDiffusionXLInpaintPipeline.get_guidance_scale_embedding</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py#L1022</source><parameters>[{"name": "w", "val": ": Tensor"}, {"name": "embedding_dim", "val": ": int = 512"}, {"name": "dtype", "val": ": dtype = torch.float32"}]</parameters><paramsdesc>- **w** (`torch.Tensor`) --
  Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
- **embedding_dim** (`int`, *optional*, defaults to 512) --
  Dimension of the embeddings to generate.
- **dtype** (`torch.dtype`, *optional*, defaults to `torch.float32`) --
  Data type of the generated embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>Embedding vectors with shape `(len(w), embedding_dim)`.</retdesc></docstring>

See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_xl.md" />

### Pipeline
https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline.md

# Pipeline

## ModularPipeline[[diffusers.ModularPipeline]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ModularPipeline</name><anchor>diffusers.ModularPipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1418</source><parameters>[{"name": "blocks", "val": ": typing.Optional[diffusers.modular_pipelines.modular_pipeline.ModularPipelineBlocks] = None"}, {"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType] = None"}, {"name": "components_manager", "val": ": typing.Optional[diffusers.modular_pipelines.components_manager.ComponentsManager] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **blocks** -- ModularPipelineBlocks, the blocks to be used in the pipeline</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for all Modular pipelines.

> [!WARNING]
> This is an experimental feature and is likely to change in the future.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.ModularPipeline.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1599</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType]"}, {"name": "trust_remote_code", "val": ": typing.Optional[bool] = None"}, {"name": "components_manager", "val": ": typing.Optional[diffusers.modular_pipelines.components_manager.ComponentsManager] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, optional) --
  Path to a pretrained pipeline configuration. It will first try to load the config from
  `modular_model_index.json`, then fall back to `model_index.json` for compatibility with standard
  non-modular repositories. If the repo does not contain any pipeline config, it will be set to None
  during initialization.
- **trust_remote_code** (`bool`, optional) --
  Whether to trust remote code when loading the pipeline. This needs to be set to `True` if you want to
  create pipeline blocks based on the custom code in `pretrained_model_name_or_path`.
- **components_manager** (`ComponentsManager`, optional) --
  ComponentsManager instance for managing components across different pipelines and applying offloading
  strategies.
- **collection** (`str`, optional) --
  Collection name for organizing components in the ComponentsManager.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load a ModularPipeline from a Hugging Face Hub repo.
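
A hedged sketch of loading a modular pipeline from the Hub; the repo id below is a placeholder, not a real checkpoint.

```python
from diffusers import ModularPipeline, ComponentsManager

cm = ComponentsManager()

# "<org>/<modular-repo>" is a placeholder for a repo containing a modular_model_index.json
# (or a standard model_index.json).
pipe = ModularPipeline.from_pretrained(
    "<org>/<modular-repo>",
    trust_remote_code=True,  # only needed if the repo ships custom pipeline-block code
    components_manager=cm,
    collection="demo",
)
```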




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_component_spec</name><anchor>diffusers.ModularPipeline.get_component_spec</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1942</source><parameters>[{"name": "name", "val": ": str"}]</parameters><retdesc>- a copy of the ComponentSpec object for the given component name</retdesc></docstring>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_components</name><anchor>diffusers.ModularPipeline.load_components</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L2080</source><parameters>[{"name": "names", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **names** -- List of component names to load. If None, will load all components with
  default_creation_method == "from_pretrained". If provided as a list or string, will load only the
  specified components.
- ****kwargs** -- Additional kwargs passed to `from_pretrained()`. Can be:
  - a single value applied to all components being loaded, e.g. torch_dtype=torch.bfloat16
  - a dict, e.g. torch_dtype={"unet": torch.bfloat16, "default": torch.float32}
  - loading fields such as `repo`, `variant`, or `revision`, which override the corresponding values in the
    ComponentSpec of the components being loaded.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load selected components from specs.
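
A hedged sketch of the different `kwargs` forms described above; the component names are assumed to exist in the pipeline's specs.

```python
import torch

# Load every component whose default_creation_method is "from_pretrained"
pipe.load_components(torch_dtype=torch.float16)

# Load only selected components, with a per-component dtype mapping
pipe.load_components(
    names=["unet", "vae"],
    torch_dtype={"unet": torch.bfloat16, "default": torch.float32},
)
```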




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>register_components</name><anchor>diffusers.ModularPipeline.register_components</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1737</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- ****kwargs** -- Keyword arguments where keys are component names and values are component objects.
  E.g., register_components(unet=unet_model, text_encoder=encoder_model)</paramsdesc><paramgroups>0</paramgroups></docstring>

Register components with their corresponding specifications.

This method is responsible for:
1. Setting component objects as attributes on the loader (e.g., self.unet = unet)
2. Updating the config dict, which will be saved as `modular_model_index.json` during `save_pretrained` (only
   for from_pretrained components)
3. Adding components to the component manager if one is attached (only for from_pretrained components)

This method is called when:
- Components are first initialized in __init__:
  - from_pretrained components are not loaded during __init__, so they are registered as None;
  - non-from_pretrained components are created during __init__ and registered as the objects themselves
- Components are updated with the `update_components()` method: e.g. loader.update_components(unet=unet) or
  loader.update_components(guider=guider_spec)
- (from_pretrained) Components are loaded with the `load_components()` method: e.g.
  loader.load_components(names=["unet"]) or loader.load_components() to load all default components



Notes:
- When registering None for a component, it sets the attribute to None but still syncs specs with the config
  dict, which will be saved as `modular_model_index.json` during `save_pretrained`
- component_specs are updated to match the new component outside of this method, e.g. in
  `update_components()` method


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>diffusers.ModularPipeline.save_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1693</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Path to the directory where the pipeline will be saved.
- **push_to_hub** (`bool`, optional) --
  Whether to push the pipeline to the huggingface hub.
- ****kwargs** -- Additional arguments passed to `save_config()` method</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the pipeline configuration to a directory. It does not save components; you need to save them separately.
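
For example (a minimal sketch):

```python
# Saves only the pipeline config (modular_model_index.json); components must be saved separately.
pipe.save_pretrained("my-modular-pipeline")
```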




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to</name><anchor>diffusers.ModularPipeline.to</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L2154</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **dtype** (`torch.dtype`, *optional*) --
  Returns a pipeline with the specified
  [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
- **device** (`torch.device`, *optional*) --
  Returns a pipeline with the specified
  [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)
- **silence_dtype_warnings** (`bool`, *optional*, defaults to `False`) --
  Whether to omit warnings if the target `dtype` is not compatible with the target `device`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline)</rettype><retdesc>The pipeline converted to the specified `dtype` and/or `device`.</retdesc></docstring>

Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the
arguments of `self.to(*args, **kwargs)`.

> [!TIP]
> If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is.
> Otherwise, the returned pipeline is a copy of self with the desired torch.dtype and torch.device.


Here are the ways to call `to`:

- `to(dtype, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified
  [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
- `to(device, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified
  [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)
- `to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the
  specified [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) and
  [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)
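
For instance (a hedged sketch, assuming `pipe` is an already-loaded ModularPipeline):

```python
import torch

pipe = pipe.to("cuda")                              # device only
pipe = pipe.to(torch.float16)                       # dtype only
pipe = pipe.to(device="cuda", dtype=torch.float16)  # both at once
```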








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>update_components</name><anchor>diffusers.ModularPipeline.update_components</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1949</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- ****kwargs** -- Component objects, ComponentSpec objects, or configuration values to update:
  - Component objects: Only supports components whose specs can be extracted with the
    `ComponentSpec.from_component()` method, i.e. components created with ComponentSpec.load() or
    ConfigMixin subclasses that aren't nn.Modules (e.g., `unet=new_unet, text_encoder=new_encoder`)
  - ComponentSpec objects: Only supports default_creation_method == "from_config", will call create()
    method to create a new component (e.g., `guider=ComponentSpec(name="guider",
    type_hint=ClassifierFreeGuidance, config={...}, default_creation_method="from_config")`)
  - Configuration values: Simple values to update configuration settings (e.g.,
    `requires_safety_checker=False`)</paramsdesc><paramgroups>0</paramgroups><raises>- ``ValueError`` -- If a component object is not supported in ComponentSpec.from_component() method:
  - nn.Module components without a valid `_diffusers_load_id` attribute
  - Non-ConfigMixin components without a valid `_diffusers_load_id` attribute</raises><raisederrors>``ValueError``</raisederrors></docstring>

Update components and configuration values and specs after the pipeline has been instantiated.

This method allows you to:
1. Replace existing components with new ones (e.g., updating `self.unet` or `self.text_encoder`)
2. Update configuration values (e.g., changing `self.requires_safety_checker` flag)

In addition to updating the components and configuration values as pipeline attributes, the method also
updates:
- the corresponding specs in `_component_specs` and `_config_specs`
- the `config` dict, which will be saved as `modular_model_index.json` during `save_pretrained`







<ExampleCodeBlock anchor="diffusers.ModularPipeline.update_components.example">

Examples:
```python
# Update multiple components at once
pipeline.update_components(unet=new_unet_model, text_encoder=new_text_encoder)

# Update configuration values
pipeline.update_components(requires_safety_checker=False)

# Update both components and configs together
pipeline.update_components(unet=new_unet_model, requires_safety_checker=False)

# Update with ComponentSpec objects (from_config only)
pipeline.update_components(
    guider=ComponentSpec(
        name="guider",
        type_hint=ClassifierFreeGuidance,
        config={"guidance_scale": 5.0},
        default_creation_method="from_config",
    )
)
```

</ExampleCodeBlock>

Notes:
- Components with trained weights must be created using ComponentSpec.load(). If the component has not been
  shared on the Hugging Face Hub and you don't have loading specs, you can upload it using `push_to_hub()`
- ConfigMixin objects without weights (e.g., schedulers, guiders) can be passed directly
- ComponentSpec objects with default_creation_method="from_pretrained" are not supported in
  update_components()


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/modular_diffusers/pipeline.md" />

### Components and configs
https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline_components.md

# Components and configs

## ComponentSpec[[diffusers.ComponentSpec]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ComponentSpec</name><anchor>diffusers.ComponentSpec</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L71</source><parameters>[{"name": "name", "val": ": typing.Optional[str] = None"}, {"name": "type_hint", "val": ": typing.Optional[typing.Type] = None"}, {"name": "description", "val": ": typing.Optional[str] = None"}, {"name": "config", "val": ": typing.Optional[diffusers.configuration_utils.FrozenDict] = None"}, {"name": "repo", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "subfolder", "val": ": typing.Optional[str] = ''"}, {"name": "variant", "val": ": typing.Optional[str] = None"}, {"name": "revision", "val": ": typing.Optional[str] = None"}, {"name": "default_creation_method", "val": ": typing.Literal['from_config', 'from_pretrained'] = 'from_pretrained'"}]</parameters><paramsdesc>- **name** -- Name of the component
- **type_hint** -- Type of the component (e.g. UNet2DConditionModel)
- **description** -- Optional description of the component
- **config** -- Optional config dict for __init__ creation
- **repo** -- Optional repo path for from_pretrained creation
- **subfolder** -- Optional subfolder in repo
- **variant** -- Optional variant in repo
- **revision** -- Optional revision in repo
- **default_creation_method** -- Preferred creation method - "from_config" or "from_pretrained"</paramsdesc><paramgroups>0</paramgroups></docstring>
Specification for a pipeline component.

A component can be created in two ways:
1. From scratch using `__init__` with a config dict
2. Using `from_pretrained` (see the sketch below)
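
Below is a hedged sketch of both creation paths; the repo, subfolder, and config values are illustrative assumptions, not requirements of the API, and `ComponentSpec` is assumed to be importable from the top-level `diffusers` namespace like the other modular classes on this page.

```python
import torch
from diffusers import ComponentSpec, UNet2DConditionModel, EulerDiscreteScheduler

# from_pretrained path: the spec records where the component's weights live (illustrative repo/subfolder)
unet_spec = ComponentSpec(
    name="unet",
    type_hint=UNet2DConditionModel,
    repo="stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="unet",
    default_creation_method="from_pretrained",
)
unet = unet_spec.load(torch_dtype=torch.float16)

# from_config path: the component is built directly from a config dict
scheduler_spec = ComponentSpec(
    name="scheduler",
    type_hint=EulerDiscreteScheduler,
    config={"num_train_timesteps": 1000},
    default_creation_method="from_config",
)
scheduler = scheduler_spec.create()
```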





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>create</name><anchor>diffusers.ComponentSpec.create</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L232</source><parameters>[{"name": "config", "val": ": typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any], NoneType] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Create component using from_config with config.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>decode_load_id</name><anchor>diffusers.ComponentSpec.decode_load_id</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L194</source><parameters>[{"name": "load_id", "val": ": str"}]</parameters><paramsdesc>- **load_id** -- The load_id string to decode, format: "repo|subfolder|variant|revision"
  where None values are represented as "null"</paramsdesc><paramgroups>0</paramgroups><retdesc>Dict mapping loading field names to their values. e.g. {
"repo": "path/to/repo", "subfolder": "subfolder", "variant": "variant", "revision": "revision"
} If a segment value is "null", it's replaced with None. Returns None if load_id is "null" (indicating
component not created with `load` method).</retdesc></docstring>

Decode a load_id string back into a dictionary of loading fields and values.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_component</name><anchor>diffusers.ComponentSpec.from_component</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L115</source><parameters>[{"name": "name", "val": ": str"}, {"name": "component", "val": ": typing.Any"}]</parameters><paramsdesc>- **name** -- Name of the component
- **component** -- Component object to create spec from</paramsdesc><paramgroups>0</paramgroups><retdesc>ComponentSpec object</retdesc><raises>- ``ValueError`` -- If component is not supported (e.g. nn.Module without load_id, non-ConfigMixin)</raises><raisederrors>``ValueError``</raisederrors></docstring>
Create a ComponentSpec from a Component.

Currently supports:
- Components created with `ComponentSpec.load()` method
- Components that are ConfigMixin subclasses but not nn.Modules (e.g. schedulers, guiders)










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load</name><anchor>diffusers.ComponentSpec.load</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L260</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>
Load component using from_pretrained.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>loading_fields</name><anchor>diffusers.ComponentSpec.loading_fields</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L175</source><parameters>[]</parameters></docstring>

Return the names of all loading-related fields (i.e. those whose field.metadata["loading"] is True).


</div></div>

## ConfigSpec[[diffusers.modular_pipelines.ConfigSpec]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.ConfigSpec</name><anchor>diffusers.modular_pipelines.ConfigSpec</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L298</source><parameters>[{"name": "name", "val": ": str"}, {"name": "default", "val": ": typing.Any"}, {"name": "description", "val": ": typing.Optional[str] = None"}]</parameters></docstring>
Specification for a pipeline configuration parameter.

</div>

## ComponentsManager[[diffusers.ComponentsManager]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ComponentsManager</name><anchor>diffusers.ComponentsManager</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L281</source><parameters>[]</parameters></docstring>

A central registry and management system for model components across multiple pipelines.

[ComponentsManager](/docs/diffusers/main/en/api/modular_diffusers/pipeline_components#diffusers.ComponentsManager) provides a unified way to register, track, and reuse model components (like UNet, VAE, text
encoders, etc.) across different modular pipelines. It includes features for duplicate detection, memory
management, and component organization.

> [!WARNING]
> This is an experimental feature and is likely to change in the future.

<ExampleCodeBlock anchor="diffusers.ComponentsManager.example">

Example:
```python
from diffusers import ComponentsManager

# Create a components manager
cm = ComponentsManager()

# Add components
cm.add("unet", unet_model, collection="sdxl")
cm.add("vae", vae_model, collection="sdxl")

# Enable auto offloading
cm.enable_auto_cpu_offload()

# Retrieve components
unet = cm.get_one(name="unet", collection="sdxl")
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add</name><anchor>diffusers.ComponentsManager.add</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L374</source><parameters>[{"name": "name", "val": ": str"}, {"name": "component", "val": ": typing.Any"}, {"name": "collection", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **name** (str) -- The name of the component
- **component** (Any) -- The component to add
- **collection** (Optional[str]) -- The collection to add the component to</paramsdesc><paramgroups>0</paramgroups><rettype>str</rettype><retdesc>The unique component ID, which is generated as "{name}_{id(component)}" where
id(component) is Python's built-in unique identifier for the object</retdesc></docstring>

Add a component to the ComponentsManager.
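
A short sketch of the returned ID, assuming `cm` and `unet_model` are defined as in the class-level example above:

```python
comp_id = cm.add("unet", unet_model, collection="sdxl")
# comp_id has the form f"{name}_{id(component)}", e.g. "unet_140389218234896"
same_unet = cm.get_one(component_id=comp_id)
```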








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_auto_cpu_offload</name><anchor>diffusers.ComponentsManager.disable_auto_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L729</source><parameters>[]</parameters></docstring>

Disable automatic CPU offloading for all components.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_auto_cpu_offload</name><anchor>diffusers.ComponentsManager.enable_auto_cpu_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L682</source><parameters>[{"name": "device", "val": ": typing.Union[str, int, torch.device] = None"}, {"name": "memory_reserve_margin", "val": " = '3GB'"}]</parameters><paramsdesc>- **device** (Union[str, int, torch.device]) -- The execution device where models are moved for forward passes
- **memory_reserve_margin** (str) -- The memory reserve margin to use, default is 3GB. This is the amount of
  memory to keep free on the device to avoid running out of memory during model
  execution (e.g., for intermediate activations, gradients, etc.)</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable automatic CPU offloading for all components.

The algorithm works as follows:
1. All models start on CPU by default
2. When a model's forward pass is called, it's moved to the execution device
3. If there's insufficient memory, other models on the device are moved back to CPU
4. The system tries to offload the smallest combination of models that frees enough memory
5. Models stay on the execution device until another model needs memory and forces them off
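
A usage sketch of the algorithm above (the values are illustrative):

```python
# Keep roughly 3GB free on the GPU; components migrate between CPU and "cuda" on demand.
cm.enable_auto_cpu_offload(device="cuda", memory_reserve_margin="3GB")

# ... run one or more pipelines whose components are registered in `cm` ...

cm.disable_auto_cpu_offload()
```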




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_components_by_ids</name><anchor>diffusers.ComponentsManager.get_components_by_ids</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L1023</source><parameters>[{"name": "ids", "val": ": typing.List[str]"}, {"name": "return_dict_with_names", "val": ": typing.Optional[bool] = True"}]</parameters><paramsdesc>- **ids** (List[str]) --
  List of component IDs
- **return_dict_with_names** (Optional[bool]) --
  Whether to return a dictionary with component names as keys:</paramsdesc><paramgroups>0</paramgroups><rettype>Dict[str, Any]</rettype><retdesc>Dictionary of components.
- If return_dict_with_names=True, keys are component names.
- If return_dict_with_names=False, keys are component IDs.</retdesc><raises>- ``ValueError`` -- If duplicate component names are found in the search results when return_dict_with_names=True</raises><raisederrors>``ValueError``</raisederrors></docstring>

Get components by a list of IDs.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_components_by_names</name><anchor>diffusers.ComponentsManager.get_components_by_names</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L1056</source><parameters>[{"name": "names", "val": ": typing.List[str]"}, {"name": "collection", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **names** (List[str]) -- List of component names
- **collection** (Optional[str]) -- Optional collection to filter by</paramsdesc><paramgroups>0</paramgroups><rettype>Dict[str, Any]</rettype><retdesc>Dictionary of components with component names as keys</retdesc><raises>- ``ValueError`` -- If duplicate component names are found in the search results</raises><raisederrors>``ValueError``</raisederrors></docstring>

Get components by a list of names, optionally filtered by collection.












</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_ids</name><anchor>diffusers.ComponentsManager.get_ids</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L1005</source><parameters>[{"name": "names", "val": ": typing.Union[str, typing.List[str]] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **names** (Union[str, List[str]]) -- List of component names
- **collection** (Optional[str]) -- Optional collection to filter by</paramsdesc><paramgroups>0</paramgroups><rettype>List[str]</rettype><retdesc>List of component IDs</retdesc></docstring>

Get component IDs by a list of names, optionally filtered by collection.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_model_info</name><anchor>diffusers.ComponentsManager.get_model_info</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L746</source><parameters>[{"name": "component_id", "val": ": str"}, {"name": "fields", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}]</parameters><paramsdesc>- **component_id** (str) -- Name of the component to get info for
- **fields** (Optional[Union[str, List[str]]]) --
  Field(s) to return. Can be a string for single field or list of fields. If None, uses the
  available_info_fields setting.</paramsdesc><paramgroups>0</paramgroups><retdesc>Dictionary containing requested component metadata. If fields is specified, returns only those fields.
Otherwise, returns all fields.</retdesc></docstring>
Get comprehensive information about a component.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_one</name><anchor>diffusers.ComponentsManager.get_one</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L960</source><parameters>[{"name": "component_id", "val": ": typing.Optional[str] = None"}, {"name": "name", "val": ": typing.Optional[str] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}, {"name": "load_id", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **component_id** (Optional[str]) -- Optional component ID to get
- **name** (Optional[str]) -- Component name or pattern
- **collection** (Optional[str]) -- Optional collection to filter by
- **load_id** (Optional[str]) -- Optional load_id to filter by</paramsdesc><paramgroups>0</paramgroups><retdesc>A single component</retdesc><raises>- ``ValueError`` -- If no components match or multiple components match</raises><raisederrors>``ValueError``</raisederrors></docstring>

Get a single component either by:
- searching by name (pattern matching), collection, or load_id, or
- passing in a component_id.

Raises an error if no components match or if multiple components match.










</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>remove</name><anchor>diffusers.ComponentsManager.remove</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L463</source><parameters>[{"name": "component_id", "val": ": str = None"}]</parameters><paramsdesc>- **component_id** (str) -- The ID of the component to remove</paramsdesc><paramgroups>0</paramgroups></docstring>

Remove a component from the ComponentsManager.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>remove_from_collection</name><anchor>diffusers.ComponentsManager.remove_from_collection</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L445</source><parameters>[{"name": "component_id", "val": ": str"}, {"name": "collection", "val": ": str"}]</parameters></docstring>

Remove a component from a collection.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>search_components</name><anchor>diffusers.ComponentsManager.search_components</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/components_manager.py#L496</source><parameters>[{"name": "names", "val": ": typing.Optional[str] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}, {"name": "load_id", "val": ": typing.Optional[str] = None"}, {"name": "return_dict_with_names", "val": ": bool = True"}]</parameters><paramsdesc>- **names** -- Component name(s) or pattern(s)
  Patterns:
  - "unet" : match any component with base name "unet" (e.g., unet_123abc)
  - "!unet" : everything except components with base name "unet"
  - "unet*" : anything with base name starting with "unet"
  - "!unet*" : anything with base name NOT starting with "unet"
  - "*unet*" : anything with base name containing "unet"
  - "!*unet*" : anything with base name NOT containing "unet"
  - "refiner|vae|unet" : anything with base name exactly matching "refiner", "vae", or "unet"
  - "!refiner|vae|unet" : anything with base name NOT exactly matching "refiner", "vae", or "unet"
  - "unet*|vae*" : anything with base name starting with "unet" OR starting with "vae"
- **collection** -- Optional collection to filter by
- **load_id** -- Optional load_id to filter by
- **return_dict_with_names** --
  If True, returns a dictionary with component names as keys and throws an error if
  multiple components with the same name are found. If False, returns a dictionary
  with component IDs as keys</paramsdesc><paramgroups>0</paramgroups><retdesc>Dictionary mapping component names to components if return_dict_with_names=True, or a dictionary mapping
component IDs to components if return_dict_with_names=False</retdesc></docstring>

Search components by name with simple pattern matching. Optionally filter by collection or load_id.
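A short sketch of the pattern syntax in practice, assuming components named `unet`, `vae`, and `text_encoder` have already been added (the commented calls are placeholders):

```python
from diffusers import ComponentsManager

manager = ComponentsManager()
# manager.add("unet", unet, collection="sdxl")
# manager.add("vae", vae, collection="sdxl")
# manager.add("text_encoder", text_encoder, collection="sdxl")

# Exact base-name match
unets = manager.search_components("unet")

# Anything whose base name starts with "unet" OR "vae", restricted to a collection
sdxl_models = manager.search_components("unet*|vae*", collection="sdxl")

# Everything except the text encoder, keyed by component ID instead of name
others = manager.search_components("!text_encoder", return_dict_with_names=False)
```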






</div></div>

## InsertableDict[[diffusers.modular_pipelines.InsertableDict]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.InsertableDict</name><anchor>diffusers.modular_pipelines.InsertableDict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline_utils.py#L33</source><parameters>""</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/modular_diffusers/pipeline_components.md" />

### Pipeline states
https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline_states.md

# Pipeline states

## PipelineState[[diffusers.modular_pipelines.PipelineState]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.PipelineState</name><anchor>diffusers.modular_pipelines.PipelineState</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L69</source><parameters>[{"name": "values", "val": ": typing.Dict[str, typing.Any] = <factory>"}, {"name": "kwargs_mapping", "val": ": typing.Dict[str, typing.List[str]] = <factory>"}]</parameters></docstring>

`PipelineState` stores the state of a pipeline. It is used to pass data between pipeline blocks.
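A small sketch of how blocks typically read and write state; the keys and the `kwargs_type` tag below are illustrative:

```python
from diffusers.modular_pipelines import PipelineState

state = PipelineState()

# Store values; an optional kwargs_type tags related entries so they can be
# fetched together later.
state.set("height", 1024)
state.set("width", 1024)
state.set("prompt_embeds", None, kwargs_type="denoiser_input_fields")

print(state.get("height"))                           # 1024
print(state.get(["height", "width"]))                # {'height': 1024, 'width': 1024}
print(state.get_by_kwargs("denoiser_input_fields"))  # {'prompt_embeds': None}
print(state.to_dict())                               # plain dictionary view of the state
```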



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get</name><anchor>diffusers.modular_pipelines.PipelineState.get</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L94</source><parameters>[{"name": "keys", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "default", "val": ": typing.Any = None"}]</parameters><paramsdesc>- **keys** (Union[str, List[str]]) -- Key or list of keys for the values
- **default** (Any) -- The default value to return if not found</paramsdesc><paramgroups>0</paramgroups><rettype>Union[Any, Dict[str, Any]]</rettype><retdesc>Single value if keys is str, dictionary of values if keys is list</retdesc></docstring>

Get one or multiple values from the pipeline state.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_by_kwargs</name><anchor>diffusers.modular_pipelines.PipelineState.get_by_kwargs</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L109</source><parameters>[{"name": "kwargs_type", "val": ": str"}]</parameters><paramsdesc>- **kwargs_type** (str) -- The kwargs_type to filter by</paramsdesc><paramgroups>0</paramgroups><rettype>Dict[str, Any]</rettype><retdesc>Dictionary of values with matching kwargs_type</retdesc></docstring>

Get all values with matching kwargs_type.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set</name><anchor>diffusers.modular_pipelines.PipelineState.set</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L77</source><parameters>[{"name": "key", "val": ": str"}, {"name": "value", "val": ": typing.Any"}, {"name": "kwargs_type", "val": ": str = None"}]</parameters><paramsdesc>- **key** (str) -- The key for the value
- **value** (Any) -- The value to store
- **kwargs_type** (str) -- The kwargs_type with which the value is associated</paramsdesc><paramgroups>0</paramgroups></docstring>

Add a value to the pipeline state.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>to_dict</name><anchor>diffusers.modular_pipelines.PipelineState.to_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L122</source><parameters>[]</parameters></docstring>

Convert PipelineState to a dictionary.


</div></div>

## BlockState[[diffusers.modular_pipelines.BlockState]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.BlockState</name><anchor>diffusers.modular_pipelines.BlockState</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L153</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters></docstring>

Container for block state data with attribute access and formatted representation.
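A minimal sketch of the attribute-style access; the field names are illustrative:

```python
from diffusers.modular_pipelines import BlockState

# Keyword arguments become attributes on the container.
block_state = BlockState(height=1024, width=1024, latents=None)

print(block_state.height)     # 1024
block_state.latents = "..."   # attributes can be read and assigned freely

print(block_state.as_dict())  # {'height': 1024, 'width': 1024, 'latents': '...'}
```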



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>as_dict</name><anchor>diffusers.modular_pipelines.BlockState.as_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L170</source><parameters>[]</parameters><rettype>Dict[str, Any]</rettype><retdesc>Dictionary containing all attributes of the BlockState</retdesc></docstring>

Convert BlockState to a dictionary.






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/modular_diffusers/pipeline_states.md" />

### Pipeline blocks
https://huggingface.co/docs/diffusers/main/api/modular_diffusers/pipeline_blocks.md

# Pipeline blocks

## ModularPipelineBlocks[[diffusers.ModularPipelineBlocks]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ModularPipelineBlocks</name><anchor>diffusers.ModularPipelineBlocks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L224</source><parameters>[]</parameters></docstring>

Base class for all Pipeline Blocks: PipelineBlock, AutoPipelineBlocks, SequentialPipelineBlocks,
LoopSequentialPipelineBlocks

[ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks) provides methods to load and save the definitions of pipeline blocks.

> [!WARNING]
> This is an experimental feature and is likely to change in the future.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>combine_inputs</name><anchor>diffusers.ModularPipelineBlocks.combine_inputs</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L429</source><parameters>[{"name": "*named_input_lists", "val": ": typing.List[typing.Tuple[str, typing.List[diffusers.modular_pipelines.modular_pipeline_utils.InputParam]]]"}]</parameters><paramsdesc>- **named_input_lists** -- List of tuples containing (block_name, input_param_list) pairs</paramsdesc><paramgroups>0</paramgroups><rettype>List[InputParam]</rettype><retdesc>Combined list of unique InputParam objects</retdesc></docstring>

Combines multiple lists of InputParam objects from different blocks. For duplicate inputs, updates only if the
current default value is None and the new default value is not None. Warns if multiple non-None default values
exist for the same input.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>combine_outputs</name><anchor>diffusers.ModularPipelineBlocks.combine_outputs</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L472</source><parameters>[{"name": "*named_output_lists", "val": ": typing.List[typing.Tuple[str, typing.List[diffusers.modular_pipelines.modular_pipeline_utils.OutputParam]]]"}]</parameters><paramsdesc>- **named_output_lists** -- List of tuples containing (block_name, output_param_list) pairs</paramsdesc><paramgroups>0</paramgroups><rettype>List[OutputParam]</rettype><retdesc>Combined list of unique OutputParam objects</retdesc></docstring>

Combines multiple lists of OutputParam objects from different blocks. For duplicate outputs, keeps the first
occurrence of each output name.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_block_state</name><anchor>diffusers.ModularPipelineBlocks.get_block_state</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L372</source><parameters>[{"name": "state", "val": ": PipelineState"}]</parameters></docstring>
Get all inputs and intermediates in one dictionary.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>init_pipeline</name><anchor>diffusers.ModularPipelineBlocks.init_pipeline</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L351</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType] = None"}, {"name": "components_manager", "val": ": typing.Optional[diffusers.modular_pipelines.components_manager.ComponentsManager] = None"}, {"name": "collection", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Create a ModularPipeline; optionally accepts a modular repository (`pretrained_model_name_or_path`) to load from the Hub.


</div></div>

## SequentialPipelineBlocks[[diffusers.modular_pipelines.SequentialPipelineBlocks]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.SequentialPipelineBlocks</name><anchor>diffusers.modular_pipelines.SequentialPipelineBlocks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L776</source><parameters>[]</parameters><paramsdesc>- **block_classes** -- List of block classes to be used
- **block_names** -- List of prefixes for each block</paramsdesc><paramgroups>0</paramgroups></docstring>

A pipeline blocks class that combines multiple pipeline block classes into one. When called, it calls each block in
sequence.

This class inherits from [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks). Check the superclass documentation for the generic methods the
library implements for all the pipeline blocks (such as loading or saving etc.)

> [!WARNING]
> This is an experimental feature and is likely to change in the future.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_blocks_dict</name><anchor>diffusers.modular_pipelines.SequentialPipelineBlocks.from_blocks_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L820</source><parameters>[{"name": "blocks_dict", "val": ": typing.Dict[str, typing.Any]"}, {"name": "description", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **blocks_dict** -- Dictionary mapping block names to block classes or instances</paramsdesc><paramgroups>0</paramgroups><retdesc>A new SequentialPipelineBlocks instance</retdesc></docstring>
Creates a SequentialPipelineBlocks instance from a dictionary of blocks.






</div></div>

## LoopSequentialPipelineBlocks[[diffusers.modular_pipelines.LoopSequentialPipelineBlocks]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.LoopSequentialPipelineBlocks</name><anchor>diffusers.modular_pipelines.LoopSequentialPipelineBlocks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1131</source><parameters>[]</parameters><paramsdesc>- **block_classes** -- List of block classes to be used
- **block_names** -- List of prefixes for each block</paramsdesc><paramgroups>0</paramgroups></docstring>

A pipeline blocks class that combines multiple pipeline block classes into a for loop. When called, it calls each
block in sequence.

This class inherits from [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks). Check the superclass documentation for the generic methods the
library implements for all the pipeline blocks (such as loading or saving etc.)

> [!WARNING]
> This is an experimental feature and is likely to change in the future.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_blocks_dict</name><anchor>diffusers.modular_pipelines.LoopSequentialPipelineBlocks.from_blocks_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L1283</source><parameters>[{"name": "blocks_dict", "val": ": typing.Dict[str, typing.Any]"}]</parameters><paramsdesc>- **blocks_dict** -- Dictionary mapping block names to block instances</paramsdesc><paramgroups>0</paramgroups><retdesc>A new LoopSequentialPipelineBlocks instance</retdesc></docstring>

Creates a LoopSequentialPipelineBlocks instance from a dictionary of blocks.






</div></div>

## AutoPipelineBlocks[[diffusers.modular_pipelines.AutoPipelineBlocks]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.modular_pipelines.AutoPipelineBlocks</name><anchor>diffusers.modular_pipelines.AutoPipelineBlocks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/modular_pipelines/modular_pipeline.py#L519</source><parameters>[]</parameters><paramsdesc>- **block_classes** -- List of block classes to be used
- **block_names** -- List of prefixes for each block
- **block_trigger_inputs** -- List of input names that trigger specific blocks, with None for default</paramsdesc><paramgroups>0</paramgroups></docstring>

A pipeline blocks class that automatically selects which block to run based on the inputs.

This class inherits from [ModularPipelineBlocks](/docs/diffusers/main/en/api/modular_diffusers/pipeline_blocks#diffusers.ModularPipelineBlocks). Check the superclass documentation for the generic methods the
library implements for all the pipeline blocks (such as loading or saving etc.)

> [!WARNING]
> This is an experimental feature and is likely to change in the future.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/modular_diffusers/pipeline_blocks.md" />

### Guiders
https://huggingface.co/docs/diffusers/main/api/modular_diffusers/guiders.md

# Guiders

Guiders are components in Modular Diffusers that control how the diffusion process is guided during generation. They implement various guidance techniques to improve generation quality and control.

## BaseGuidance[[diffusers.guiders.guider_utils.BaseGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.guiders.guider_utils.BaseGuidance</name><anchor>diffusers.guiders.guider_utils.BaseGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/guider_utils.py#L36</source><parameters>[{"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters></docstring>
Base class providing the skeleton for implementing guidance techniques.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>cleanup_models</name><anchor>diffusers.guiders.guider_utils.BaseGuidance.cleanup_models</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/guider_utils.py#L119</source><parameters>[{"name": "denoiser", "val": ": Module"}]</parameters></docstring>

Cleans up the models for the guidance technique after a given batch of data. This method should be overridden
in subclasses to implement specific model cleanup logic. It is useful for removing any hooks or other stateful
modifications made during `prepare_models`.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.guiders.guider_utils.BaseGuidance.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/guider_utils.py#L204</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType] = None"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}, {"name": "return_unused_kwargs", "val": " = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the guider configuration
    saved with `~BaseGuidance.save_pretrained`.
- **subfolder** (`str`, *optional*) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  Whether kwargs that are not consumed by the Python class should be returned or not.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info(`bool`,** *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only(`bool`,** *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a guider from a pre-defined JSON configuration file in a local directory or Hub repository.



> [!TIP]
> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log in with
> `hf auth login`. You can also activate the special
> ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
> firewalled environment.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>prepare_models</name><anchor>diffusers.guiders.guider_utils.BaseGuidance.prepare_models</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/guider_utils.py#L112</source><parameters>[{"name": "denoiser", "val": ": Module"}]</parameters></docstring>

Prepares the models for the guidance technique on a given batch of data. This method should be overridden in
subclasses to implement specific model preparation logic.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>diffusers.guiders.guider_utils.BaseGuidance.save_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/guider_utils.py#L265</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory where the configuration JSON file will be saved (will be created if it does not exist).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
  repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
  namespace).
- **kwargs** (`Dict[str, Any]`, *optional*) --
  Additional keyword arguments passed along to the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save a guider configuration object to a directory so that it can be reloaded using the
`~BaseGuidance.from_pretrained` class method.
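A hedged sketch of a save/reload round trip using a concrete guider subclass and a local directory path of your choosing:

```python
from diffusers import ClassifierFreeGuidance

guider = ClassifierFreeGuidance(guidance_scale=7.5)

# Writes the guider configuration as a JSON file into the directory
# (created if it does not exist).
guider.save_pretrained("./my_guider")

# Reload later from the local directory (or from a Hub repository id).
guider = ClassifierFreeGuidance.from_pretrained("./my_guider")
```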




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_input_fields</name><anchor>diffusers.guiders.guider_utils.BaseGuidance.set_input_fields</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/guider_utils.py#L75</source><parameters>[{"name": "**kwargs", "val": ": typing.Dict[str, typing.Union[str, typing.Tuple[str, str]]]"}]</parameters><paramsdesc>- ****kwargs** (`Dict[str, Union[str, Tuple[str, str]]]`) --
  A dictionary where the keys are the names of the fields that will be used to store the data once it is
  prepared with `prepare_inputs`. The values can be either a string or a tuple of length 2, which is used
  to look up the required data provided for preparation.

  If a string is provided, it will be used as the conditional data (or unconditional if used with a
  guidance method that requires it). If a tuple of length 2 is provided, the first element must be the
  conditional data identifier and the second element must be the unconditional data identifier or None.

  Example:
```
data = {"prompt_embeds": <some tensor>, "negative_prompt_embeds": <some tensor>, "latents": <some tensor>}

BaseGuidance.set_input_fields(
    latents="latents",
    prompt_embeds=("prompt_embeds", "negative_prompt_embeds"),
)
```</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the input fields for the guidance technique. The input fields are used to specify the names of the returned
attributes containing the prepared data after `prepare_inputs` is called. The prepared data is obtained from
the values of the provided keyword arguments to this method.




</div></div>

## ClassifierFreeGuidance[[diffusers.ClassifierFreeGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ClassifierFreeGuidance</name><anchor>diffusers.ClassifierFreeGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/classifier_free_guidance.py#L28</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Classifier-free guidance (CFG): https://huggingface.co/papers/2207.12598

CFG is a technique used to improve generation quality and condition-following in diffusion models. It works by
jointly training a model on both conditional and unconditional data, and using a weighted sum of the two during
inference. This allows the model to tradeoff between generation quality and sample diversity. The original paper
proposes scaling and shifting the conditional distribution based on the difference between conditional and
unconditional predictions. [x_pred = x_cond + scale * (x_cond - x_uncond)]

Diffusers instead implements the scaling and shifting on the unconditional prediction, based on the [Imagen
paper](https://huggingface.co/papers/2205.11487), which is theoretically equivalent to what the original paper
proposed. [x_pred = x_uncond + scale * (x_cond - x_uncond)]

The intuition behind the original formulation can be thought of as moving the conditional distribution estimates
further away from the unconditional distribution estimates, while the diffusers-native implementation can be
thought of as moving the unconditional distribution towards the conditional distribution estimates to get rid of
the unconditional predictions (usually negative features like "bad quality, bad anatomy, watermarks", etc.)

The `use_original_formulation` argument can be set to `True` to use the original CFG formulation mentioned in the
paper. By default, we use the diffusers-native implementation that has been in the codebase for a long time.
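For example, a guider can be constructed directly and passed to a modular pipeline; the values below are illustrative, not recommendations:

```python
from diffusers import ClassifierFreeGuidance

# Diffusers-native formulation, active over the full denoising schedule.
guider = ClassifierFreeGuidance(guidance_scale=7.5, guidance_rescale=0.0)

# Original-paper formulation, applied only to the first 80% of the steps.
guider_original = ClassifierFreeGuidance(
    guidance_scale=5.0,
    use_original_formulation=True,
    start=0.0,
    stop=0.8,
)
```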




</div>

## ClassifierFreeZeroStarGuidance[[diffusers.ClassifierFreeZeroStarGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ClassifierFreeZeroStarGuidance</name><anchor>diffusers.ClassifierFreeZeroStarGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/classifier_free_zero_star_guidance.py#L28</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "zero_init_steps", "val": ": int = 1"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **zero_init_steps** (`int`, defaults to `1`) --
  The number of inference steps for which the noise predictions are zeroed out (see Section 4.2).
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Classifier-free Zero* (CFG-Zero*): https://huggingface.co/papers/2503.18886

This is an implementation of the Classifier-Free Zero* guidance technique, which is a variant of classifier-free
guidance. It proposes zero initialization of the noise predictions for the first few steps of the diffusion
process, and also introduces an optimal rescaling factor for the noise predictions, which can help in improving the
quality of generated images.

The authors of the paper suggest setting zero initialization in the first 4% of the inference steps.
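For instance, with a 50-step schedule, 4% of the steps corresponds to roughly 2 zero-initialized steps; a minimal construction sketch:

```python
from diffusers import ClassifierFreeZeroStarGuidance

# zero_init_steps counts inference steps, not a fraction of the schedule.
guider = ClassifierFreeZeroStarGuidance(guidance_scale=7.5, zero_init_steps=2)
```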




</div>

## SkipLayerGuidance[[diffusers.SkipLayerGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkipLayerGuidance</name><anchor>diffusers.SkipLayerGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/skip_layer_guidance.py#L30</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "skip_layer_guidance_scale", "val": ": float = 2.8"}, {"name": "skip_layer_guidance_start", "val": ": float = 0.01"}, {"name": "skip_layer_guidance_stop", "val": ": float = 0.2"}, {"name": "skip_layer_guidance_layers", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "skip_layer_config", "val": ": typing.Union[diffusers.hooks.layer_skip.LayerSkipConfig, typing.List[diffusers.hooks.layer_skip.LayerSkipConfig], typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **skip_layer_guidance_scale** (`float`, defaults to `2.8`) --
  The scale parameter for skip layer guidance. Anatomy and structure coherence may improve with higher
  values, but it may also lead to overexposure and saturation.
- **skip_layer_guidance_start** (`float`, defaults to `0.01`) --
  The fraction of the total number of denoising steps after which skip layer guidance starts.
- **skip_layer_guidance_stop** (`float`, defaults to `0.2`) --
  The fraction of the total number of denoising steps after which skip layer guidance stops.
- **skip_layer_guidance_layers** (`int` or `List[int]`, *optional*) --
  The layer indices to apply skip layer guidance to. Can be a single integer or a list of integers. If not
  provided, `skip_layer_config` must be provided. The recommended values are `[7, 8, 9]` for Stable Diffusion
  3.5 Medium.
- **skip_layer_config** (`LayerSkipConfig` or `List[LayerSkipConfig]`, *optional*) --
  The configuration for the skip layer guidance. Can be a single `LayerSkipConfig` or a list of
  `LayerSkipConfig`. If not provided, `skip_layer_guidance_layers` must be provided.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Skip Layer Guidance (SLG): https://github.com/Stability-AI/sd3.5

Spatio-Temporal Guidance (STG): https://huggingface.co/papers/2411.18664

SLG was introduced by StabilityAI for improving structure and anatomy coherence in generated images. It works by
skipping the forward pass of specified transformer blocks during the denoising process on an additional conditional
batch of data, apart from the conditional and unconditional batches already used in CFG
([~guiders.classifier_free_guidance.ClassifierFreeGuidance]), and then scaling and shifting the CFG predictions
based on the difference between conditional without skipping and conditional with skipping predictions.

The intuition behind SLG can be thought of as moving the CFG predicted distribution estimates further away from
worse versions of the conditional distribution estimates (because skipping layers is equivalent to using a worse
version of the model for the conditional prediction).

STG is an improvement and follow-up work combining ideas from SLG, PAG and similar techniques for improving
generation quality in video diffusion models.

Additional reading:
- [Guiding a Diffusion Model with a Bad Version of Itself](https://huggingface.co/papers/2406.02507)

The values for `skip_layer_guidance_scale`, `skip_layer_guidance_start`, and `skip_layer_guidance_stop` are
defaulted to the recommendations by StabilityAI for Stable Diffusion 3.5 Medium.
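A sketch using those Stable Diffusion 3.5 Medium recommendations:

```python
from diffusers import SkipLayerGuidance

# Skip transformer layers 7-9 for the extra conditional batch, with SLG active
# between 1% and 20% of the denoising schedule.
guider = SkipLayerGuidance(
    guidance_scale=7.5,
    skip_layer_guidance_scale=2.8,
    skip_layer_guidance_layers=[7, 8, 9],
    skip_layer_guidance_start=0.01,
    skip_layer_guidance_stop=0.2,
)
```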




</div>

## SmoothedEnergyGuidance[[diffusers.SmoothedEnergyGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SmoothedEnergyGuidance</name><anchor>diffusers.SmoothedEnergyGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/smoothed_energy_guidance.py#L30</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "seg_guidance_scale", "val": ": float = 2.8"}, {"name": "seg_blur_sigma", "val": ": float = 9999999.0"}, {"name": "seg_blur_threshold_inf", "val": ": float = 9999.0"}, {"name": "seg_guidance_start", "val": ": float = 0.0"}, {"name": "seg_guidance_stop", "val": ": float = 1.0"}, {"name": "seg_guidance_layers", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "seg_guidance_config", "val": ": typing.Union[diffusers.hooks.smoothed_energy_guidance_utils.SmoothedEnergyGuidanceConfig, typing.List[diffusers.hooks.smoothed_energy_guidance_utils.SmoothedEnergyGuidanceConfig]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **seg_guidance_scale** (`float`, defaults to `2.8`) --
  The scale parameter for smoothed energy guidance. Anatomy and structure coherence may improve with higher
  values, but it may also lead to overexposure and saturation.
- **seg_blur_sigma** (`float`, defaults to `9999999.0`) --
  The amount by which we blur the attention weights. Setting this value greater than 9999.0 results in
  infinite blur, which means uniform queries. Controlling it exponentially is empirically effective.
- **seg_blur_threshold_inf** (`float`, defaults to `9999.0`) --
  The threshold above which the blur is considered infinite.
- **seg_guidance_start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which smoothed energy guidance starts.
- **seg_guidance_stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which smoothed energy guidance stops.
- **seg_guidance_layers** (`int` or `List[int]`, *optional*) --
  The layer indices to apply smoothed energy guidance to. Can be a single integer or a list of integers. If
  not provided, `seg_guidance_config` must be provided. The recommended values are `[7, 8, 9]` for Stable
  Diffusion 3.5 Medium.
- **seg_guidance_config** (`SmoothedEnergyGuidanceConfig` or `List[SmoothedEnergyGuidanceConfig]`, *optional*) --
  The configuration for the smoothed energy layer guidance. Can be a single `SmoothedEnergyGuidanceConfig` or
  a list of `SmoothedEnergyGuidanceConfig`. If not provided, `seg_guidance_layers` must be provided.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Smoothed Energy Guidance (SEG): https://huggingface.co/papers/2408.00760

SEG is only supported as an experimental prototype feature for now, so the implementation may be modified in the
future without warning or guarantee of reproducibility. This implementation assumes:
- Generated images are square (height == width)
- The model does not combine different modalities together (e.g., text and image latent streams are not combined
  together, as they are in Flux)




</div>

## PerturbedAttentionGuidance[[diffusers.PerturbedAttentionGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PerturbedAttentionGuidance</name><anchor>diffusers.PerturbedAttentionGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/perturbed_attention_guidance.py#L34</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "perturbed_guidance_scale", "val": ": float = 2.8"}, {"name": "perturbed_guidance_start", "val": ": float = 0.01"}, {"name": "perturbed_guidance_stop", "val": ": float = 0.2"}, {"name": "perturbed_guidance_layers", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "perturbed_guidance_config", "val": ": typing.Union[diffusers.hooks.layer_skip.LayerSkipConfig, typing.List[diffusers.hooks.layer_skip.LayerSkipConfig], typing.Dict[str, typing.Any]] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **perturbed_guidance_scale** (`float`, defaults to `2.8`) --
  The scale parameter for perturbed attention guidance.
- **perturbed_guidance_start** (`float`, defaults to `0.01`) --
  The fraction of the total number of denoising steps after which perturbed attention guidance starts.
- **perturbed_guidance_stop** (`float`, defaults to `0.2`) --
  The fraction of the total number of denoising steps after which perturbed attention guidance stops.
- **perturbed_guidance_layers** (`int` or `List[int]`, *optional*) --
  The layer indices to apply perturbed attention guidance to. Can be a single integer or a list of integers.
  If not provided, `perturbed_guidance_config` must be provided.
- **perturbed_guidance_config** (`LayerSkipConfig` or `List[LayerSkipConfig]`, *optional*) --
  The configuration for the perturbed attention guidance. Can be a single `LayerSkipConfig` or a list of
  `LayerSkipConfig`. If not provided, `perturbed_guidance_layers` must be provided.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Perturbed Attention Guidance (PAG): https://huggingface.co/papers/2403.17377

The intuition behind PAG can be thought of as moving the CFG predicted distribution estimates further away from
worse versions of the conditional distribution estimates. PAG was one of the first techniques to introduce the idea
of using a worse version of the trained model for better guiding itself in the denoising process. It perturbs the
attention scores of the latent stream by replacing the score matrix with an identity matrix for selectively chosen
layers.

Additional reading:
- [Guiding a Diffusion Model with a Bad Version of Itself](https://huggingface.co/papers/2406.02507)

PAG shares much of its implementation with SkipLayerGuidance due to the overlap in configuration parameters and
implementation details.
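A minimal construction sketch; which layer indices work best is model-dependent, so the values below are illustrative only:

```python
from diffusers import PerturbedAttentionGuidance

guider = PerturbedAttentionGuidance(
    guidance_scale=7.5,
    perturbed_guidance_scale=2.8,
    perturbed_guidance_layers=[7, 8, 9],  # illustrative, not a recommendation
)
```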




</div>

## AdaptiveProjectedGuidance[[diffusers.AdaptiveProjectedGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AdaptiveProjectedGuidance</name><anchor>diffusers.AdaptiveProjectedGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/adaptive_projected_guidance.py#L28</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "adaptive_projected_guidance_momentum", "val": ": typing.Optional[float] = None"}, {"name": "adaptive_projected_guidance_rescale", "val": ": float = 15.0"}, {"name": "eta", "val": ": float = 1.0"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **adaptive_projected_guidance_momentum** (`float`, defaults to `None`) --
  The momentum parameter for the adaptive projected guidance. Disabled if set to `None`.
- **adaptive_projected_guidance_rescale** (`float`, defaults to `15.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Adaptive Projected Guidance (APG): https://huggingface.co/papers/2410.02416




</div>

## AutoGuidance[[diffusers.AutoGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoGuidance</name><anchor>diffusers.AutoGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/auto_guidance.py#L30</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "auto_guidance_layers", "val": ": typing.Union[int, typing.List[int], NoneType] = None"}, {"name": "auto_guidance_config", "val": ": typing.Union[diffusers.hooks.layer_skip.LayerSkipConfig, typing.List[diffusers.hooks.layer_skip.LayerSkipConfig], typing.Dict[str, typing.Any]] = None"}, {"name": "dropout", "val": ": typing.Optional[float] = None"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **auto_guidance_layers** (`int` or `List[int]`, *optional*) --
  The layer indices to apply AutoGuidance to. Can be a single integer or a list of integers. If not
  provided, `auto_guidance_config` must be provided.
- **auto_guidance_config** (`LayerSkipConfig` or `List[LayerSkipConfig]`, *optional*) --
  The configuration for AutoGuidance. Can be a single `LayerSkipConfig` or a list of
  `LayerSkipConfig`. If not provided, `auto_guidance_layers` must be provided.
- **dropout** (`float`, *optional*) --
  The dropout probability for autoguidance on the enabled skip layers (either with `auto_guidance_layers` or
  `auto_guidance_config`). If not provided, the dropout probability will be set to 1.0.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

AutoGuidance: https://huggingface.co/papers/2406.02507




</div>

## TangentialClassifierFreeGuidance[[diffusers.TangentialClassifierFreeGuidance]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TangentialClassifierFreeGuidance</name><anchor>diffusers.TangentialClassifierFreeGuidance</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/guiders/tangential_classifier_free_guidance.py#L28</source><parameters>[{"name": "guidance_scale", "val": ": float = 7.5"}, {"name": "guidance_rescale", "val": ": float = 0.0"}, {"name": "use_original_formulation", "val": ": bool = False"}, {"name": "start", "val": ": float = 0.0"}, {"name": "stop", "val": ": float = 1.0"}]</parameters><paramsdesc>- **guidance_scale** (`float`, defaults to `7.5`) --
  The scale parameter for classifier-free guidance. Higher values result in stronger conditioning on the text
  prompt, while lower values allow for more freedom in generation. Higher values may lead to saturation and
  deterioration of image quality.
- **guidance_rescale** (`float`, defaults to `0.0`) --
  The rescale factor applied to the noise predictions. This is used to improve image quality and fix
  overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are
  Flawed](https://huggingface.co/papers/2305.08891).
- **use_original_formulation** (`bool`, defaults to `False`) --
  Whether to use the original formulation of classifier-free guidance as proposed in the paper. By default,
  we use the diffusers-native implementation that has been in the codebase for a long time. See
  [~guiders.classifier_free_guidance.ClassifierFreeGuidance] for more details.
- **start** (`float`, defaults to `0.0`) --
  The fraction of the total number of denoising steps after which guidance starts.
- **stop** (`float`, defaults to `1.0`) --
  The fraction of the total number of denoising steps after which guidance stops.</paramsdesc><paramgroups>0</paramgroups></docstring>

Tangential Classifier Free Guidance (TCFG): https://huggingface.co/papers/2503.18137
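
As a short sketch (import path assumed, as above), the guider can be instantiated with a restricted guidance window through `start` and `stop`:

```python
# Hedged sketch: tangential CFG applied only during the first 80% of the
# denoising steps.
from diffusers.guiders import TangentialClassifierFreeGuidance

guider = TangentialClassifierFreeGuidance(
    guidance_scale=5.0,    # conditioning strength
    guidance_rescale=0.0,  # no noise-prediction rescaling
    start=0.0,
    stop=0.8,              # stop guiding after 80% of the steps
)
```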




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/modular_diffusers/guiders.md" />

### ConsisIDTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/consisid_transformer3d.md

# ConsisIDTransformer3DModel

A Diffusion Transformer model for 3D data from [ConsisID](https://github.com/PKU-YuanGroup/ConsisID) was introduced in [Identity-Preserving Text-to-Video Generation by Frequency Decomposition](https://huggingface.co/papers/2411.17440) by Peking University, the University of Rochester, and collaborators.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import ConsisIDTransformer3DModel

transformer = ConsisIDTransformer3DModel.from_pretrained("BestWishYsh/ConsisID-preview", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
```

## ConsisIDTransformer3DModel[[diffusers.ConsisIDTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ConsisIDTransformer3DModel</name><anchor>diffusers.ConsisIDTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/consisid_transformer_3d.py#L351</source><parameters>[{"name": "num_attention_heads", "val": ": int = 30"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": typing.Optional[int] = 16"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "time_embed_dim", "val": ": int = 512"}, {"name": "text_embed_dim", "val": ": int = 4096"}, {"name": "num_layers", "val": ": int = 30"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "attention_bias", "val": ": bool = True"}, {"name": "sample_width", "val": ": int = 90"}, {"name": "sample_height", "val": ": int = 60"}, {"name": "sample_frames", "val": ": int = 49"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "temporal_compression_ratio", "val": ": int = 4"}, {"name": "max_text_seq_length", "val": ": int = 226"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "timestep_activation_fn", "val": ": str = 'silu'"}, {"name": "norm_elementwise_affine", "val": ": bool = True"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "spatial_interpolation_scale", "val": ": float = 1.875"}, {"name": "temporal_interpolation_scale", "val": ": float = 1.0"}, {"name": "use_rotary_positional_embeddings", "val": ": bool = False"}, {"name": "use_learned_positional_embeddings", "val": ": bool = False"}, {"name": "is_train_face", "val": ": bool = False"}, {"name": "is_kps", "val": ": bool = False"}, {"name": "cross_attn_interval", "val": ": int = 2"}, {"name": "cross_attn_dim_head", "val": ": int = 128"}, {"name": "cross_attn_num_heads", "val": ": int = 16"}, {"name": "LFE_id_dim", "val": ": int = 1280"}, {"name": "LFE_vit_dim", "val": ": int = 1024"}, {"name": "LFE_depth", "val": ": int = 10"}, {"name": "LFE_dim_head", "val": ": int = 64"}, {"name": "LFE_num_heads", "val": ": int = 16"}, {"name": "LFE_num_id_token", "val": ": int = 5"}, {"name": "LFE_num_querie", "val": ": int = 32"}, {"name": "LFE_output_dim", "val": ": int = 2048"}, {"name": "LFE_ff_mult", "val": ": int = 4"}, {"name": "LFE_num_scale", "val": ": int = 5"}, {"name": "local_face_scale", "val": ": float = 1.0"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, defaults to `30`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `64`) --
  The number of channels in each head.
- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `16`) --
  The number of channels in the output.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **time_embed_dim** (`int`, defaults to `512`) --
  Output dimension of timestep embeddings.
- **text_embed_dim** (`int`, defaults to `4096`) --
  Input dimension of text embeddings from the text encoder.
- **num_layers** (`int`, defaults to `30`) --
  The number of layers of Transformer blocks to use.
- **dropout** (`float`, defaults to `0.0`) --
  The dropout probability to use.
- **attention_bias** (`bool`, defaults to `True`) --
  Whether to use bias in the attention projection layers.
- **sample_width** (`int`, defaults to `90`) --
  The width of the input latents.
- **sample_height** (`int`, defaults to `60`) --
  The height of the input latents.
- **sample_frames** (`int`, defaults to `49`) --
  The number of frames in the input latents. Note that this parameter was incorrectly initialized to 49
  instead of 13 because ConsisID processes 13 latent frames at once in its default and recommended settings,
  but it cannot be changed to the correct value in order to preserve backwards compatibility. To create a
  transformer for K latent frames, the correct value to pass here is `(K - 1) * temporal_compression_ratio + 1`
  (see the worked example after this parameter list).
- **patch_size** (`int`, defaults to `2`) --
  The size of the patches to use in the patch embedding layer.
- **temporal_compression_ratio** (`int`, defaults to `4`) --
  The compression ratio across the temporal dimension. See documentation for `sample_frames`.
- **max_text_seq_length** (`int`, defaults to `226`) --
  The maximum sequence length of the input text embeddings.
- **activation_fn** (`str`, defaults to `"gelu-approximate"`) --
  Activation function to use in feed-forward.
- **timestep_activation_fn** (`str`, defaults to `"silu"`) --
  Activation function to use when generating the timestep embeddings.
- **norm_elementwise_affine** (`bool`, defaults to `True`) --
  Whether to use elementwise affine in normalization layers.
- **norm_eps** (`float`, defaults to `1e-5`) --
  The epsilon value to use in normalization layers.
- **spatial_interpolation_scale** (`float`, defaults to `1.875`) --
  Scaling factor to apply in 3D positional embeddings across spatial dimensions.
- **temporal_interpolation_scale** (`float`, defaults to `1.0`) --
  Scaling factor to apply in 3D positional embeddings across temporal dimensions.
- **is_train_face** (`bool`, defaults to `False`) --
  Whether to enable the identity-preserving module during the training process. When set to `True`, the
  model will focus on identity-preserving tasks.
- **is_kps** (`bool`, defaults to `False`) --
  Whether to enable keypoints for the global facial extractor. If `True`, keypoints are used by the model.
- **cross_attn_interval** (`int`, defaults to `2`) --
  The interval between cross-attention layers in the Transformer architecture. A larger value may reduce the
  frequency of cross-attention computations, which can help reduce computational overhead.
- **cross_attn_dim_head** (`int`, optional, defaults to `128`) --
  The dimensionality of each attention head in the cross-attention layers of the Transformer architecture. A
  larger value increases the capacity to attend to more complex patterns, but also increases memory and
  computation costs.
- **cross_attn_num_heads** (`int`, optional, defaults to `16`) --
  The number of attention heads in the cross-attention layers. More heads allow for more parallel attention
  mechanisms, capturing diverse relationships between different components of the input, but can also
  increase computational requirements.
- **LFE_id_dim** (`int`, optional, defaults to `1280`) --
  The dimensionality of the identity vector used in the Local Facial Extractor (LFE). This vector represents
  the identity features of a face, which are important for tasks like face recognition and identity
  preservation across different frames.
- **LFE_vit_dim** (`int`, optional, defaults to `1024`) --
  The dimension of the vision transformer (ViT) output used in the Local Facial Extractor (LFE). This value
  dictates the size of the transformer-generated feature vectors that will be processed for facial feature
  extraction.
- **LFE_depth** (`int`, optional, defaults to `10`) --
  The number of layers in the Local Facial Extractor (LFE). Increasing the depth allows the model to capture
  more complex representations of facial features, but also increases the computational load.
- **LFE_dim_head** (`int`, optional, defaults to `64`) --
  The dimensionality of each attention head in the Local Facial Extractor (LFE). This parameter affects how
  finely the model can process and focus on different parts of the facial features during the extraction
  process.
- **LFE_num_heads** (`int`, optional, defaults to `16`) --
  The number of attention heads in the Local Facial Extractor (LFE). More heads can improve the model's
  ability to capture diverse facial features, but at the cost of increased computational complexity.
- **LFE_num_id_token** (`int`, optional, defaults to `5`) --
  The number of identity tokens used in the Local Facial Extractor (LFE). This defines how many
  identity-related tokens the model will process to ensure face identity preservation during feature
  extraction.
- **LFE_num_querie** (`int`, optional, defaults to `32`) --
  The number of query tokens used in the Local Facial Extractor (LFE). These tokens are used to capture
  high-frequency face-related information that aids in accurate facial feature extraction.
- **LFE_output_dim** (`int`, optional, defaults to `2048`) --
  The output dimension of the Local Facial Extractor (LFE). This dimension determines the size of the feature
  vectors produced by the LFE module, which will be used for subsequent tasks such as face recognition or
  tracking.
- **LFE_ff_mult** (`int`, optional, defaults to `4`) --
  The multiplication factor applied to the feed-forward network's hidden layer size in the Local Facial
  Extractor (LFE). A higher value increases the model's capacity to learn more complex facial feature
  transformations, but also increases the computation and memory requirements.
- **LFE_num_scale** (`int`, optional, defaults to `5`) --
  The number of scales at which visual features are extracted. A higher value increases the model's capacity
  to learn more complex facial feature transformations, but also increases the computation and memory
  requirements.
- **local_face_scale** (`float`, defaults to `1.0`) --
  A scaling factor used to adjust the importance of local facial features in the model. This can influence
  how strongly the model focuses on high frequency face-related content.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data in [ConsisID](https://github.com/PKU-YuanGroup/ConsisID).
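
The relationship between `sample_frames` and the number of latent frames described in the parameter list above can be checked with a quick calculation (plain arithmetic, not an API call):

```python
# sample_frames = (K - 1) * temporal_compression_ratio + 1
K = 13                           # latent frames ConsisID processes by default
temporal_compression_ratio = 4
sample_frames = (K - 1) * temporal_compression_ratio + 1
print(sample_frames)             # 49 -- the value the config defaults to
```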





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.ConsisIDTransformer3DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/consisid_transformer_3d.py#L649</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/consisid_transformer3d.md" />

### AutoencoderOobleck
https://huggingface.co/docs/diffusers/main/api/models/autoencoder_oobleck.md

# AutoencoderOobleck

The Oobleck variational autoencoder (VAE) model with KL loss was introduced in [Stability-AI/stable-audio-tools](https://github.com/Stability-AI/stable-audio-tools) and [Stable Audio Open](https://huggingface.co/papers/2407.14358) by Stability AI. The model is used in 🤗 Diffusers to encode audio waveforms into latents and to decode latent representations into audio waveforms.

The abstract from the paper is:

*Open generative models are vitally important for the community, allowing for fine-tunes and serving as baselines when presenting new models. However, most current text-to-audio models are private and not accessible for artists and researchers to build upon. Here we describe the architecture and training process of a new open-weights text-to-audio model trained with Creative Commons data. Our evaluation shows that the model's performance is competitive with the state-of-the-art across various metrics. Notably, the reported FDopenl3 results (measuring the realism of the generations) showcase its potential for high-quality stereo sound synthesis at 44.1kHz.*
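
A loading and round-trip sketch in the spirit of the other model pages is shown below; the checkpoint id and `subfolder` are assumptions based on the Stable Audio Open release, so adjust them to the repository you actually use.

```python
import torch

from diffusers import AutoencoderOobleck

# Assumed checkpoint layout: the Stable Audio Open repository ships this VAE in
# a "vae" subfolder.
vae = AutoencoderOobleck.from_pretrained(
    "stabilityai/stable-audio-open-1.0", subfolder="vae", torch_dtype=torch.float16
).to("cuda")

# ~1.5 seconds of stereo audio at 44.1 kHz (length chosen as a multiple of the
# total downsampling factor).
waveform = torch.randn(1, 2, 65536, dtype=torch.float16, device="cuda")
latents = vae.encode(waveform).latent_dist.sample()  # AutoencoderOobleckOutput
reconstruction = vae.decode(latents).sample           # OobleckDecoderOutput
```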

## AutoencoderOobleck[[diffusers.AutoencoderOobleck]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderOobleck</name><anchor>diffusers.AutoencoderOobleck</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_oobleck.py#L294</source><parameters>[{"name": "encoder_hidden_size", "val": " = 128"}, {"name": "downsampling_ratios", "val": " = [2, 4, 4, 8, 8]"}, {"name": "channel_multiples", "val": " = [1, 2, 4, 8, 16]"}, {"name": "decoder_channels", "val": " = 128"}, {"name": "decoder_input_channels", "val": " = 64"}, {"name": "audio_channels", "val": " = 2"}, {"name": "sampling_rate", "val": " = 44100"}]</parameters><paramsdesc>- **encoder_hidden_size** (`int`, *optional*, defaults to 128) --
  Intermediate representation dimension for the encoder.
- **downsampling_ratios** (`List[int]`, *optional*, defaults to `[2, 4, 4, 8, 8]`) --
  Ratios for downsampling in the encoder. These are used in reverse order for upsampling in the decoder.
- **channel_multiples** (`List[int]`, *optional*, defaults to `[1, 2, 4, 8, 16]`) --
  Multiples used to determine the hidden sizes of the hidden layers.
- **decoder_channels** (`int`, *optional*, defaults to 128) --
  Intermediate representation dimension for the decoder.
- **decoder_input_channels** (`int`, *optional*, defaults to 64) --
  Input dimension for the decoder. Corresponds to the latent dimension.
- **audio_channels** (`int`, *optional*, defaults to 2) --
  Number of channels in the audio data. Either 1 for mono or 2 for stereo.
- **sampling_rate** (`int`, *optional*, defaults to 44100) --
  The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).</paramsdesc><paramgroups>0</paramgroups></docstring>

An autoencoder for encoding waveforms into latents and decoding latent representations into waveforms. First
introduced in Stable Audio.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderOobleck.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderOobleck.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderOobleck.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_oobleck.py#L366</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderOobleck.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_oobleck.py#L359</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderOobleck.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_oobleck.py#L439</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `OobleckDecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div></div>

## OobleckDecoderOutput[[diffusers.models.autoencoders.autoencoder_oobleck.OobleckDecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.autoencoder_oobleck.OobleckDecoderOutput</name><anchor>diffusers.models.autoencoders.autoencoder_oobleck.OobleckDecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_oobleck.py#L202</source><parameters>[{"name": "sample", "val": ": Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, audio_channels, sequence_length)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>


## AutoencoderOobleckOutput[[diffusers.models.autoencoders.autoencoder_oobleck.AutoencoderOobleckOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.autoencoder_oobleck.AutoencoderOobleckOutput</name><anchor>diffusers.models.autoencoders.autoencoder_oobleck.AutoencoderOobleckOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_oobleck.py#L187</source><parameters>[{"name": "latent_dist", "val": ": OobleckDiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`OobleckDiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and standard deviation of
  `OobleckDiagonalGaussianDistribution`. `OobleckDiagonalGaussianDistribution` allows for sampling latents
  from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderOobleck encoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoder_oobleck.md" />

### EasyAnimateTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/easyanimate_transformer3d.md

# EasyAnimateTransformer3DModel

A Diffusion Transformer model for 3D data from [EasyAnimate](https://github.com/aigc-apps/EasyAnimate) was introduced by Alibaba PAI.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import EasyAnimateTransformer3DModel

transformer = EasyAnimateTransformer3DModel.from_pretrained("alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="transformer", torch_dtype=torch.float16).to("cuda")
```

## EasyAnimateTransformer3DModel[[diffusers.EasyAnimateTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.EasyAnimateTransformer3DModel</name><anchor>diffusers.EasyAnimateTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_easyanimate.py#L318</source><parameters>[{"name": "num_attention_heads", "val": ": int = 48"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "in_channels", "val": ": typing.Optional[int] = None"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "sample_width", "val": ": int = 90"}, {"name": "sample_height", "val": ": int = 60"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "timestep_activation_fn", "val": ": str = 'silu'"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "num_layers", "val": ": int = 48"}, {"name": "mmdit_layers", "val": ": int = 48"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "time_embed_dim", "val": ": int = 512"}, {"name": "add_norm_text_encoder", "val": ": bool = False"}, {"name": "text_embed_dim", "val": ": int = 3584"}, {"name": "text_embed_dim_t5", "val": ": int = None"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "norm_elementwise_affine", "val": ": bool = True"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "time_position_encoding_type", "val": ": str = '3d_rope'"}, {"name": "after_norm", "val": " = False"}, {"name": "resize_inpaint_mask_directly", "val": ": bool = True"}, {"name": "enable_text_attention_mask", "val": ": bool = True"}, {"name": "add_noise_in_inpaint_model", "val": ": bool = True"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, defaults to `48`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `64`) --
  The number of channels in each head.
- **in_channels** (`int`, *optional*) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*) --
  The number of channels in the output.
- **patch_size** (`int`, *optional*) --
  The size of the patches to use in the patch embedding layer.
- **sample_width** (`int`, defaults to `90`) --
  The width of the input latents.
- **sample_height** (`int`, defaults to `60`) --
  The height of the input latents.
- **activation_fn** (`str`, defaults to `"gelu-approximate"`) --
  Activation function to use in feed-forward.
- **timestep_activation_fn** (`str`, defaults to `"silu"`) --
  Activation function to use when generating the timestep embeddings.
- **num_layers** (`int`, defaults to `48`) --
  The number of layers of Transformer blocks to use.
- **mmdit_layers** (`int`, defaults to `48`) --
  The number of layers of Multi Modal Transformer blocks to use.
- **dropout** (`float`, defaults to `0.0`) --
  The dropout probability to use.
- **time_embed_dim** (`int`, defaults to `512`) --
  Output dimension of timestep embeddings.
- **text_embed_dim** (`int`, defaults to `3584`) --
  Input dimension of text embeddings from the text encoder.
- **norm_eps** (`float`, defaults to `1e-5`) --
  The epsilon value to use in normalization layers.
- **norm_elementwise_affine** (`bool`, defaults to `True`) --
  Whether to use elementwise affine in normalization layers.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **time_position_encoding_type** (`str`, defaults to `3d_rope`) --
  Type of time position encoding.
- **after_norm** (`bool`, defaults to `False`) --
  Flag to apply normalization after.
- **resize_inpaint_mask_directly** (`bool`, defaults to `True`) --
  Flag to resize inpaint mask directly.
- **enable_text_attention_mask** (`bool`, defaults to `True`) --
  Flag to enable text attention mask.
- **add_noise_in_inpaint_model** (`bool`, defaults to `True`) --
  Flag to add noise in the inpaint model.

A Transformer model for video-like data in [EasyAnimate](https://github.com/aigc-apps/EasyAnimate).




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/easyanimate_transformer3d.md" />

### UNet1DModel
https://huggingface.co/docs/diffusers/main/api/models/unet.md

# UNet1DModel

The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 1D UNet model.

The abstract from the paper is:

*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
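
As a quick usage sketch (default configuration, random data, no trained checkpoint):

```python
import torch

from diffusers import UNet1DModel

# Instantiate with the default configuration documented below and run a single
# denoising call on random data.
unet = UNet1DModel()
noisy_sample = torch.randn(1, 2, 65536)         # (batch_size, in_channels, sample_size)
timestep = torch.tensor([10])
denoised = unet(noisy_sample, timestep).sample  # same shape as the input
```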

## UNet1DModel[[diffusers.UNet1DModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UNet1DModel</name><anchor>diffusers.UNet1DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_1d.py#L41</source><parameters>[{"name": "sample_size", "val": ": int = 65536"}, {"name": "sample_rate", "val": ": typing.Optional[int] = None"}, {"name": "in_channels", "val": ": int = 2"}, {"name": "out_channels", "val": ": int = 2"}, {"name": "extra_in_channels", "val": ": int = 0"}, {"name": "time_embedding_type", "val": ": str = 'fourier'"}, {"name": "time_embedding_dim", "val": ": typing.Optional[int] = None"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "use_timestep_embedding", "val": ": bool = False"}, {"name": "freq_shift", "val": ": float = 0.0"}, {"name": "down_block_types", "val": ": typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D')"}, {"name": "up_block_types", "val": ": typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip')"}, {"name": "mid_block_type", "val": ": typing.Tuple[str] = 'UNetMidBlock1D'"}, {"name": "out_block_type", "val": ": str = None"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (32, 32, 64)"}, {"name": "act_fn", "val": ": str = None"}, {"name": "norm_num_groups", "val": ": int = 8"}, {"name": "layers_per_block", "val": ": int = 1"}, {"name": "downsample_each_block", "val": ": bool = False"}]</parameters><paramsdesc>- **sample_size** (`int`, *optional*) -- Default length of sample. Should be adaptable at runtime.
- **in_channels** (`int`, *optional*, defaults to 2) -- Number of channels in the input sample.
- **out_channels** (`int`, *optional*, defaults to 2) -- Number of channels in the output.
- **extra_in_channels** (`int`, *optional*, defaults to 0) --
  Number of additional channels to be added to the input of the first down block. Useful for cases where the
  input data has more channels than what the model was initially designed for.
- **time_embedding_type** (`str`, *optional*, defaults to `"fourier"`) -- Type of time embedding to use.
- **freq_shift** (`float`, *optional*, defaults to 0.0) -- Frequency shift for Fourier time embedding.
- **flip_sin_to_cos** (`bool`, *optional*, defaults to `True`) --
  Whether to flip sin to cos for Fourier time embedding.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")`) --
  Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")`) --
  Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(32, 32, 64)`) --
  Tuple of block output channels.
- **mid_block_type** (`str`, *optional*, defaults to `"UNetMidBlock1D"`) -- Block type for middle of UNet.
- **out_block_type** (`str`, *optional*, defaults to `None`) -- Optional output processing block of UNet.
- **act_fn** (`str`, *optional*, defaults to `None`) -- Optional activation function in UNet blocks.
- **norm_num_groups** (`int`, *optional*, defaults to 8) -- The number of groups for normalization.
- **layers_per_block** (`int`, *optional*, defaults to 1) -- The number of layers per block.
- **downsample_each_block** (`bool`, *optional*, defaults to `False`) --
  Experimental feature for using a UNet without upsampling.

A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.UNet1DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_1d.py#L206</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor with the following shape `(batch_size, num_channels, sample_size)`.
- **timestep** (`torch.Tensor` or `float` or `int`) -- The number of timesteps to denoise an input.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [UNet1DOutput](/docs/diffusers/main/en/api/models/unet#diffusers.models.unets.unet_1d.UNet1DOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[UNet1DOutput](/docs/diffusers/main/en/api/models/unet#diffusers.models.unets.unet_1d.UNet1DOutput) or `tuple`</rettype><retdesc>If `return_dict` is True, an [UNet1DOutput](/docs/diffusers/main/en/api/models/unet#diffusers.models.unets.unet_1d.UNet1DOutput) is returned, otherwise a `tuple` is
returned where the first element is the sample tensor.</retdesc></docstring>

The [UNet1DModel](/docs/diffusers/main/en/api/models/unet#diffusers.UNet1DModel) forward method.








</div></div>

## UNet1DOutput[[diffusers.models.unets.unet_1d.UNet1DOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.unet_1d.UNet1DOutput</name><anchor>diffusers.models.unets.unet_1d.UNet1DOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_1d.py#L29</source><parameters>[{"name": "sample", "val": ": Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, sample_size)`) --
  The hidden states output from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [UNet1DModel](/docs/diffusers/main/en/api/models/unet#diffusers.UNet1DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/unet.md" />

### AutoencoderKLMagvit
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_magvit.md

# AutoencoderKLMagvit

The 3D variational autoencoder (VAE) model with KL loss used in [EasyAnimate](https://github.com/aigc-apps/EasyAnimate) was introduced by Alibaba PAI.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import AutoencoderKLMagvit

vae = AutoencoderKLMagvit.from_pretrained("alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="vae", torch_dtype=torch.float16).to("cuda")
```
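
To reduce peak memory when decoding long videos, the slicing and tiling helpers documented further down this page can be enabled right after loading. Continuing from the snippet above:

```python
# Optional memory-saving switches; see enable_slicing / enable_tiling below.
vae.enable_slicing()  # decode the batch one slice at a time
vae.enable_tiling()   # decode and encode large samples tile by tile
```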

## AutoencoderKLMagvit[[diffusers.AutoencoderKLMagvit]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLMagvit</name><anchor>diffusers.AutoencoderKLMagvit</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_magvit.py#L666</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "latent_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = [128, 256, 512, 512]"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ['SpatialDownBlock3D', 'SpatialTemporalDownBlock3D', 'SpatialTemporalDownBlock3D', 'SpatialTemporalDownBlock3D']"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ['SpatialUpBlock3D', 'SpatialTemporalUpBlock3D', 'SpatialTemporalUpBlock3D', 'SpatialTemporalUpBlock3D']"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "scaling_factor", "val": ": float = 0.7125"}, {"name": "spatial_group_norm", "val": ": bool = True"}]</parameters></docstring>

A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This
model is used in [EasyAnimate](https://huggingface.co/papers/2405.18991).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLMagvit.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLMagvit.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLMagvit.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_magvit.py#L822</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLMagvit.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_magvit.py#L808</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLMagvit.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_magvit.py#L815</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLMagvit.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_magvit.py#L772</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_num_frames", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_num_frames", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are
  no tiling artifacts produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderKLMagvit.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_magvit.py#L1068</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_magvit.md" />

### WanTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/wan_transformer_3d.md

# WanTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [Wan 2.1](https://github.com/Wan-Video/Wan2.1) by the Alibaba Wan Team.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import WanTransformer3DModel

transformer = WanTransformer3DModel.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## WanTransformer3DModel[[diffusers.WanTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.WanTransformer3DModel</name><anchor>diffusers.WanTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_wan.py#L501</source><parameters>[{"name": "patch_size", "val": ": typing.Tuple[int] = (1, 2, 2)"}, {"name": "num_attention_heads", "val": ": int = 40"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "text_dim", "val": ": int = 4096"}, {"name": "freq_dim", "val": ": int = 256"}, {"name": "ffn_dim", "val": ": int = 13824"}, {"name": "num_layers", "val": ": int = 40"}, {"name": "cross_attn_norm", "val": ": bool = True"}, {"name": "qk_norm", "val": ": typing.Optional[str] = 'rms_norm_across_heads'"}, {"name": "eps", "val": ": float = 1e-06"}, {"name": "image_dim", "val": ": typing.Optional[int] = None"}, {"name": "added_kv_proj_dim", "val": ": typing.Optional[int] = None"}, {"name": "rope_max_seq_len", "val": ": int = 1024"}, {"name": "pos_embed_seq_len", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **patch_size** (`Tuple[int]`, defaults to `(1, 2, 2)`) --
  3D patch dimensions for video embedding (t_patch, h_patch, w_patch).
- **num_attention_heads** (`int`, defaults to `40`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of channels in each head.
- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **out_channels** (`int`, defaults to `16`) --
  The number of channels in the output.
- **text_dim** (`int`, defaults to `4096`) --
  Input dimension for text embeddings.
- **freq_dim** (`int`, defaults to `256`) --
  Dimension for sinusoidal time embeddings.
- **ffn_dim** (`int`, defaults to `13824`) --
  Intermediate dimension in feed-forward network.
- **num_layers** (`int`, defaults to `40`) --
  The number of layers of transformer blocks to use.
- **window_size** (`Tuple[int]`, defaults to `(-1, -1)`) --
  Window size for local attention (-1 indicates global attention).
- **cross_attn_norm** (`bool`, defaults to `True`) --
  Enable cross-attention normalization.
- **qk_norm** (`str`, *optional*, defaults to `"rms_norm_across_heads"`) --
  The type of query/key normalization to use.
- **eps** (`float`, defaults to `1e-6`) --
  Epsilon value for normalization layers.
- **add_img_emb** (`bool`, defaults to `False`) --
  Whether to use img_emb.
- **added_kv_proj_dim** (`int`, *optional*, defaults to `None`) --
  The number of channels to use for the added key and value projections. If `None`, no projection is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data used in the Wan model.




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/wan_transformer_3d.md" />

### AuraFlowTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/aura_flow_transformer2d.md

# AuraFlowTransformer2DModel

A Transformer model for image-like data from [AuraFlow](https://blog.fal.ai/auraflow/).

## AuraFlowTransformer2DModel[[diffusers.AuraFlowTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AuraFlowTransformer2DModel</name><anchor>diffusers.AuraFlowTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/auraflow_transformer_2d.py#L278</source><parameters>[{"name": "sample_size", "val": ": int = 64"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "num_mmdit_layers", "val": ": int = 4"}, {"name": "num_single_dit_layers", "val": ": int = 32"}, {"name": "attention_head_dim", "val": ": int = 256"}, {"name": "num_attention_heads", "val": ": int = 12"}, {"name": "joint_attention_dim", "val": ": int = 2048"}, {"name": "caption_projection_dim", "val": ": int = 3072"}, {"name": "out_channels", "val": ": int = 4"}, {"name": "pos_embed_max_size", "val": ": int = 1024"}]</parameters><paramsdesc>- **sample_size** (`int`) -- The width of the latent images. This is fixed during training since
  it is used to learn a number of position embeddings.
- **patch_size** (`int`) -- Patch size to turn the input data into small patches.
- **in_channels** (`int`, *optional*, defaults to 4) -- The number of channels in the input.
- **num_mmdit_layers** (`int`, *optional*, defaults to 4) -- The number of layers of MMDiT Transformer blocks to use.
- **num_single_dit_layers** (`int`, *optional*, defaults to 32) --
  The number of layers of Transformer blocks to use. These blocks use concatenated image and text
  representations.
- **attention_head_dim** (`int`, *optional*, defaults to 256) -- The number of channels in each head.
- **num_attention_heads** (`int`, *optional*, defaults to 12) -- The number of heads to use for multi-head attention.
- **joint_attention_dim** (`int`, *optional*) -- The number of `encoder_hidden_states` dimensions to use.
- **caption_projection_dim** (`int`) -- Number of dimensions to use when projecting the `encoder_hidden_states`.
- **out_channels** (`int`, defaults to 4) -- Number of output channels.
- **pos_embed_max_size** (`int`, defaults to 1024) -- Maximum positions to embed from the image latents.</paramsdesc><paramgroups>0</paramgroups></docstring>

A 2D Transformer model as introduced in AuraFlow (https://blog.fal.ai/auraflow/).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.AuraFlowTransformer2DModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/auraflow_transformer_2d.py#L429</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.

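A minimal sketch of toggling fused projections around inference, assuming the `fal/AuraFlow` checkpoint layout (swap in whichever checkpoint you actually use):

```python
import torch

from diffusers import AuraFlowTransformer2DModel

transformer = AuraFlowTransformer2DModel.from_pretrained(
    "fal/AuraFlow", subfolder="transformer", torch_dtype=torch.float16
)

# Fuse the query/key/value projection matrices before running inference ...
transformer.fuse_qkv_projections()

# ... and restore the original, unfused projections afterwards if needed.
transformer.unfuse_qkv_projections()
```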

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.AuraFlowTransformer2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/auraflow_transformer_2d.py#L394</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
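
For example, a single processor instance can be applied to every `Attention` layer at once. The snippet below is a sketch, assuming the `fal/AuraFlow` checkpoint layout and using the `AuraFlowAttnProcessor2_0` processor listed above.

```python
import torch

from diffusers import AuraFlowTransformer2DModel
from diffusers.models.attention_processor import AuraFlowAttnProcessor2_0

transformer = AuraFlowTransformer2DModel.from_pretrained(
    "fal/AuraFlow", subfolder="transformer", torch_dtype=torch.float16
)

# Apply a single processor instance to every Attention layer of the model.
transformer.set_attn_processor(AuraFlowAttnProcessor2_0())

# The active processors can be inspected as a {module path: processor} mapping.
print(transformer.attn_processors)
```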




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.AuraFlowTransformer2DModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/auraflow_transformer_2d.py#L451</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/aura_flow_transformer2d.md" />

### LatteTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/latte_transformer3d.md

# LatteTransformer3DModel

A Diffusion Transformer model for 3D data from [Latte](https://github.com/Vchitect/Latte).
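
The model can be loaded with a snippet like the following. This is a sketch, assuming the `maxin-cn/Latte-1` checkpoint exposes the transformer under a `transformer` subfolder.

```python
import torch

from diffusers import LatteTransformer3DModel

transformer = LatteTransformer3DModel.from_pretrained(
    "maxin-cn/Latte-1", subfolder="transformer", torch_dtype=torch.float16
)
```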

## LatteTransformer3DModel[[diffusers.LatteTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LatteTransformer3DModel</name><anchor>diffusers.LatteTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/latte_transformer_3d.py#L29</source><parameters>[{"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 88"}, {"name": "in_channels", "val": ": typing.Optional[int] = None"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 1"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = None"}, {"name": "attention_bias", "val": ": bool = False"}, {"name": "sample_size", "val": ": int = 64"}, {"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "activation_fn", "val": ": str = 'geglu'"}, {"name": "num_embeds_ada_norm", "val": ": typing.Optional[int] = None"}, {"name": "norm_type", "val": ": str = 'layer_norm'"}, {"name": "norm_elementwise_affine", "val": ": bool = True"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "caption_channels", "val": ": int = None"}, {"name": "video_length", "val": ": int = 16"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.LatteTransformer3DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/latte_transformer_3d.py#L168</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "enable_temporal_attentions", "val": ": bool = True"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** shape `(batch size, channel, num_frame, height, width)` --
  Input `hidden_states`.
- **timestep** ( `torch.LongTensor`, *optional*) --
  Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
- **encoder_hidden_states** ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
  self-attention.
- **encoder_attention_mask** ( `torch.Tensor`, *optional*) --
  Cross-attention mask applied to `encoder_hidden_states`. Two formats supported:

  * Mask `(batch, sequence_length)` True = keep, False = discard.
  * Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard.

  If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format
  above. This bias will be added to the cross-attention scores.
- **enable_temporal_attentions** (`bool`, *optional*, defaults to `True`) --
  Whether to enable temporal attentions.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.modeling_outputs.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [LatteTransformer3DModel](/docs/diffusers/main/en/api/models/latte_transformer3d#diffusers.LatteTransformer3DModel) forward method.






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/latte_transformer3d.md" />

### QwenImageTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/qwenimage_transformer2d.md

# QwenImageTransformer2DModel

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import QwenImageTransformer2DModel

transformer = QwenImageTransformer2DModel.from_pretrained("Qwen/QwenImage-20B", subfolder="transformer", torch_dtype=torch.bfloat16)
```
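
The preloaded transformer can then be handed to the text-to-image pipeline. The snippet below is a sketch, assuming the checkpoint layout above and the `QwenImagePipeline` class.

```python
import torch

from diffusers import QwenImagePipeline, QwenImageTransformer2DModel

transformer = QwenImageTransformer2DModel.from_pretrained(
    "Qwen/QwenImage-20B", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Reuse the preloaded transformer in the full pipeline; the other components come from the same repository.
pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/QwenImage-20B", transformer=transformer, torch_dtype=torch.bfloat16
)
```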

## QwenImageTransformer2DModel[[diffusers.QwenImageTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.QwenImageTransformer2DModel</name><anchor>diffusers.QwenImageTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_qwenimage.py#L476</source><parameters>[{"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "out_channels", "val": ": typing.Optional[int] = 16"}, {"name": "num_layers", "val": ": int = 60"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "joint_attention_dim", "val": ": int = 3584"}, {"name": "guidance_embeds", "val": ": bool = False"}, {"name": "axes_dims_rope", "val": ": typing.Tuple[int, int, int] = (16, 56, 56)"}]</parameters><paramsdesc>- **patch_size** (`int`, defaults to `2`) --
  Patch size to turn the input data into small patches.
- **in_channels** (`int`, defaults to `64`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `16`) --
  The number of channels in the output. If set to `None`, it defaults to `in_channels`.
- **num_layers** (`int`, defaults to `60`) --
  The number of layers of dual stream DiT blocks to use.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of dimensions to use for each attention head.
- **num_attention_heads** (`int`, defaults to `24`) --
  The number of attention heads to use.
- **joint_attention_dim** (`int`, defaults to `3584`) --
  The number of dimensions to use for the joint attention (embedding/channel dimension of
  `encoder_hidden_states`).
- **guidance_embeds** (`bool`, defaults to `False`) --
  Whether to use guidance embeddings for guidance-distilled variant of the model.
- **axes_dims_rope** (`Tuple[int]`, defaults to `(16, 56, 56)`) --
  The dimensions to use for the rotary positional embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in Qwen-Image.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.QwenImageTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_qwenimage.py#L563</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "encoder_hidden_states_mask", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "img_shapes", "val": ": typing.Optional[typing.List[typing.Tuple[int, int, int]]] = None"}, {"name": "txt_seq_lens", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "guidance", "val": ": Tensor = None"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_block_samples", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch_size, image_sequence_length, in_channels)`) --
  Input `hidden_states`.
- **encoder_hidden_states** (`torch.Tensor` of shape `(batch_size, text_sequence_length, joint_attention_dim)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **encoder_hidden_states_mask** (`torch.Tensor` of shape `(batch_size, text_sequence_length)`) --
  Mask of the input conditions.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The `QwenImageTransformer2DModel` forward method.






</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/qwenimage_transformer2d.md" />

### UNet2DModel
https://huggingface.co/docs/diffusers/main/api/models/unet2d.md

# UNet2DModel

The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model.

The abstract from the paper is:

*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*

## UNet2DModel[[diffusers.UNet2DModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UNet2DModel</name><anchor>diffusers.UNet2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d.py#L40</source><parameters>[{"name": "sample_size", "val": ": typing.Union[int, typing.Tuple[int, int], NoneType] = None"}, {"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "center_input_sample", "val": ": bool = False"}, {"name": "time_embedding_type", "val": ": str = 'positional'"}, {"name": "time_embedding_dim", "val": ": typing.Optional[int] = None"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D')"}, {"name": "mid_block_type", "val": ": typing.Optional[str] = 'UNetMidBlock2D'"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D')"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (224, 448, 672, 896)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "downsample_type", "val": ": str = 'conv'"}, {"name": "upsample_type", "val": ": str = 'conv'"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "attention_head_dim", "val": ": typing.Optional[int] = 8"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "attn_norm_num_groups", "val": ": typing.Optional[int] = None"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "add_attention", "val": ": bool = True"}, {"name": "class_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "num_class_embeds", "val": ": typing.Optional[int] = None"}, {"name": "num_train_timesteps", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample_size** (`int` or `Tuple[int, int]`, *optional*, defaults to `None`) --
  Height and width of input/output sample. Dimensions must be a multiple of `2 ** (len(block_out_channels) -
  1)`.
- **in_channels** (`int`, *optional*, defaults to 3) -- Number of channels in the input sample.
- **out_channels** (`int`, *optional*, defaults to 3) -- Number of channels in the output.
- **center_input_sample** (`bool`, *optional*, defaults to `False`) -- Whether to center the input sample.
- **time_embedding_type** (`str`, *optional*, defaults to `"positional"`) -- Type of time embedding to use.
- **freq_shift** (`int`, *optional*, defaults to 0) -- Frequency shift for Fourier time embedding.
- **flip_sin_to_cos** (`bool`, *optional*, defaults to `True`) --
  Whether to flip sin to cos for Fourier time embedding.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")`) --
  Tuple of downsample block types.
- **mid_block_type** (`str`, *optional*, defaults to `"UNetMidBlock2D"`) --
  Block type for middle of UNet, it can be either `UNetMidBlock2D` or `None`.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")`) --
  Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(224, 448, 672, 896)`) --
  Tuple of block output channels.
- **layers_per_block** (`int`, *optional*, defaults to `2`) -- The number of layers per block.
- **mid_block_scale_factor** (`float`, *optional*, defaults to `1`) -- The scale factor for the mid block.
- **downsample_padding** (`int`, *optional*, defaults to `1`) -- The padding for the downsample convolution.
- **downsample_type** (`str`, *optional*, defaults to `conv`) --
  The downsample type for downsampling layers. Choose between "conv" and "resnet"
- **upsample_type** (`str`, *optional*, defaults to `conv`) --
  The upsample type for upsampling layers. Choose between "conv" and "resnet"
- **dropout** (`float`, *optional*, defaults to 0.0) -- The dropout probability to use.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **attention_head_dim** (`int`, *optional*, defaults to `8`) -- The attention head dimension.
- **norm_num_groups** (`int`, *optional*, defaults to `32`) -- The number of groups for normalization.
- **attn_norm_num_groups** (`int`, *optional*, defaults to `None`) --
  If set to an integer, a group norm layer will be created in the mid block's `Attention` layer with the
  given number of groups. If left as `None`, the group norm layer will only be created if
  `resnet_time_scale_shift` is set to `default`, and if created will have `norm_num_groups` groups.
- **norm_eps** (`float`, *optional*, defaults to `1e-5`) -- The epsilon for normalization.
- **resnet_time_scale_shift** (`str`, *optional*, defaults to `"default"`) -- Time scale shift config
  for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **class_embed_type** (`str`, *optional*, defaults to `None`) --
  The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`,
  `"timestep"`, or `"identity"`.
- **num_class_embeds** (`int`, *optional*, defaults to `None`) --
  Input dimension of the learnable embedding matrix to be projected to `time_embed_dim` when performing class
  conditioning with `class_embed_type` equal to `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.UNet2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d.py#L250</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "class_labels", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor with the following shape `(batch, channel, height, width)`.
- **timestep** (`torch.Tensor` or `float` or `int`) -- The number of timesteps to denoise an input.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
  Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [UNet2DOutput](/docs/diffusers/main/en/api/models/unet2d#diffusers.models.unets.unet_2d.UNet2DOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[UNet2DOutput](/docs/diffusers/main/en/api/models/unet2d#diffusers.models.unets.unet_2d.UNet2DOutput) or `tuple`</rettype><retdesc>If `return_dict` is True, an [UNet2DOutput](/docs/diffusers/main/en/api/models/unet2d#diffusers.models.unets.unet_2d.UNet2DOutput) is returned, otherwise a `tuple` is
returned where the first element is the sample tensor.</retdesc></docstring>

The [UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel) forward method.
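
As a quick illustration, the sketch below builds a small, untrained `UNet2DModel` with a hypothetical toy configuration and runs a single forward pass; a real denoising loop would call this repeatedly under a scheduler.

```python
import torch

from diffusers import UNet2DModel

# A deliberately tiny, untrained configuration for illustration only.
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    layers_per_block=1,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)

noisy_sample = torch.randn(1, 3, 32, 32)  # (batch, channel, height, width)
timestep = torch.tensor([10])             # current denoising step

with torch.no_grad():
    output = unet(noisy_sample, timestep).sample

print(output.shape)  # torch.Size([1, 3, 32, 32]) -- same spatial size as the input
```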








</div></div>

## UNet2DOutput[[diffusers.models.unets.unet_2d.UNet2DOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.unet_2d.UNet2DOutput</name><anchor>diffusers.models.unets.unet_2d.UNet2DOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d.py#L28</source><parameters>[{"name": "sample", "val": ": Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The hidden states output from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/unet2d.md" />

### StableCascadeUNet
https://huggingface.co/docs/diffusers/main/api/models/stable_cascade_unet.md

# StableCascadeUNet

A UNet model from the [Stable Cascade pipeline](../pipelines/stable_cascade.md).
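
The model can be loaded with a snippet like the one below. This is a sketch, assuming the `stabilityai/stable-cascade-prior` checkpoint stores the prior UNet under a `prior` subfolder.

```python
import torch

from diffusers import StableCascadeUNet

prior_unet = StableCascadeUNet.from_pretrained(
    "stabilityai/stable-cascade-prior", subfolder="prior", torch_dtype=torch.bfloat16
)
```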

## StableCascadeUNet[[diffusers.models.StableCascadeUNet]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.StableCascadeUNet</name><anchor>diffusers.models.StableCascadeUNet</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_stable_cascade.py#L137</source><parameters>[{"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "timestep_ratio_embedding_dim", "val": ": int = 64"}, {"name": "patch_size", "val": ": int = 1"}, {"name": "conditioning_dim", "val": ": int = 2048"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (2048, 2048)"}, {"name": "num_attention_heads", "val": ": typing.Tuple[int] = (32, 32)"}, {"name": "down_num_layers_per_block", "val": ": typing.Tuple[int] = (8, 24)"}, {"name": "up_num_layers_per_block", "val": ": typing.Tuple[int] = (24, 8)"}, {"name": "down_blocks_repeat_mappers", "val": ": typing.Optional[typing.Tuple[int]] = (1, 1)"}, {"name": "up_blocks_repeat_mappers", "val": ": typing.Optional[typing.Tuple[int]] = (1, 1)"}, {"name": "block_types_per_layer", "val": ": typing.Tuple[typing.Tuple[str]] = (('SDCascadeResBlock', 'SDCascadeTimestepBlock', 'SDCascadeAttnBlock'), ('SDCascadeResBlock', 'SDCascadeTimestepBlock', 'SDCascadeAttnBlock'))"}, {"name": "clip_text_in_channels", "val": ": typing.Optional[int] = None"}, {"name": "clip_text_pooled_in_channels", "val": " = 1280"}, {"name": "clip_image_in_channels", "val": ": typing.Optional[int] = None"}, {"name": "clip_seq", "val": " = 4"}, {"name": "effnet_in_channels", "val": ": typing.Optional[int] = None"}, {"name": "pixel_mapper_in_channels", "val": ": typing.Optional[int] = None"}, {"name": "kernel_size", "val": " = 3"}, {"name": "dropout", "val": ": typing.Union[float, typing.Tuple[float]] = (0.1, 0.1)"}, {"name": "self_attn", "val": ": typing.Union[bool, typing.Tuple[bool]] = True"}, {"name": "timestep_conditioning_type", "val": ": typing.Tuple[str] = ('sca', 'crp')"}, {"name": "switch_level", "val": ": typing.Optional[typing.Tuple[bool]] = None"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/stable_cascade_unet.md" />

### CogVideoXTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/cogvideox_transformer3d.md

# CogVideoXTransformer3DModel

A Diffusion Transformer model for 3D data from [CogVideoX](https://github.com/THUDM/CogVideo) was introduced in [CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer](https://github.com/THUDM/CogVideo/blob/main/resources/CogVideoX.pdf) by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.float16).to("cuda")
```
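
The preloaded transformer can then be reused when assembling the full video pipeline, as in the sketch below (assuming the same `THUDM/CogVideoX-2b` checkpoint).

```python
import torch

from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.float16
)

# The pipeline reuses the preloaded transformer and loads the remaining components from the repository.
pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b", transformer=transformer, torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()  # optional: keep GPU memory usage low
```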

## CogVideoXTransformer3DModel[[diffusers.CogVideoXTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXTransformer3DModel</name><anchor>diffusers.CogVideoXTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/cogvideox_transformer_3d.py#L160</source><parameters>[{"name": "num_attention_heads", "val": ": int = 30"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": typing.Optional[int] = 16"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "time_embed_dim", "val": ": int = 512"}, {"name": "ofs_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "text_embed_dim", "val": ": int = 4096"}, {"name": "num_layers", "val": ": int = 30"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "attention_bias", "val": ": bool = True"}, {"name": "sample_width", "val": ": int = 90"}, {"name": "sample_height", "val": ": int = 60"}, {"name": "sample_frames", "val": ": int = 49"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "patch_size_t", "val": ": typing.Optional[int] = None"}, {"name": "temporal_compression_ratio", "val": ": int = 4"}, {"name": "max_text_seq_length", "val": ": int = 226"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "timestep_activation_fn", "val": ": str = 'silu'"}, {"name": "norm_elementwise_affine", "val": ": bool = True"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "spatial_interpolation_scale", "val": ": float = 1.875"}, {"name": "temporal_interpolation_scale", "val": ": float = 1.0"}, {"name": "use_rotary_positional_embeddings", "val": ": bool = False"}, {"name": "use_learned_positional_embeddings", "val": ": bool = False"}, {"name": "patch_bias", "val": ": bool = True"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, defaults to `30`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `64`) --
  The number of channels in each head.
- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `16`) --
  The number of channels in the output.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **time_embed_dim** (`int`, defaults to `512`) --
  Output dimension of timestep embeddings.
- **ofs_embed_dim** (`int`, *optional*, defaults to `None`) --
  Output dimension of "ofs" embeddings used in the CogVideoX-5b-I2V model (version 1.5).
- **text_embed_dim** (`int`, defaults to `4096`) --
  Input dimension of text embeddings from the text encoder.
- **num_layers** (`int`, defaults to `30`) --
  The number of layers of Transformer blocks to use.
- **dropout** (`float`, defaults to `0.0`) --
  The dropout probability to use.
- **attention_bias** (`bool`, defaults to `True`) --
  Whether to use bias in the attention projection layers.
- **sample_width** (`int`, defaults to `90`) --
  The width of the input latents.
- **sample_height** (`int`, defaults to `60`) --
  The height of the input latents.
- **sample_frames** (`int`, defaults to `49`) --
  The number of frames in the input latents. Note that this parameter was incorrectly initialized to 49
  instead of 13 (CogVideoX processes 13 latent frames at once in its default and recommended settings), and it
  cannot be corrected now without breaking backwards compatibility. To create a transformer that processes K
  latent frames, the correct value to pass here would be: ((K - 1) * temporal_compression_ratio + 1).
- **patch_size** (`int`, defaults to `2`) --
  The size of the patches to use in the patch embedding layer.
- **temporal_compression_ratio** (`int`, defaults to `4`) --
  The compression ratio across the temporal dimension. See documentation for `sample_frames`.
- **max_text_seq_length** (`int`, defaults to `226`) --
  The maximum sequence length of the input text embeddings.
- **activation_fn** (`str`, defaults to `"gelu-approximate"`) --
  Activation function to use in feed-forward.
- **timestep_activation_fn** (`str`, defaults to `"silu"`) --
  Activation function to use when generating the timestep embeddings.
- **norm_elementwise_affine** (`bool`, defaults to `True`) --
  Whether to use elementwise affine in normalization layers.
- **norm_eps** (`float`, defaults to `1e-5`) --
  The epsilon value to use in normalization layers.
- **spatial_interpolation_scale** (`float`, defaults to `1.875`) --
  Scaling factor to apply in 3D positional embeddings across spatial dimensions.
- **temporal_interpolation_scale** (`float`, defaults to `1.0`) --
  Scaling factor to apply in 3D positional embeddings across temporal dimensions.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data in [CogVideoX](https://github.com/THUDM/CogVideo).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.CogVideoXTransformer3DModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/cogvideox_transformer_3d.py#L395</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.CogVideoXTransformer3DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/cogvideox_transformer_3d.py#L360</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.CogVideoXTransformer3DModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/cogvideox_transformer_3d.py#L417</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/cogvideox_transformer3d.md" />

### Lumina2Transformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/lumina2_transformer2d.md

# Lumina2Transformer2DModel

A Diffusion Transformer model for 2D image-like data introduced in [Lumina Image 2.0](https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0) by Alpha-VLLM.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import Lumina2Transformer2DModel

transformer = Lumina2Transformer2DModel.from_pretrained("Alpha-VLLM/Lumina-Image-2.0", subfolder="transformer", torch_dtype=torch.bfloat16)
```
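
The preloaded transformer can also be passed to the text-to-image pipeline. The snippet below is a sketch, assuming the `Lumina2Pipeline` class and the checkpoint layout shown above.

```python
import torch

from diffusers import Lumina2Pipeline, Lumina2Transformer2DModel

transformer = Lumina2Transformer2DModel.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Reuse the preloaded transformer in the full pipeline.
pipeline = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
)
```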

## Lumina2Transformer2DModel[[diffusers.Lumina2Transformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Lumina2Transformer2DModel</name><anchor>diffusers.Lumina2Transformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_lumina2.py#L325</source><parameters>[{"name": "sample_size", "val": ": int = 128"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "hidden_size", "val": ": int = 2304"}, {"name": "num_layers", "val": ": int = 26"}, {"name": "num_refiner_layers", "val": ": int = 2"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "num_kv_heads", "val": ": int = 8"}, {"name": "multiple_of", "val": ": int = 256"}, {"name": "ffn_dim_multiplier", "val": ": typing.Optional[float] = None"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "scaling_factor", "val": ": float = 1.0"}, {"name": "axes_dim_rope", "val": ": typing.Tuple[int, int, int] = (32, 32, 32)"}, {"name": "axes_lens", "val": ": typing.Tuple[int, int, int] = (300, 512, 512)"}, {"name": "cap_feat_dim", "val": ": int = 1024"}]</parameters><paramsdesc>- **sample_size** (`int`) -- The width of the latent images. This is fixed during training since
  it is used to learn a number of position embeddings.
- **patch_size** (`int`, *optional*, defaults to 2) --
  The size of each patch in the image. This parameter defines the resolution of patches fed into the model.
- **in_channels** (`int`, *optional*, defaults to 16) --
  The number of input channels for the model. Typically, this matches the number of channels in the input
  images.
- **hidden_size** (`int`, *optional*, defaults to 2304) --
  The dimensionality of the hidden layers in the model. This parameter determines the width of the model's
  hidden representations.
- **num_layers** (`int`, *optional*, defaults to 26) --
  The number of layers in the model. This defines the depth of the neural network.
- **num_attention_heads** (`int`, *optional*, defaults to 24) --
  The number of attention heads in each attention layer. This parameter specifies how many separate attention
  mechanisms are used.
- **num_kv_heads** (`int`, *optional*, defaults to 8) --
  The number of key-value heads in the attention mechanism, if different from the number of attention heads.
  If None, it defaults to num_attention_heads.
- **multiple_of** (`int`, *optional*, defaults to 256) --
  A factor that the hidden size should be a multiple of. This can help optimize certain hardware
  configurations.
- **ffn_dim_multiplier** (`float`, *optional*) --
  A multiplier for the dimensionality of the feed-forward network. If None, it uses a default value based on
  the model configuration.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) --
  A small value added to the denominator for numerical stability in normalization layers.
- **scaling_factor** (`float`, *optional*, defaults to 1.0) --
  A scaling factor applied to certain parameters or layers in the model. This can be used for adjusting the
  overall scale of the model's operations.</paramsdesc><paramgroups>0</paramgroups></docstring>

Lumina2NextDiT: Diffusion model with a Transformer backbone.




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/lumina2_transformer2d.md" />

### AutoencoderKLAllegro
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_allegro.md

# AutoencoderKLAllegro

The 3D variational autoencoder (VAE) model with KL loss used in [Allegro](https://github.com/rhymes-ai/Allegro) was introduced in [Allegro: Open the Black Box of Commercial-Level Video Generation Model](https://huggingface.co/papers/2410.15458) by RhymesAI.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLAllegro

vae = AutoencoderKLAllegro.from_pretrained("rhymes-ai/Allegro", subfolder="vae", torch_dtype=torch.float32).to("cuda")
```

## AutoencoderKLAllegro[[diffusers.AutoencoderKLAllegro]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLAllegro</name><anchor>diffusers.AutoencoderKLAllegro</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_allegro.py#L676</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('AllegroDownBlock3D', 'AllegroDownBlock3D', 'AllegroDownBlock3D', 'AllegroDownBlock3D')"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('AllegroUpBlock3D', 'AllegroUpBlock3D', 'AllegroUpBlock3D', 'AllegroUpBlock3D')"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512)"}, {"name": "temporal_downsample_blocks", "val": ": typing.Tuple[bool, ...] = (True, True, False, False)"}, {"name": "temporal_upsample_blocks", "val": ": typing.Tuple[bool, ...] = (False, True, True, False)"}, {"name": "latent_channels", "val": ": int = 4"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "temporal_compression_ratio", "val": ": float = 4"}, {"name": "sample_size", "val": ": int = 320"}, {"name": "scaling_factor", "val": ": float = 0.13"}, {"name": "force_upcast", "val": ": bool = True"}]</parameters><paramsdesc>- **in_channels** (int, defaults to `3`) --
  Number of channels in the input image.
- **out_channels** (int, defaults to `3`) --
  Number of channels in the output.
- **down_block_types** (`Tuple[str, ...]`, defaults to `("AllegroDownBlock3D", "AllegroDownBlock3D", "AllegroDownBlock3D", "AllegroDownBlock3D")`) --
  Tuple of strings denoting which types of down blocks to use.
- **up_block_types** (`Tuple[str, ...]`, defaults to `("AllegroUpBlock3D", "AllegroUpBlock3D", "AllegroUpBlock3D", "AllegroUpBlock3D")`) --
  Tuple of strings denoting which types of up blocks to use.
- **block_out_channels** (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512)`) --
  Tuple of integers denoting number of output channels in each block.
- **temporal_downsample_blocks** (`Tuple[bool, ...]`, defaults to `(True, True, False, False)`) --
  Tuple of booleans denoting which blocks to enable temporal downsampling in.
- **latent_channels** (`int`, defaults to `4`) --
  Number of channels in latents.
- **layers_per_block** (`int`, defaults to `2`) --
  Number of resnet or attention or temporal convolution layers per down/up block.
- **act_fn** (`str`, defaults to `"silu"`) --
  The activation function to use.
- **norm_num_groups** (`int`, defaults to `32`) --
  Number of groups to use in normalization layers.
- **temporal_compression_ratio** (`int`, defaults to `4`) --
  Ratio by which temporal dimension of samples are compressed.
- **sample_size** (`int`, defaults to `320`) --
  Default latent size.
- **scaling_factor** (`float`, defaults to `0.13`) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
- **force_upcast** (`bool`, defaults to `True`) --
  If enabled, it forces the VAE to run in `float32` for high image resolution pipelines, such as SD-XL. The
  VAE can be fine-tuned / trained to a lower range without losing too much precision, in which case
  `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Used in
[Allegro](https://github.com/rhymes-ai/Allegro).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLAllegro.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLAllegro.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLAllegro.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_allegro.py#L820</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLAllegro.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_allegro.py#L806</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLAllegro.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_allegro.py#L813</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLAllegro.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_allegro.py#L798</source><parameters>[]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
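
A minimal sketch of how these memory-saving switches are typically combined, reusing the `rhymes-ai/Allegro` VAE checkpoint loaded above:

```python
import torch
from diffusers import AutoencoderKLAllegro

vae = AutoencoderKLAllegro.from_pretrained("rhymes-ai/Allegro", subfolder="vae", torch_dtype=torch.float32).to("cuda")

# Split decoding into tiles and slices to trade speed for a much smaller memory peak.
vae.enable_tiling()
vae.enable_slicing()

# ... run the pipeline / decode latents ...

# Restore single-pass decoding once memory pressure is no longer a concern.
vae.disable_tiling()
vae.disable_slicing()
```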


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderKLAllegro.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_allegro.py#L1070</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.
- **generator** (`torch.Generator`, *optional*) --
  PyTorch random number generator.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_allegro.md" />

### VQModel
https://huggingface.co/docs/diffusers/main/api/models/vq.md

# VQModel

The VQ-VAE model was introduced in [Neural Discrete Representation Learning](https://huggingface.co/papers/1711.00937) by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL), the [VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel) works in a quantized latent space.
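
As a sketch, a [VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel) can be loaded from a pipeline repository that ships a VQ-VAE component; the repository below is only an example, and any latent-diffusion checkpoint with a `vqvae` subfolder works the same way.

```python
from diffusers import VQModel

# Example checkpoint: an unconditional latent-diffusion pipeline with a VQ-VAE decoder.
vqvae = VQModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="vqvae")
```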

The abstract from the paper is:

*Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.*

## VQModel[[diffusers.VQModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.VQModel</name><anchor>diffusers.VQModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vq_model.py#L40</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('DownEncoderBlock2D',)"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('UpDecoderBlock2D',)"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (64,)"}, {"name": "layers_per_block", "val": ": int = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "latent_channels", "val": ": int = 3"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "num_vq_embeddings", "val": ": int = 256"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "vq_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "scaling_factor", "val": ": float = 0.18215"}, {"name": "norm_type", "val": ": str = 'group'"}, {"name": "mid_block_add_attention", "val": " = True"}, {"name": "lookup_from_codebook", "val": " = False"}, {"name": "force_upcast", "val": " = False"}]</parameters><paramsdesc>- **in_channels** (int, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (int,  *optional*, defaults to 3) -- Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`) --
  Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`) --
  Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) --
  Tuple of block output channels.
- **layers_per_block** (`int`, *optional*, defaults to `1`) -- Number of layers per block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **latent_channels** (`int`, *optional*, defaults to `3`) -- Number of channels in the latent space.
- **sample_size** (`int`, *optional*, defaults to `32`) -- Sample input size.
- **num_vq_embeddings** (`int`, *optional*, defaults to `256`) -- Number of codebook vectors in the VQ-VAE.
- **norm_num_groups** (`int`, *optional*, defaults to `32`) -- Number of groups for normalization layers.
- **vq_embed_dim** (`int`, *optional*) -- Hidden dim of codebook vectors in the VQ-VAE.
- **scaling_factor** (`float`, *optional*, defaults to `0.18215`) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
- **norm_type** (`str`, *optional*, defaults to `"group"`) --
  Type of normalization layer to use. Can be one of `"group"` or `"spatial"`.</paramsdesc><paramgroups>0</paramgroups></docstring>

A VQ-VAE model for decoding latent representations.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.VQModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vq_model.py#L163</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [models.autoencoders.vq_model.VQEncoderOutput](/docs/diffusers/main/en/api/models/vq#diffusers.models.autoencoders.vq_model.VQEncoderOutput) instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[VQEncoderOutput](/docs/diffusers/main/en/api/models/vq#diffusers.models.autoencoders.vq_model.VQEncoderOutput) or `tuple`</rettype><retdesc>If return_dict is True, a [VQEncoderOutput](/docs/diffusers/main/en/api/models/vq#diffusers.models.autoencoders.vq_model.VQEncoderOutput) is returned, otherwise a
plain `tuple` is returned.</retdesc></docstring>

The [VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel) forward method.
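
A minimal sketch of the forward pass, using a randomly initialized model with the default configuration purely to illustrate the signature and output type:

```python
import torch
from diffusers import VQModel

model = VQModel()  # default config: 3 input channels, 32x32 sample size
sample = torch.randn(1, 3, 32, 32)

# The forward pass encodes, quantizes, and decodes the sample.
output = model(sample)
print(output.sample.shape)  # torch.Size([1, 3, 32, 32])
```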








</div></div>

## VQEncoderOutput[[diffusers.models.autoencoders.vq_model.VQEncoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vq_model.VQEncoderOutput</name><anchor>diffusers.models.autoencoders.vq_model.VQEncoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vq_model.py#L28</source><parameters>[{"name": "latents", "val": ": Tensor"}]</parameters><paramsdesc>- **latents** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The encoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of VQModel encoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/vq.md" />

### Models
https://huggingface.co/docs/diffusers/main/api/models/overview.md

# Models

🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution  \\(p_{\theta}(x_{t-1}|x_{t})\\).

All models are built from the base [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin) class which is a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) providing basic functionality for saving and loading models, locally and from the Hugging Face Hub.

## ModelMixin[[diffusers.ModelMixin]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ModelMixin</name><anchor>diffusers.ModelMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L233</source><parameters>[]</parameters></docstring>

Base class for all models.

[ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin) takes care of storing the model configuration and provides methods for loading, downloading and
saving models.

- **config_name** (`str`) -- Filename to save a model to when calling [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>compile_repeated_blocks</name><anchor>diffusers.ModelMixin.compile_repeated_blocks</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L1447</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Compiles *only* the frequently repeated sub-modules of a model (e.g. the Transformer layers) instead of
compiling the entire model. This technique, often called **regional compilation** (see the PyTorch recipe at
https://docs.pytorch.org/tutorials/recipes/regional_compilation.html), can reduce end-to-end compile time
substantially while preserving the runtime speed-ups you would expect from a full `torch.compile`.

The set of sub-modules to compile is discovered through the presence of the **`_repeated_blocks`** attribute in the
model definition. Define this attribute on your model subclass as a list/tuple of class names (strings). Every
module whose class name matches will be compiled.

Once discovered, each matching sub-module is compiled by calling `submodule.compile(*args, **kwargs)`. Any
positional or keyword arguments you supply to `compile_repeated_blocks` are forwarded verbatim to
`torch.compile`.
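
A minimal sketch, assuming a model class that defines `_repeated_blocks` (such as the Flux transformer); any extra arguments are forwarded to `torch.compile`:

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

# Only the repeated transformer blocks are compiled; the rest of the model runs eagerly.
transformer.compile_repeated_blocks(fullgraph=True)
```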


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dequantize</name><anchor>diffusers.ModelMixin.dequantize</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L794</source><parameters>[]</parameters></docstring>

Potentially dequantize the model in case it has been quantized by a quantization method that supports
dequantization.
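
A sketch of the round trip, assuming `bitsandbytes` is installed and a CUDA device is available; the checkpoint and 8-bit settings below are only illustrative:

```python
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
)

# Restore regular (non-quantized) weights.
model = model.dequantize()
```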


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_gradient_checkpointing</name><anchor>diffusers.ModelMixin.disable_gradient_checkpointing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L315</source><parameters>[]</parameters></docstring>

Deactivates gradient checkpointing for the current model (may be referred to as *activation checkpointing* or
*checkpoint activations* in other frameworks).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_npu_flash_attention</name><anchor>diffusers.ModelMixin.disable_npu_flash_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L346</source><parameters>[]</parameters></docstring>

Disable NPU flash attention from torch_npu.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xformers_memory_efficient_attention</name><anchor>diffusers.ModelMixin.disable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L431</source><parameters>[]</parameters></docstring>

Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_xla_flash_attention</name><anchor>diffusers.ModelMixin.disable_xla_flash_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L376</source><parameters>[]</parameters></docstring>

Disable the flash attention Pallas kernel for torch_xla.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_gradient_checkpointing</name><anchor>diffusers.ModelMixin.enable_gradient_checkpointing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L285</source><parameters>[{"name": "gradient_checkpointing_func", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **gradient_checkpointing_func** (`Callable`, *optional*) --
  The function to use for gradient checkpointing. If `None`, the default PyTorch checkpointing function
  is used (`torch.utils.checkpoint.checkpoint`).</paramsdesc><paramgroups>0</paramgroups></docstring>

Activates gradient checkpointing for the current model (may be referred to as *activation checkpointing* or
*checkpoint activations* in other frameworks).
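
A minimal sketch of enabling checkpointing before training (the checkpoint id mirrors the examples elsewhere on this page):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
unet.train()
unet.enable_gradient_checkpointing()  # trades extra compute for lower activation memory
```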




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_group_offload</name><anchor>diffusers.ModelMixin.enable_group_offload</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L522</source><parameters>[{"name": "onload_device", "val": ": device"}, {"name": "offload_device", "val": ": device = device(type='cpu')"}, {"name": "offload_type", "val": ": str = 'block_level'"}, {"name": "num_blocks_per_group", "val": ": typing.Optional[int] = None"}, {"name": "non_blocking", "val": ": bool = False"}, {"name": "use_stream", "val": ": bool = False"}, {"name": "record_stream", "val": ": bool = False"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "offload_to_disk_path", "val": ": typing.Optional[str] = None"}]</parameters></docstring>

Activates group offloading for the current model.

See [apply_group_offloading()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_group_offloading) for more information.

<ExampleCodeBlock anchor="diffusers.ModelMixin.enable_group_offload.example">

Example:

```python
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel

>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
...     "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
... )

>>> transformer.enable_group_offload(
...     onload_device=torch.device("cuda"),
...     offload_device=torch.device("cpu"),
...     offload_type="leaf_level",
...     use_stream=True,
... )
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_layerwise_casting</name><anchor>diffusers.ModelMixin.enable_layerwise_casting</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L437</source><parameters>[{"name": "storage_dtype", "val": ": dtype = torch.float8_e4m3fn"}, {"name": "compute_dtype", "val": ": typing.Optional[torch.dtype] = None"}, {"name": "skip_modules_pattern", "val": ": typing.Optional[typing.Tuple[str, ...]] = None"}, {"name": "skip_modules_classes", "val": ": typing.Optional[typing.Tuple[typing.Type[torch.nn.modules.module.Module], ...]] = None"}, {"name": "non_blocking", "val": ": bool = False"}]</parameters><paramsdesc>- **storage_dtype** (`torch.dtype`) --
  The dtype to which the model should be cast for storage.
- **compute_dtype** (`torch.dtype`) --
  The dtype to which the model weights should be cast during the forward pass.
- **skip_modules_pattern** (`Tuple[str, ...]`, *optional*) --
  A list of patterns to match the names of the modules to skip during the layerwise casting process. If
  set to `None`, default skip patterns are used to ignore certain internal layers of modules and PEFT
  layers.
- **skip_modules_classes** (`Tuple[Type[torch.nn.Module], ...]`, *optional*) --
  A list of module classes to skip during the layerwise casting process.
- **non_blocking** (`bool`, *optional*, defaults to `False`) --
  If `True`, the weight casting operations are non-blocking.</paramsdesc><paramgroups>0</paramgroups></docstring>

Activates layerwise casting for the current model.

Layerwise casting is a technique that casts the model weights to a lower precision dtype for storage but
upcasts them on-the-fly to a higher precision dtype for computation. This process can significantly reduce the
memory footprint from model weights, but may lead to some quality degradation in the outputs. Most degradations
are negligible, mostly stemming from weight casting in normalization and modulation layers.

By default, most models in diffusers set the `_skip_layerwise_casting_patterns` attribute to ignore patch
embedding, positional embedding and normalization layers. This is because these layers are most likely
precision-critical for quality. If you wish to change this behavior, you can set the
`_skip_layerwise_casting_patterns` attribute to `None`, or call
[apply_layerwise_casting()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_layerwise_casting) with custom arguments.

Example:
<ExampleCodeBlock anchor="diffusers.ModelMixin.enable_layerwise_casting.example">

Using [enable_layerwise_casting()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.enable_layerwise_casting):

```python
>>> import torch
>>> from diffusers import CogVideoXTransformer3DModel

>>> transformer = CogVideoXTransformer3DModel.from_pretrained(
...     "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
... )

>>> # Enable layerwise casting via the model, which ignores certain modules by default
>>> transformer.enable_layerwise_casting(storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16)
```

</ExampleCodeBlock>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_npu_flash_attention</name><anchor>diffusers.ModelMixin.enable_npu_flash_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L339</source><parameters>[]</parameters></docstring>

Enable NPU flash attention from torch_npu.



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xformers_memory_efficient_attention</name><anchor>diffusers.ModelMixin.enable_xformers_memory_efficient_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L399</source><parameters>[{"name": "attention_op", "val": ": typing.Optional[typing.Callable] = None"}]</parameters><paramsdesc>- **attention_op** (`Callable`, *optional*) --
  Override the default `None` operator for use as `op` argument to the
  [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
  function of xFormers.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).

When this option is enabled, you should observe lower GPU memory usage and a potential speed up during
inference. Speed up during training is not guaranteed.

> [!WARNING]
> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
> precedence.



<ExampleCodeBlock anchor="diffusers.ModelMixin.enable_xformers_memory_efficient_attention.example">

Examples:

```py
>>> import torch
>>> from diffusers import UNet2DConditionModel
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> model = UNet2DConditionModel.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16
... )
>>> model = model.to("cuda")
>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_xla_flash_attention</name><anchor>diffusers.ModelMixin.enable_xla_flash_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L370</source><parameters>[{"name": "partition_spec", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

Enable the flash attention Pallas kernel for torch_xla.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.ModelMixin.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L806</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **torch_dtype** (`torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **from_flax** (`bool`, *optional*, defaults to `False`) --
  Load the model weights from a Flax checkpoint save file.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **device_map** (`Union[int, str, torch.device]` or `Dict[str, Union[int, str, torch.device]]`, *optional*) --
  A map that specifies where each submodule should go. It doesn't need to be defined for each
  parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the
  same device. Defaults to `None`, meaning that the model will be loaded on CPU.

  Examples:

```py
>>> from diffusers import AutoModel
>>> import torch

>>> # This works.
>>> model = AutoModel.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", device_map="cuda"
... )
>>> # This also works (integer accelerator device ID).
>>> model = AutoModel.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", device_map=0
... )
>>> # Specifying a supported offloading strategy like "auto" also works.
>>> model = AutoModel.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", device_map="auto"
... )
>>> # Specifying a dictionary as `device_map` also works.
>>> model = AutoModel.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     subfolder="unet",
...     device_map={"": torch.device("cuda")},
... )
```

  Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For
  more information about each option see [designing a device
  map](https://huggingface.co/docs/accelerate/en/concept_guides/big_model_inference#the-devicemap). You
  can also refer to the [Diffusers-specific
  documentation](https://huggingface.co/docs/diffusers/main/en/training/distributed_inference#model-sharding)
  for more concrete examples.
- **max_memory** (`Dict`, *optional*) --
  A dictionary mapping device identifiers to their maximum memory. Will default to the maximum memory
  available for each GPU and the available CPU RAM if unset.
- **offload_folder** (`str` or `os.PathLike`, *optional*) --
  The path to offload weights if `device_map` contains the value `"disk"`.
- **offload_state_dict** (`bool`, *optional*) --
  If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
  the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
  when there is some disk offload.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.
- **variant** (`str`, *optional*) --
  Load weights from a specified `variant` filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the `safetensors` weights are downloaded if they're available **and** if the
  `safetensors` library is installed. If set to `True`, the model is forcibly loaded from `safetensors`
  weights. If set to `False`, `safetensors` weights are not loaded.
- **disable_mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to disable mmap when loading a Safetensors model. This option can perform better when the model
  is on a network mount or hard drive, which may not handle the seeky-ness of mmap very well.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a pretrained PyTorch model from a pretrained model configuration.

The model is set in evaluation mode - `model.eval()` - by default, and dropout modules are deactivated. To
train the model, set it back in training mode with `model.train()`.



> [!TIP]
> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log in with
> `hf auth login`. You can also activate the special
> ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
> firewalled environment.

<ExampleCodeBlock anchor="diffusers.ModelMixin.from_pretrained.example">

Example:

```py
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="diffusers.ModelMixin.from_pretrained.example-2">

If you get the error message below, you need to finetune the weights for your downstream task:

```bash
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_memory_footprint</name><anchor>diffusers.ModelMixin.get_memory_footprint</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L1849</source><parameters>[{"name": "return_buffers", "val": " = True"}]</parameters><paramsdesc>- **return_buffers** (`bool`, *optional*, defaults to `True`) --
  Whether to return the size of the buffer tensors in the computation of the memory footprint. Buffers
  are tensors that do not require gradients and are not registered as parameters, e.g. the mean and std in
  batch norm layers. Please see: https://discuss.pytorch.org/t/what-pytorch-means-by-buffers/120266/2

Get the memory footprint of a model. This will return the memory footprint of the current model in bytes.
Useful to benchmark the memory footprint of the current model and design some tests. Solution inspired from the
PyTorch discussions: https://discuss.pytorch.org/t/gpu-memory-that-model-uses/56822/2
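
For example, using the same checkpoint as the `num_parameters` example below:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
print(unet.get_memory_footprint())  # footprint in bytes, buffers included by default
```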




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>num_parameters</name><anchor>diffusers.ModelMixin.num_parameters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L1785</source><parameters>[{"name": "only_trainable", "val": ": bool = False"}, {"name": "exclude_embeddings", "val": ": bool = False"}]</parameters><paramsdesc>- **only_trainable** (`bool`, *optional*, defaults to `False`) --
  Whether or not to return only the number of trainable parameters.
- **exclude_embeddings** (`bool`, *optional*, defaults to `False`) --
  Whether or not to return only the number of non-embedding parameters.</paramsdesc><paramgroups>0</paramgroups><rettype>`int`</rettype><retdesc>The number of parameters.</retdesc></docstring>

Get number of (trainable or non-embedding) parameters in the module.







<ExampleCodeBlock anchor="diffusers.ModelMixin.num_parameters.example">

Example:

```py
from diffusers import UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
unet.num_parameters(only_trainable=True)
859520964
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>reset_attention_backend</name><anchor>diffusers.ModelMixin.reset_attention_backend</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L620</source><parameters>[]</parameters></docstring>

Resets the attention backend for the model. Following calls to `forward` will use the environment default, if
set, or the torch native scaled dot product attention.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>diffusers.ModelMixin.save_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L639</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "save_function", "val": ": typing.Optional[typing.Callable] = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "variant", "val": ": typing.Optional[str] = None"}, {"name": "max_shard_size", "val": ": typing.Union[int, str] = '10GB'"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save a model and its configuration file to. Will be created if it doesn't exist.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training when you
  need to call this function on all processes. In that case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **variant** (`str`, *optional*) --
  If specified, weights are saved in the format `pytorch_model.<variant>.bin`.
- **max_shard_size** (`int` or `str`, defaults to `"10GB"`) --
  The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than
  this size. If expressed as a string, it needs to be digits followed by a unit (like `"5GB"`).
  If expressed as an integer, the unit is bytes. Note that this limit will be decreased after a certain
  period of time (starting from Oct 2024) to allow users to upgrade to the latest version of `diffusers`.
  This is to establish a common default size for this argument across different libraries in the Hugging
  Face ecosystem (`transformers`, and `accelerate`, for example).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
  repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
  namespace).
- **kwargs** (`Dict[str, Any]`, *optional*) --
  Additional keyword arguments passed along to the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save a model and its configuration file to a directory so that it can be reloaded using the
[from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) class method.
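
A minimal sketch of the save/reload round trip:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
unet.save_pretrained("./my-finetuned-unet")  # writes safetensors weights plus config.json

# Later, reload from the local directory.
unet = UNet2DConditionModel.from_pretrained("./my-finetuned-unet")
```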




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attention_backend</name><anchor>diffusers.ModelMixin.set_attention_backend</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L585</source><parameters>[{"name": "backend", "val": ": str"}]</parameters><paramsdesc>- **backend** (`str`) --
  The name of the backend to set. Must be one of the available backends defined in
  `AttentionBackendName`. Available backends can be found in
  `diffusers.attention_dispatch.AttentionBackendName`. Defaults to torch native scaled dot product
  attention as backend.</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the attention backend for the model.
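
A sketch of switching backends on a dispatcher-aware transformer; the `"flash"` backend assumes the `flash-attn` package is installed, and `reset_attention_backend()` returns to the default:

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
transformer.set_attention_backend("flash")  # assumes flash-attn is available
# ... run inference ...
transformer.reset_attention_backend()  # back to torch's scaled dot product attention
```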




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_use_npu_flash_attention</name><anchor>diffusers.ModelMixin.set_use_npu_flash_attention</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_utils.py#L323</source><parameters>[{"name": "valid", "val": ": bool"}]</parameters></docstring>

Set the switch for NPU flash attention.


</div></div>

## PushToHubMixin[[diffusers.utils.PushToHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.utils.PushToHubMixin</name><anchor>diffusers.utils.PushToHubMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/hub_utils.py#L464</source><parameters>[]</parameters></docstring>

A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>diffusers.utils.PushToHubMixin.push_to_hub</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/hub_utils.py#L499</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": bool = False"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "variant", "val": ": typing.Optional[str] = None"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository you want to push your model, scheduler, or pipeline files to. It should
  contain your organization name when pushing to an organization. `repo_id` can also be a path to a local
  directory.
- **commit_message** (`str`, *optional*) --
  Message to commit while pushing. Defaults to `"Upload {object}"`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the
  organization's default is private. This value is ignored if the repo already exists.
- **token** (`str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. The token generated when running `hf
  auth login` (stored in `~/.huggingface`).
- **create_pr** (`bool`, *optional*, defaults to `False`) --
  Whether or not to create a PR with the uploaded files or directly commit.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether or not to convert the model weights to the `safetensors` format.
- **variant** (`str`, *optional*) --
  If specified, weights are saved in the format `pytorch_model.<variant>.bin`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub.



<ExampleCodeBlock anchor="diffusers.utils.PushToHubMixin.push_to_hub.example">

Examples:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")

# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")

# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/overview.md" />

### HunyuanVideoTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/hunyuan_video_transformer_3d.md

# HunyuanVideoTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [HunyuanVideo: A Systematic Framework For Large Video Generative Models](https://huggingface.co/papers/2412.03603) by Tencent.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import HunyuanVideoTransformer3DModel

transformer = HunyuanVideoTransformer3DModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## HunyuanVideoTransformer3DModel[[diffusers.HunyuanVideoTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanVideoTransformer3DModel</name><anchor>diffusers.HunyuanVideoTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_hunyuan_video.py#L822</source><parameters>[{"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_layers", "val": ": int = 20"}, {"name": "num_single_layers", "val": ": int = 40"}, {"name": "num_refiner_layers", "val": ": int = 2"}, {"name": "mlp_ratio", "val": ": float = 4.0"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "patch_size_t", "val": ": int = 1"}, {"name": "qk_norm", "val": ": str = 'rms_norm'"}, {"name": "guidance_embeds", "val": ": bool = True"}, {"name": "text_embed_dim", "val": ": int = 4096"}, {"name": "pooled_projection_dim", "val": ": int = 768"}, {"name": "rope_theta", "val": ": float = 256.0"}, {"name": "rope_axes_dim", "val": ": typing.Tuple[int] = (16, 56, 56)"}, {"name": "image_condition_type", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **out_channels** (`int`, defaults to `16`) --
  The number of channels in the output.
- **num_attention_heads** (`int`, defaults to `24`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of channels in each head.
- **num_layers** (`int`, defaults to `20`) --
  The number of layers of dual-stream blocks to use.
- **num_single_layers** (`int`, defaults to `40`) --
  The number of layers of single-stream blocks to use.
- **num_refiner_layers** (`int`, defaults to `2`) --
  The number of layers of refiner blocks to use.
- **mlp_ratio** (`float`, defaults to `4.0`) --
  The ratio of the hidden layer size to the input size in the feedforward network.
- **patch_size** (`int`, defaults to `2`) --
  The size of the spatial patches to use in the patch embedding layer.
- **patch_size_t** (`int`, defaults to `1`) --
  The size of the temporal patches to use in the patch embedding layer.
- **qk_norm** (`str`, defaults to `rms_norm`) --
  The normalization to use for the query and key projections in the attention layers.
- **guidance_embeds** (`bool`, defaults to `True`) --
  Whether to use guidance embeddings in the model.
- **text_embed_dim** (`int`, defaults to `4096`) --
  Input dimension of text embeddings from the text encoder.
- **pooled_projection_dim** (`int`, defaults to `768`) --
  The dimension of the pooled projection of the text embeddings.
- **rope_theta** (`float`, defaults to `256.0`) --
  The value of theta to use in the RoPE layer.
- **rope_axes_dim** (`Tuple[int]`, defaults to `(16, 56, 56)`) --
  The dimensions of the axes to use in the RoPE layer.
- **image_condition_type** (`str`, *optional*, defaults to `None`) --
  The type of image conditioning to use. If `None`, no image conditioning is used. If `latent_concat`, the
  image is concatenated to the latent stream. If `token_replace`, the image is used to replace first-frame
  tokens in the latent stream and apply conditioning.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data used in [HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.HunyuanVideoTransformer3DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_hunyuan_video.py#L997</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
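The snippet below is a minimal, hedged sketch of the two call patterns (a single processor instance vs. a dict keyed by processor path); the checkpoint id is only an example, and which processor classes are functionally compatible depends on the model.

```python
import torch
from diffusers import HunyuanVideoTransformer3DModel

# Example checkpoint id, shown for illustration only
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Inspect the processors currently set on every `Attention` layer
current_processors = transformer.attn_processors

# Re-set them with a dict keyed by processor path (here, fresh instances of the same classes);
# passing a single processor instance instead applies it to all layers at once
transformer.set_attn_processor({name: proc.__class__() for name, proc in current_processors.items()})
```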




</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/hunyuan_video_transformer_3d.md" />

### AutoencoderDC
https://huggingface.co/docs/diffusers/main/api/models/autoencoder_dc.md

# AutoencoderDC

The 2D Autoencoder model used in [SANA](https://huggingface.co/papers/2410.10629) and introduced in [DCAE](https://huggingface.co/papers/2410.10733) by authors Junyu Chen\*, Han Cai\*, Junsong Chen, Enze Xie, Shang Yang, Haotian Tang, Muyang Li, Yao Lu, Song Han from MIT HAN Lab.

The abstract from the paper is:

*We present Deep Compression Autoencoder (DC-AE), a new family of autoencoder models for accelerating high-resolution diffusion models. Existing autoencoder models have demonstrated impressive results at a moderate spatial compression ratio (e.g., 8x), but fail to maintain satisfactory reconstruction accuracy for high spatial compression ratios (e.g., 64x). We address this challenge by introducing two key techniques: (1) Residual Autoencoding, where we design our models to learn residuals based on the space-to-channel transformed features to alleviate the optimization difficulty of high spatial-compression autoencoders; (2) Decoupled High-Resolution Adaptation, an efficient decoupled three-phases training strategy for mitigating the generalization penalty of high spatial-compression autoencoders. With these designs, we improve the autoencoder's spatial compression ratio up to 128 while maintaining the reconstruction quality. Applying our DC-AE to latent diffusion models, we achieve significant speedup without accuracy drop. For example, on ImageNet 512x512, our DC-AE provides 19.1x inference speedup and 17.9x training speedup on H100 GPU for UViT-H while achieving a better FID, compared with the widely used SD-VAE-f8 autoencoder. Our code is available at [this https URL](https://github.com/mit-han-lab/efficientvit).*

The following DCAE models are released and supported in Diffusers.

| Diffusers format | Original format |
|:----------------:|:---------------:|
| [`mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers) | [`mit-han-lab/dc-ae-f32c32-sana-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0)
| [`mit-han-lab/dc-ae-f32c32-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f32c32-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0)
| [`mit-han-lab/dc-ae-f32c32-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f32c32-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0)
| [`mit-han-lab/dc-ae-f64c128-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f64c128-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-in-1.0)
| [`mit-han-lab/dc-ae-f64c128-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f64c128-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-mix-1.0)
| [`mit-han-lab/dc-ae-f128c512-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0)
| [`mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0)

This model was contributed by [lawrence-cj](https://github.com/lawrence-cj).

Load a model in Diffusers format with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained).

```python
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained("mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32).to("cuda")
```
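
Once loaded, the autoencoder can encode images into the compressed latent space and decode them back. The snippet below is a minimal sketch; the latent shape assumes the `f32c32` variant (32x spatial compression, 32 latent channels), and the scaling step follows the `scaling_factor` convention documented further down.

```python
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32
).to("cuda")

# Dummy image batch in [-1, 1]
image = torch.randn(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    latent = ae.encode(image, return_dict=False)[0]           # (1, 32, 16, 16) for f32c32
    z = latent * ae.config.scaling_factor                      # scale before the diffusion model
    reconstruction = ae.decode(z / ae.config.scaling_factor, return_dict=False)[0]
```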

## Load a model in Diffusers via `from_single_file`

```python
from diffusers import AutoencoderDC

ckpt_path = "https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0/blob/main/model.safetensors"
model = AutoencoderDC.from_single_file(ckpt_path)
```

The `AutoencoderDC` model has `in` and `mix` single file checkpoint variants that share the same checkpoint keys but use different scaling factors. Diffusers cannot infer the correct config to use from the checkpoint alone, so it defaults to the `mix` variant config. To override this, pass the `config` argument when loading an `in` variant checkpoint with single file loading.

```python
from diffusers import AutoencoderDC

ckpt_path = "https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0/blob/main/model.safetensors"
model = AutoencoderDC.from_single_file(ckpt_path, config="mit-han-lab/dc-ae-f128c512-in-1.0-diffusers")
```


## AutoencoderDC[[diffusers.AutoencoderDC]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderDC</name><anchor>diffusers.AutoencoderDC</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_dc.py#L381</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "latent_channels", "val": ": int = 32"}, {"name": "attention_head_dim", "val": ": int = 32"}, {"name": "encoder_block_types", "val": ": typing.Union[str, typing.Tuple[str]] = 'ResBlock'"}, {"name": "decoder_block_types", "val": ": typing.Union[str, typing.Tuple[str]] = 'ResBlock'"}, {"name": "encoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512, 1024, 1024)"}, {"name": "decoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512, 1024, 1024)"}, {"name": "encoder_layers_per_block", "val": ": typing.Tuple[int] = (2, 2, 2, 3, 3, 3)"}, {"name": "decoder_layers_per_block", "val": ": typing.Tuple[int] = (3, 3, 3, 3, 3, 3)"}, {"name": "encoder_qkv_multiscales", "val": ": typing.Tuple[typing.Tuple[int, ...], ...] = ((), (), (), (5,), (5,), (5,))"}, {"name": "decoder_qkv_multiscales", "val": ": typing.Tuple[typing.Tuple[int, ...], ...] = ((), (), (), (5,), (5,), (5,))"}, {"name": "upsample_block_type", "val": ": str = 'pixel_shuffle'"}, {"name": "downsample_block_type", "val": ": str = 'pixel_unshuffle'"}, {"name": "decoder_norm_types", "val": ": typing.Union[str, typing.Tuple[str]] = 'rms_norm'"}, {"name": "decoder_act_fns", "val": ": typing.Union[str, typing.Tuple[str]] = 'silu'"}, {"name": "encoder_out_shortcut", "val": ": bool = True"}, {"name": "decoder_in_shortcut", "val": ": bool = True"}, {"name": "decoder_conv_act_fn", "val": ": str = 'relu'"}, {"name": "scaling_factor", "val": ": float = 1.0"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `3`) --
  The number of input channels in samples.
- **latent_channels** (`int`, defaults to `32`) --
  The number of channels in the latent space representation.
- **encoder_block_types** (`Union[str, Tuple[str]]`, defaults to `"ResBlock"`) --
  The type(s) of block to use in the encoder.
- **decoder_block_types** (`Union[str, Tuple[str]]`, defaults to `"ResBlock"`) --
  The type(s) of block to use in the decoder.
- **encoder_block_out_channels** (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512, 1024, 1024)`) --
  The number of output channels for each block in the encoder.
- **decoder_block_out_channels** (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512, 1024, 1024)`) --
  The number of output channels for each block in the decoder.
- **encoder_layers_per_block** (`Tuple[int]`, defaults to `(2, 2, 2, 3, 3, 3)`) --
  The number of layers per block in the encoder.
- **decoder_layers_per_block** (`Tuple[int]`, defaults to `(3, 3, 3, 3, 3, 3)`) --
  The number of layers per block in the decoder.
- **encoder_qkv_multiscales** (`Tuple[Tuple[int, ...], ...]`, defaults to `((), (), (), (5,), (5,), (5,))`) --
  Multi-scale configurations for the encoder's QKV (query-key-value) transformations.
- **decoder_qkv_multiscales** (`Tuple[Tuple[int, ...], ...]`, defaults to `((), (), (), (5,), (5,), (5,))`) --
  Multi-scale configurations for the decoder's QKV (query-key-value) transformations.
- **upsample_block_type** (`str`, defaults to `"pixel_shuffle"`) --
  The type of block to use for upsampling in the decoder.
- **downsample_block_type** (`str`, defaults to `"pixel_unshuffle"`) --
  The type of block to use for downsampling in the encoder.
- **decoder_norm_types** (`Union[str, Tuple[str]]`, defaults to `"rms_norm"`) --
  The normalization type(s) to use in the decoder.
- **decoder_act_fns** (`Union[str, Tuple[str]]`, defaults to `"silu"`) --
  The activation function(s) to use in the decoder.
- **encoder_out_shortcut**  (`bool`, defaults to `True`) --
  Whether to use shortcut at the end of the encoder.
- **decoder_in_shortcut** (`bool`, defaults to `True`) --
  Whether to use shortcut at the beginning of the decoder.
- **decoder_conv_act_fn** (`str`, defaults to `"relu"`) --
  The activation function to use at the end of the decoder.
- **scaling_factor** (`float`, defaults to `1.0`) --
  The multiplicative inverse of the root mean square of the latent features. This is used to scale the latent
  space to have unit variance when training the diffusion model. The latents are scaled with the formula `z =
  z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back
  to the original scale with the formula: `z = 1 / scaling_factor * z`.</paramsdesc><paramgroups>0</paramgroups></docstring>

An Autoencoder model introduced in [DCAE](https://huggingface.co/papers/2410.10733) and used in
[SANA](https://huggingface.co/papers/2410.10629).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderDC.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderDC.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderDC.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_dc.py#L553</source><parameters>[]</parameters></docstring>

Disable sliced AE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderDC.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_dc.py#L539</source><parameters>[]</parameters></docstring>

Disable tiled AE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderDC.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_dc.py#L546</source><parameters>[]</parameters></docstring>

Enable sliced AE decoding. When this option is enabled, the AE will split the input tensor in slices to compute
decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderDC.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_dc.py#L507</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The stride between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts
  produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled AE decoding. When this option is enabled, the AE will split the input tensor into tiles to compute
decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
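
For example, slicing and tiling can be toggled on a loaded model before decoding large batches or high-resolution latents; a short sketch using the methods documented here:

```python
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32
).to("cuda")

# Lower peak memory: decode per-sample slices and spatial tiles
ae.enable_slicing()
ae.enable_tiling()

# ... encode / decode as usual ...

# Restore single-pass decoding
ae.disable_slicing()
ae.disable_tiling()
```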




</div></div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoder_dc.md" />

### TransformerTemporalModel
https://huggingface.co/docs/diffusers/main/api/models/transformer_temporal.md

# TransformerTemporalModel

A Transformer model for video-like data.

## TransformerTemporalModel[[diffusers.TransformerTemporalModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TransformerTemporalModel</name><anchor>diffusers.TransformerTemporalModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_temporal.py#L41</source><parameters>[{"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 88"}, {"name": "in_channels", "val": ": typing.Optional[int] = None"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 1"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = None"}, {"name": "attention_bias", "val": ": bool = False"}, {"name": "sample_size", "val": ": typing.Optional[int] = None"}, {"name": "activation_fn", "val": ": str = 'geglu'"}, {"name": "norm_elementwise_affine", "val": ": bool = True"}, {"name": "double_self_attention", "val": ": bool = True"}, {"name": "positional_embeddings", "val": ": typing.Optional[str] = None"}, {"name": "num_positional_embeddings", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, *optional*, defaults to 16) -- The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, *optional*, defaults to 88) -- The number of channels in each head.
- **in_channels** (`int`, *optional*) --
  The number of channels in the input and output (specify if the input is **continuous**).
- **num_layers** (`int`, *optional*, defaults to 1) -- The number of layers of Transformer blocks to use.
- **dropout** (`float`, *optional*, defaults to 0.0) -- The dropout probability to use.
- **cross_attention_dim** (`int`, *optional*) -- The number of `encoder_hidden_states` dimensions to use.
- **attention_bias** (`bool`, *optional*) --
  Configure if the `TransformerBlock` attention should contain a bias parameter.
- **sample_size** (`int`, *optional*) -- The width of the latent images (specify if the input is **discrete**).
  This is fixed during training since it is used to learn a number of position embeddings.
- **activation_fn** (`str`, *optional*, defaults to `"geglu"`) --
  Activation function to use in feed-forward. See `diffusers.models.activations.get_activation` for supported
  activation functions.
- **norm_elementwise_affine** (`bool`, *optional*) --
  Configure if the `TransformerBlock` should use learnable elementwise affine parameters for normalization.
- **double_self_attention** (`bool`, *optional*) --
  Configure if each `TransformerBlock` should contain two self-attention layers.
- **positional_embeddings** (`str`, *optional*) --
  The type of positional embeddings to apply to the sequence input before use.
- **num_positional_embeddings** (`int`, *optional*) --
  The maximum length of the sequence over which to apply positional embeddings.

A Transformer model for video-like data.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.TransformerTemporalModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_temporal.py#L123</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "timestep", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "class_labels", "val": ": LongTensor = None"}, {"name": "num_frames", "val": ": int = 1"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.Tensor` of shape `(batch size, channel, height, width)` if continuous) --
  Input hidden_states.
- **encoder_hidden_states** ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*) --
  Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
  self-attention.
- **timestep** ( `torch.LongTensor`, *optional*) --
  Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
- **class_labels** ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*) --
  Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in
  `AdaLayerZeroNorm`.
- **num_frames** (`int`, *optional*, defaults to 1) --
  The number of frames to be processed per batch. This is used to reshape the hidden states.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [TransformerTemporalModelOutput](/docs/diffusers/main/en/api/models/transformer_temporal#diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput)
  instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[TransformerTemporalModelOutput](/docs/diffusers/main/en/api/models/transformer_temporal#diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput) or `tuple`</rettype><retdesc>If `return_dict` is True, an
[TransformerTemporalModelOutput](/docs/diffusers/main/en/api/models/transformer_temporal#diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput) is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The `TransformerTemporal` forward method.
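
A minimal, self-contained sketch of the call: frames are flattened into the batch dimension and `num_frames` tells the model how to reshape them. The small configuration below is chosen only for illustration.

```python
import torch
from diffusers import TransformerTemporalModel

model = TransformerTemporalModel(
    num_attention_heads=2,
    attention_head_dim=32,
    in_channels=64,  # must be divisible by the default norm_num_groups (32)
)

batch_size, num_frames = 1, 4
# (batch_size * num_frames, channels, height, width)
hidden_states = torch.randn(batch_size * num_frames, 64, 8, 8)

out = model(hidden_states, num_frames=num_frames).sample
print(out.shape)  # torch.Size([4, 64, 8, 8])
```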








</div></div>

## TransformerTemporalModelOutput[[diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput</name><anchor>diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_temporal.py#L29</source><parameters>[{"name": "sample", "val": ": Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size x num_frames, num_channels, height, width)`) --
  The hidden states output conditioned on `encoder_hidden_states` input.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [TransformerTemporalModel](/docs/diffusers/main/en/api/models/transformer_temporal#diffusers.TransformerTemporalModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/transformer_temporal.md" />

### SparseControlNetModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet_sparsectrl.md

# SparseControlNetModel

SparseControlNetModel is an implementation of ControlNet for [AnimateDiff](https://huggingface.co/papers/2307.04725).

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

The SparseCtrl version of ControlNet was introduced in [SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models](https://huggingface.co/papers/2311.16933) for achieving controlled generation in text-to-video diffusion models by Yuwei Guo, Ceyuan Yang, Anyi Rao, Maneesh Agrawala, Dahua Lin, and Bo Dai.

The abstract from the paper is:

*The development of text-to-video (T2V), i.e., generating videos with a given text prompt, has been significantly advanced in recent years. However, relying solely on text prompts often results in ambiguous frame composition due to spatial uncertainty. The research community thus leverages the dense structure signals, e.g., per-frame depth/edge sequences, to enhance controllability, whose collection accordingly increases the burden of inference. In this work, we present SparseCtrl to enable flexible structure control with temporally sparse signals, requiring only one or a few inputs, as shown in Figure 1. It incorporates an additional condition encoder to process these sparse signals while leaving the pre-trained T2V model untouched. The proposed approach is compatible with various modalities, including sketches, depth maps, and RGB images, providing more practical control for video generation and promoting applications such as storyboarding, depth rendering, keyframe animation, and interpolation. Extensive experiments demonstrate the generalization of SparseCtrl on both original and personalized T2V generators. Codes and models will be publicly available at [this https URL](https://guoyww.github.io/projects/SparseCtrl).*

## Example for loading SparseControlNetModel

```python
import torch
from diffusers import SparseControlNetModel

# Load the fp32 checkpoints, casting the weights to float16
# 1. Scribble checkpoint
controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-scribble", torch_dtype=torch.float16)

# 2. RGB checkpoint
controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16)

# To load the fp16 variant instead, pass `variant="fp16"` as an additional argument
```
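
In practice, the controlnet is plugged into the AnimateDiff SparseCtrl pipeline together with a motion adapter. The following is only a hedged sketch: the base model, motion adapter, prompt, and image path are placeholders, and the exact call arguments should be checked against the AnimateDiff pipeline documentation.

```python
import torch
from diffusers import AnimateDiffSparseControlNetPipeline, MotionAdapter, SparseControlNetModel
from diffusers.utils import export_to_gif, load_image

controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectrl-scribble", torch_dtype=torch.float16)
motion_adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16)

pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder SD 1.5 base
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A single sparse conditioning frame steers the whole clip
scribble = load_image("scribble.png")  # placeholder path
video = pipe(
    prompt="an aerial view of a cyberpunk city at night",
    conditioning_frames=[scribble],
    controlnet_frame_indices=[0],
    num_frames=16,
    num_inference_steps=25,
).frames[0]
export_to_gif(video, "output.gif")
```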

## SparseControlNetModel[[diffusers.SparseControlNetModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SparseControlNetModel</name><anchor>diffusers.SparseControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L96</source><parameters>[{"name": "in_channels", "val": ": int = 4"}, {"name": "conditioning_channels", "val": ": int = 4"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion')"}, {"name": "only_cross_attention", "val": ": typing.Union[bool, typing.Tuple[bool]] = False"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": int = 768"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 1"}, {"name": "transformer_layers_per_mid_block", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = None"}, {"name": "temporal_transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 1"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 8"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...], NoneType] = None"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "global_pool_conditions", "val": ": bool = False"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "motion_max_seq_length", "val": ": int = 32"}, {"name": "motion_num_attention_heads", "val": ": int = 8"}, {"name": "concat_conditioning_mask", "val": ": bool = True"}, {"name": "use_simplified_condition_embedding", "val": ": bool = True"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to 4) --
  The number of channels in the input sample.
- **conditioning_channels** (`int`, defaults to 4) --
  The number of input channels in the controlnet conditional embedding module. If
  `concat_conditioning_mask` is `True`, the value provided here is incremented by 1.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, defaults to 0) --
  The frequency shift to apply to the time embedding.
- **down_block_types** (`tuple[str]`, defaults to `("CrossAttnDownBlockMotion", "CrossAttnDownBlockMotion", "CrossAttnDownBlockMotion", "DownBlockMotion")`) --
  The tuple of downsample blocks to use.
- **only_cross_attention** (`Union[bool, Tuple[bool]]`, defaults to `False`) --
- **block_out_channels** (`tuple[int]`, defaults to `(320, 640, 1280, 1280)`) --
  The tuple of output channels for each block.
- **layers_per_block** (`int`, defaults to 2) --
  The number of layers per block.
- **downsample_padding** (`int`, defaults to 1) --
  The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, defaults to 1) --
  The scale factor to use for the mid block.
- **act_fn** (`str`, defaults to "silu") --
  The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) --
  The number of groups to use for the normalization. If `None`, normalization and activation layers are
  skipped in post-processing.
- **norm_eps** (`float`, defaults to 1e-5) --
  The epsilon to use for the normalization.
- **cross_attention_dim** (`int`, defaults to 768) --
  The dimension of the cross attention features.
- **transformer_layers_per_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) --
  The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for
  `~models.unet_2d_blocks.CrossAttnDownBlock2D`, `~models.unet_2d_blocks.CrossAttnUpBlock2D`,
  `~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`.
- **transformer_layers_per_mid_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) --
  The number of transformer layers to use in each layer in the middle block.
- **attention_head_dim** (`int` or `Tuple[int]`, defaults to 8) --
  The dimension of the attention heads.
- **num_attention_heads** (`int` or `Tuple[int]`, *optional*) --
  The number of heads to use for multi-head attention.
- **use_linear_projection** (`bool`, defaults to `False`) --
- **upcast_attention** (`bool`, defaults to `False`) --
- **resnet_time_scale_shift** (`str`, defaults to `"default"`) --
  Time scale shift config for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **conditioning_embedding_out_channels** (`Tuple[int]`, defaults to `(16, 32, 96, 256)`) --
  The tuple of output channels for each block in the `conditioning_embedding` layer.
- **global_pool_conditions** (`bool`, defaults to `False`) --
  TODO(Patrick) - unused parameter
- **controlnet_conditioning_channel_order** (`str`, defaults to `rgb`) --
- **motion_max_seq_length** (`int`, defaults to `32`) --
  The maximum sequence length to use in the motion module.
- **motion_num_attention_heads** (`int` or `Tuple[int]`, defaults to `8`) --
  The number of heads to use in each attention layer of the motion module.
- **concat_conditioning_mask** (`bool`, defaults to `True`) --
- **use_simplified_condition_embedding** (`bool`, defaults to `True`) --</paramsdesc><paramgroups>0</paramgroups></docstring>

A SparseControlNet model as described in [SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion
Models](https://huggingface.co/papers/2311.16933).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.SparseControlNetModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L593</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "controlnet_cond", "val": ": Tensor"}, {"name": "conditioning_scale", "val": ": float = 1.0"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "conditioning_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor.
- **timestep** (`Union[torch.Tensor, float, int]`) --
  The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states.
- **controlnet_cond** (`torch.Tensor`) --
  The conditional input tensor of shape `(batch_size, sequence_length, hidden_size)`.
- **conditioning_scale** (`float`, defaults to `1.0`) --
  The scale factor for ControlNet outputs.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
  Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
  Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the
  timestep_embedding passed through the `self.time_embedding` layer to obtain the final timestep
  embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **added_cond_kwargs** (`dict`) --
  Additional conditions for the Stable Diffusion XL UNet.
- **cross_attention_kwargs** (`dict[str]`, *optional*, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttnProcessor`.
- **guess_mode** (`bool`, defaults to `False`) --
  In this mode, the ControlNet encoder tries its best to recognize the content of the input even if
  you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a `ControlNetOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`ControlNetOutput` **or** `tuple`</rettype><retdesc>If `return_dict` is `True`, a `ControlNetOutput` is returned, otherwise a tuple is
returned where the first element is the sample tensor.</retdesc></docstring>

The [SparseControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sparsectrl#diffusers.SparseControlNetModel) forward method.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_unet</name><anchor>diffusers.SparseControlNetModel.from_unet</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L387</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "load_weights_from_unet", "val": ": bool = True"}, {"name": "conditioning_channels", "val": ": int = 3"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
  The UNet model weights to copy to the [SparseControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sparsectrl#diffusers.SparseControlNetModel). All configuration options are also
  copied where applicable.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a [SparseControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sparsectrl#diffusers.SparseControlNetModel) from [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel).
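
A minimal sketch of `from_unet`, assuming an SD 1.5-style UNet checkpoint (used here only as an example):

```python
from diffusers import SparseControlNetModel, UNet2DConditionModel

# Example UNet checkpoint, shown for illustration
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)

# Copies the UNet's configuration and, by default, its weights into the new controlnet
controlnet = SparseControlNetModel.from_unet(unet)
```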




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attention_slice</name><anchor>diffusers.SparseControlNetModel.set_attention_slice</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L528</source><parameters>[{"name": "slice_size", "val": ": typing.Union[str, int, typing.List[int]]"}]</parameters><paramsdesc>- **slice_size** (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`) --
  When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
  `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.
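
For instance, a short sketch using the documented argument values:

```python
import torch
from diffusers import SparseControlNetModel

controlnet = SparseControlNetModel.from_pretrained(
    "guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16
)

# "auto" halves the input to the attention heads; an integer sets the slice size directly
controlnet.set_attention_slice("auto")
```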




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.SparseControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L477</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.SparseControlNetModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sparsectrl.py#L512</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div></div>

## SparseControlNetOutput[[diffusers.models.controlnet_sparsectrl.SparseControlNetOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.controlnet_sparsectrl.SparseControlNetOutput</name><anchor>diffusers.models.controlnet_sparsectrl.SparseControlNetOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnet_sparsectrl.py#L30</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet_sparsectrl.md" />

### PriorTransformer
https://huggingface.co/docs/diffusers/main/api/models/prior_transformer.md

# PriorTransformer

The Prior Transformer was originally introduced in [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://huggingface.co/papers/2204.06125) by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process.

The abstract from the paper is:

*Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.*
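
For reference, a pretrained prior can be loaded directly as a `PriorTransformer`; the Kandinsky 2.1 prior below is used only as an example checkpoint.

```python
import torch
from diffusers import PriorTransformer

prior = PriorTransformer.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", subfolder="prior", torch_dtype=torch.float16
)

print(prior.config.embedding_dim)  # dimensionality of the predicted CLIP image embeddings
```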

## PriorTransformer[[diffusers.PriorTransformer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PriorTransformer</name><anchor>diffusers.PriorTransformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/prior_transformer.py#L36</source><parameters>[{"name": "num_attention_heads", "val": ": int = 32"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "num_layers", "val": ": int = 20"}, {"name": "embedding_dim", "val": ": int = 768"}, {"name": "num_embeddings", "val": " = 77"}, {"name": "additional_embeddings", "val": " = 4"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "time_embed_act_fn", "val": ": str = 'silu'"}, {"name": "norm_in_type", "val": ": typing.Optional[str] = None"}, {"name": "embedding_proj_norm_type", "val": ": typing.Optional[str] = None"}, {"name": "encoder_hid_proj_type", "val": ": typing.Optional[str] = 'linear'"}, {"name": "added_emb_type", "val": ": typing.Optional[str] = 'prd'"}, {"name": "time_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "embedding_proj_dim", "val": ": typing.Optional[int] = None"}, {"name": "clip_embed_dim", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, *optional*, defaults to 32) -- The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, *optional*, defaults to 64) -- The number of channels in each head.
- **num_layers** (`int`, *optional*, defaults to 20) -- The number of layers of Transformer blocks to use.
- **embedding_dim** (`int`, *optional*, defaults to 768) -- The dimension of the model input `hidden_states`
- **num_embeddings** (`int`, *optional*, defaults to 77) --
  The number of embeddings of the model input `hidden_states`
- **additional_embeddings** (`int`, *optional*, defaults to 4) -- The number of additional tokens appended to the
  projected `hidden_states`. The actual length of the used `hidden_states` is `num_embeddings +
  additional_embeddings`.
- **dropout** (`float`, *optional*, defaults to 0.0) -- The dropout probability to use.
- **time_embed_act_fn** (`str`, *optional*, defaults to 'silu') --
  The activation function to use to create timestep embeddings.
- **norm_in_type** (`str`, *optional*, defaults to None) -- The normalization layer to apply on hidden states before
  passing to Transformer blocks. Set it to `None` if normalization is not needed.
- **embedding_proj_norm_type** (`str`, *optional*, defaults to None) --
  The normalization layer to apply on the input `proj_embedding`. Set it to `None` if normalization is not
  needed.
- **encoder_hid_proj_type** (`str`, *optional*, defaults to `linear`) --
  The projection layer to apply on the input `encoder_hidden_states`. Set it to `None` if
  `encoder_hidden_states` is `None`.
- **added_emb_type** (`str`, *optional*, defaults to `prd`) -- Additional embeddings to condition the model.
  Choose from `prd` or `None`. If `prd`, a token indicating the (quantized) dot product between the text
  embedding and image embedding is prepended, as proposed in the unCLIP paper
  (https://huggingface.co/papers/2204.06125). If `None`, no additional embeddings are prepended.
- **time_embed_dim** (`int`, *optional*, defaults to None) -- The dimension of timestep embeddings.
  If None, will be set to `num_attention_heads * attention_head_dim`
- **embedding_proj_dim** (`int`, *optional*, defaults to None) --
  The dimension of `proj_embedding`. If None, will be set to `embedding_dim`.
- **clip_embed_dim** (`int`, *optional*, defaults to None) --
  The dimension of the output. If None, will be set to `embedding_dim`.

A Prior Transformer model.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.PriorTransformer.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/prior_transformer.py#L245</source><parameters>[{"name": "hidden_states", "val": ""}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "proj_embedding", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.BoolTensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch_size, embedding_dim)`) --
  The currently predicted image embeddings.
- **timestep** (`torch.LongTensor`) --
  Current denoising step.
- **proj_embedding** (`torch.Tensor` of shape `(batch_size, embedding_dim)`) --
  Projected embedding vector the denoising process is conditioned on.
- **encoder_hidden_states** (`torch.Tensor` of shape `(batch_size, num_embeddings, embedding_dim)`) --
  Hidden states of the text embeddings the denoising process is conditioned on.
- **attention_mask** (`torch.BoolTensor` of shape `(batch_size, num_embeddings)`) --
  Text mask for the text embeddings.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [PriorTransformerOutput](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.models.transformers.prior_transformer.PriorTransformerOutput) instead of
  a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[PriorTransformerOutput](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.models.transformers.prior_transformer.PriorTransformerOutput) or `tuple`</rettype><retdesc>If return_dict is True, a [PriorTransformerOutput](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.models.transformers.prior_transformer.PriorTransformerOutput) is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

The [PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer) forward method.
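A minimal sketch of calling the forward pass directly, assuming the default configuration (`embedding_dim=768`, `num_embeddings=77`) and the `prior` instance loaded above:

```python
import torch

batch_size = 1
hidden_states = torch.randn(batch_size, 768)              # current image-embedding prediction
proj_embedding = torch.randn(batch_size, 768)             # conditioning embedding
encoder_hidden_states = torch.randn(batch_size, 77, 768)  # text hidden states

output = prior(
    hidden_states,
    timestep=10,
    proj_embedding=proj_embedding,
    encoder_hidden_states=encoder_hidden_states,
)
predicted_image_embedding = output.predicted_image_embedding  # shape (batch_size, 768)
```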








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.PriorTransformer.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/prior_transformer.py#L195</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
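For example, the same processor can be applied to every attention layer (a sketch that reuses the `prior` instance from above):

```python
from diffusers.models.attention_processor import AttnProcessor2_0

# set one processor instance for all Attention layers
prior.set_attn_processor(AttnProcessor2_0())
```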




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.PriorTransformer.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/prior_transformer.py#L230</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.
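For example (again assuming the `prior` instance loaded above):

```python
# revert any custom processors back to the default attention implementation
prior.set_default_attn_processor()
```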


</div></div>

## PriorTransformerOutput[[diffusers.models.transformers.prior_transformer.PriorTransformerOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.transformers.prior_transformer.PriorTransformerOutput</name><anchor>diffusers.models.transformers.prior_transformer.PriorTransformerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/prior_transformer.py#L24</source><parameters>[{"name": "predicted_image_embedding", "val": ": Tensor"}]</parameters><paramsdesc>- **predicted_image_embedding** (`torch.Tensor` of shape `(batch_size, embedding_dim)`) --
  The predicted CLIP image embedding conditioned on the CLIP text embedding input.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/prior_transformer.md" />

### LTXVideoTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/ltx_video_transformer3d.md

# LTXVideoTransformer3DModel

A Diffusion Transformer model for 3D data, introduced by Lightricks in [LTX](https://huggingface.co/Lightricks/LTX-Video).

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import LTXVideoTransformer3DModel

transformer = LTXVideoTransformer3DModel.from_pretrained("Lightricks/LTX-Video", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
```

## LTXVideoTransformer3DModel[[diffusers.LTXVideoTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LTXVideoTransformer3DModel</name><anchor>diffusers.LTXVideoTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_ltx.py#L385</source><parameters>[{"name": "in_channels", "val": ": int = 128"}, {"name": "out_channels", "val": ": int = 128"}, {"name": "patch_size", "val": ": int = 1"}, {"name": "patch_size_t", "val": ": int = 1"}, {"name": "num_attention_heads", "val": ": int = 32"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "cross_attention_dim", "val": ": int = 2048"}, {"name": "num_layers", "val": ": int = 28"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "qk_norm", "val": ": str = 'rms_norm_across_heads'"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-06"}, {"name": "caption_channels", "val": ": int = 4096"}, {"name": "attention_bias", "val": ": bool = True"}, {"name": "attention_out_bias", "val": ": bool = True"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `128`) --
  The number of channels in the input.
- **out_channels** (`int`, defaults to `128`) --
  The number of channels in the output.
- **patch_size** (`int`, defaults to `1`) --
  The size of the spatial patches to use in the patch embedding layer.
- **patch_size_t** (`int`, defaults to `1`) --
  The size of the temporal patches to use in the patch embedding layer.
- **num_attention_heads** (`int`, defaults to `32`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `64`) --
  The number of channels in each head.
- **cross_attention_dim** (`int`, defaults to `2048`) --
  The number of channels for cross attention heads.
- **num_layers** (`int`, defaults to `28`) --
  The number of layers of Transformer blocks to use.
- **activation_fn** (`str`, defaults to `"gelu-approximate"`) --
  Activation function to use in feed-forward.
- **qk_norm** (`str`, defaults to `"rms_norm_across_heads"`) --
  The normalization layer to use.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data used in [LTX](https://huggingface.co/Lightricks/LTX-Video).




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/ltx_video_transformer3d.md" />

### CosmosTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/cosmos_transformer3d.md

# CosmosTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [Cosmos World Foundation Model Platform for Physical AI](https://huggingface.co/papers/2501.03575) by NVIDIA.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import CosmosTransformer3DModel

transformer = CosmosTransformer3DModel.from_pretrained("nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## CosmosTransformer3DModel[[diffusers.CosmosTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CosmosTransformer3DModel</name><anchor>diffusers.CosmosTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_cosmos.py#L387</source><parameters>[{"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "num_attention_heads", "val": ": int = 32"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_layers", "val": ": int = 28"}, {"name": "mlp_ratio", "val": ": float = 4.0"}, {"name": "text_embed_dim", "val": ": int = 1024"}, {"name": "adaln_lora_dim", "val": ": int = 256"}, {"name": "max_size", "val": ": typing.Tuple[int, int, int] = (128, 240, 240)"}, {"name": "patch_size", "val": ": typing.Tuple[int, int, int] = (1, 2, 2)"}, {"name": "rope_scale", "val": ": typing.Tuple[float, float, float] = (2.0, 1.0, 1.0)"}, {"name": "concat_padding_mask", "val": ": bool = True"}, {"name": "extra_pos_embed_type", "val": ": typing.Optional[str] = 'learnable'"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **out_channels** (`int`, defaults to `16`) --
  The number of channels in the output.
- **num_attention_heads** (`int`, defaults to `32`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of channels in each attention head.
- **num_layers** (`int`, defaults to `28`) --
  The number of layers of transformer blocks to use.
- **mlp_ratio** (`float`, defaults to `4.0`) --
  The ratio of the hidden layer size to the input size in the feedforward network.
- **text_embed_dim** (`int`, defaults to `1024`) --
  Input dimension of text embeddings from the text encoder.
- **adaln_lora_dim** (`int`, defaults to `256`) --
  The hidden dimension of the Adaptive LayerNorm LoRA layer.
- **max_size** (`Tuple[int, int, int]`, defaults to `(128, 240, 240)`) --
  The maximum size of the input latent tensors in the temporal, height, and width dimensions.
- **patch_size** (`Tuple[int, int, int]`, defaults to `(1, 2, 2)`) --
  The patch size to use for patchifying the input latent tensors in the temporal, height, and width
  dimensions.
- **rope_scale** (`Tuple[float, float, float]`, defaults to `(2.0, 1.0, 1.0)`) --
  The scaling factor to use for RoPE in the temporal, height, and width dimensions.
- **concat_padding_mask** (`bool`, defaults to `True`) --
  Whether to concatenate the padding mask to the input latent tensors.
- **extra_pos_embed_type** (`str`, *optional*, defaults to `learnable`) --
  The type of extra positional embeddings to use. Can be one of `None` or `learnable`.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data used in [Cosmos](https://github.com/NVIDIA/Cosmos).




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/cosmos_transformer3d.md" />

### CogView4Transformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/cogview4_transformer2d.md

# CogView4Transformer2DModel

A Diffusion Transformer model for 2D data from CogView4.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained("THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
```

## CogView4Transformer2DModel[[diffusers.CogView4Transformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogView4Transformer2DModel</name><anchor>diffusers.CogView4Transformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_cogview4.py#L619</source><parameters>[{"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "num_layers", "val": ": int = 30"}, {"name": "attention_head_dim", "val": ": int = 40"}, {"name": "num_attention_heads", "val": ": int = 64"}, {"name": "text_embed_dim", "val": ": int = 4096"}, {"name": "time_embed_dim", "val": ": int = 512"}, {"name": "condition_dim", "val": ": int = 256"}, {"name": "pos_embed_max_size", "val": ": int = 128"}, {"name": "sample_size", "val": ": int = 128"}, {"name": "rope_axes_dim", "val": ": typing.Tuple[int, int] = (256, 256)"}]</parameters><paramsdesc>- **patch_size** (`int`, defaults to `2`) --
  The size of the patches to use in the patch embedding layer.
- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **num_layers** (`int`, defaults to `30`) --
  The number of layers of Transformer blocks to use.
- **attention_head_dim** (`int`, defaults to `40`) --
  The number of channels in each head.
- **num_attention_heads** (`int`, defaults to `64`) --
  The number of heads to use for multi-head attention.
- **out_channels** (`int`, defaults to `16`) --
  The number of channels in the output.
- **text_embed_dim** (`int`, defaults to `4096`) --
  Input dimension of text embeddings from the text encoder.
- **time_embed_dim** (`int`, defaults to `512`) --
  Output dimension of timestep embeddings.
- **condition_dim** (`int`, defaults to `256`) --
  The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size,
  crop_coords).
- **pos_embed_max_size** (`int`, defaults to `128`) --
  The maximum resolution of the positional embeddings, from which slices of shape `H x W` are taken and added
  to input patched latents, where `H` and `W` are the latent height and width respectively. A value of 128
  means that the maximum supported height and width for image generation is `128 * vae_scale_factor *
  patch_size => 128 * 8 * 2 => 2048`.
- **sample_size** (`int`, defaults to `128`) --
  The base resolution of input latents. If height/width is not provided during generation, this value is used
  to determine the resolution as `sample_size * vae_scale_factor => 128 * 8 => 1024`</paramsdesc><paramgroups>0</paramgroups></docstring>




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/cogview4_transformer2d.md" />

### HunyuanDiT2DControlNetModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet_hunyuandit.md

# HunyuanDiT2DControlNetModel

HunyuanDiT2DControlNetModel is an implementation of ControlNet for [Hunyuan-DiT](https://huggingface.co/papers/2405.08748).

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Hunyuan-DiT generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This code is implemented by the Tencent Hunyuan team. You can find pre-trained checkpoints for Hunyuan-DiT ControlNets on [Tencent Hunyuan](https://huggingface.co/Tencent-Hunyuan).

## Example For Loading HunyuanDiT2DControlNetModel

```py
import torch
from diffusers import HunyuanDiT2DControlNetModel

controlnet = HunyuanDiT2DControlNetModel.from_pretrained("Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Pose", torch_dtype=torch.float16)
```
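The ControlNet is usually passed to a pipeline rather than called on its own. A hedged sketch is shown below; the pipeline class and base checkpoint (`HunyuanDiTControlNetPipeline`, `Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers`) are assumptions based on the Diffusers HunyuanDiT integration:

```py
import torch
from diffusers import HunyuanDiT2DControlNetModel, HunyuanDiTControlNetPipeline

# assumed checkpoints; swap in the ControlNet and base model that match your use case
controlnet = HunyuanDiT2DControlNetModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Pose", torch_dtype=torch.float16
)
pipe = HunyuanDiTControlNetPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
```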

## HunyuanDiT2DControlNetModel[[diffusers.HunyuanDiT2DControlNetModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanDiT2DControlNetModel</name><anchor>diffusers.HunyuanDiT2DControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_hunyuan.py#L41</source><parameters>[{"name": "conditioning_channels", "val": ": int = 3"}, {"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 88"}, {"name": "in_channels", "val": ": typing.Optional[int] = None"}, {"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "sample_size", "val": " = 32"}, {"name": "hidden_size", "val": " = 1152"}, {"name": "transformer_num_layers", "val": ": int = 40"}, {"name": "mlp_ratio", "val": ": float = 4.0"}, {"name": "cross_attention_dim", "val": ": int = 1024"}, {"name": "cross_attention_dim_t5", "val": ": int = 2048"}, {"name": "pooled_projection_dim", "val": ": int = 1024"}, {"name": "text_len", "val": ": int = 77"}, {"name": "text_len_t5", "val": ": int = 256"}, {"name": "use_style_cond_and_image_meta_size", "val": ": bool = True"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.HunyuanDiT2DControlNetModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_hunyuan.py#L216</source><parameters>[{"name": "hidden_states", "val": ""}, {"name": "timestep", "val": ""}, {"name": "controlnet_cond", "val": ": Tensor"}, {"name": "conditioning_scale", "val": ": float = 1.0"}, {"name": "encoder_hidden_states", "val": " = None"}, {"name": "text_embedding_mask", "val": " = None"}, {"name": "encoder_hidden_states_t5", "val": " = None"}, {"name": "text_embedding_mask_t5", "val": " = None"}, {"name": "image_meta_size", "val": " = None"}, {"name": "style", "val": " = None"}, {"name": "image_rotary_emb", "val": " = None"}, {"name": "return_dict", "val": " = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch size, dim, height, width)`) --
  The input tensor.
- **timestep** ( `torch.LongTensor`, *optional*) --
  Used to indicate denoising step.
- **controlnet_cond** ( `torch.Tensor` ) --
  The conditioning input to ControlNet.
- **conditioning_scale** ( `float` ) --
  Indicate the conditioning scale.
- **encoder_hidden_states** ( `torch.Tensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. This is the output of `BertModel`.
- **text_embedding_mask** (`torch.Tensor`) --
  An attention mask of shape `(batch, key_tokens)` applied to `encoder_hidden_states`. This is the output
  of `BertModel`.
- **encoder_hidden_states_t5** ( `torch.Tensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. This is the output of the T5 text encoder.
- **text_embedding_mask_t5** (`torch.Tensor`) --
  An attention mask of shape `(batch, key_tokens)` applied to `encoder_hidden_states_t5`. This is the output
  of the T5 text encoder.
- **image_meta_size** (`torch.Tensor`) --
  Conditional embedding indicating the image sizes.
- **style** (`torch.Tensor`) --
  Conditional embedding indicating the style.
- **image_rotary_emb** (`torch.Tensor`) --
  The image rotary embeddings to apply on query and key tensors during attention calculation.
- **return_dict** (`bool`) --
  Whether to return a dictionary.</paramsdesc><paramgroups>0</paramgroups></docstring>

The [HunyuanDiT2DControlNetModel](/docs/diffusers/main/en/api/models/controlnet_hunyuandit#diffusers.HunyuanDiT2DControlNetModel) forward method.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.HunyuanDiT2DControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_hunyuan.py#L142</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the
  corresponding cross attention processor. This is strongly recommended when setting trainable attention
  processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet_hunyuandit.md" />

### HiDreamImageTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/hidream_image_transformer.md

# HiDreamImageTransformer2DModel

A Transformer model for image-like data from [HiDream-I1](https://huggingface.co/HiDream-ai).

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import HiDreamImageTransformer2DModel

transformer = HiDreamImageTransformer2DModel.from_pretrained("HiDream-ai/HiDream-I1-Full", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## Loading GGUF quantized checkpoints for HiDream-I1

GGUF checkpoints for the `HiDreamImageTransformer2DModel` can be loaded using `~FromOriginalModelMixin.from_single_file`.

```python
import torch
from diffusers import GGUFQuantizationConfig, HiDreamImageTransformer2DModel

ckpt_path = "https://huggingface.co/city96/HiDream-I1-Dev-gguf/blob/main/hidream-i1-dev-Q2_K.gguf"
transformer = HiDreamImageTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16
)
```

## HiDreamImageTransformer2DModel[[diffusers.HiDreamImageTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HiDreamImageTransformer2DModel</name><anchor>diffusers.HiDreamImageTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_hidream_image.py#L605</source><parameters>[{"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 16"}, {"name": "num_single_layers", "val": ": int = 32"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_attention_heads", "val": ": int = 20"}, {"name": "caption_channels", "val": ": typing.List[int] = None"}, {"name": "text_emb_dim", "val": ": int = 2048"}, {"name": "num_routed_experts", "val": ": int = 4"}, {"name": "num_activated_experts", "val": ": int = 2"}, {"name": "axes_dims_rope", "val": ": typing.Tuple[int, int] = (32, 32)"}, {"name": "max_resolution", "val": ": typing.Tuple[int, int] = (128, 128)"}, {"name": "llama_layers", "val": ": typing.List[int] = None"}, {"name": "force_inference_output", "val": ": bool = False"}]</parameters></docstring>


</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/hidream_image_transformer.md" />

### AutoencoderKLMochi
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_mochi.md

# AutoencoderKLMochi

The 3D variational autoencoder (VAE) model with KL loss used in [Mochi](https://github.com/genmoai/models) was introduced in [Mochi 1 Preview](https://huggingface.co/genmo/mochi-1-preview) by Genmo.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLMochi

vae = AutoencoderKLMochi.from_pretrained("genmo/mochi-1-preview", subfolder="vae", torch_dtype=torch.float32).to("cuda")
```

## AutoencoderKLMochi[[diffusers.AutoencoderKLMochi]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLMochi</name><anchor>diffusers.AutoencoderKLMochi</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L660</source><parameters>[{"name": "in_channels", "val": ": int = 15"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "encoder_block_out_channels", "val": ": typing.Tuple[int] = (64, 128, 256, 384)"}, {"name": "decoder_block_out_channels", "val": ": typing.Tuple[int] = (128, 256, 512, 768)"}, {"name": "latent_channels", "val": ": int = 12"}, {"name": "layers_per_block", "val": ": typing.Tuple[int, ...] = (3, 3, 4, 6, 3)"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "temporal_expansions", "val": ": typing.Tuple[int, ...] = (1, 2, 3)"}, {"name": "spatial_expansions", "val": ": typing.Tuple[int, ...] = (2, 2, 2)"}, {"name": "add_attention_block", "val": ": typing.Tuple[bool, ...] = (False, True, True, True, True)"}, {"name": "latents_mean", "val": ": typing.Tuple[float, ...] = (-0.06730895953510081, -0.038011381506090416, -0.07477820912866141, -0.05565264470995561, 0.012767231469026969, -0.04703542746246419, 0.043896967884726704, -0.09346305707025976, -0.09918314763016893, -0.008729793427399178, -0.011931556316503654, -0.0321993391887285)"}, {"name": "latents_std", "val": ": typing.Tuple[float, ...] = (0.9263795028493863, 0.9248894543193766, 0.9393059390890617, 0.959253732819592, 0.8244560132752793, 0.917259975397747, 0.9294154431013696, 1.3720942357788521, 0.881393668867029, 0.9168315692124348, 0.9185249279345552, 0.9274757570805041)"}, {"name": "scaling_factor", "val": ": float = 1.0"}]</parameters><paramsdesc>- **in_channels** (int, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (int, *optional*, defaults to 3) -- Number of channels in the output.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) --
  Tuple of block output channels.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **scaling_factor** (`float`, *optional*, defaults to `1.0`) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.</paramsdesc><paramgroups>0</paramgroups></docstring>

A VAE model with KL loss for encoding images into latents and decoding latent representations into images. Used in
[Mochi 1 preview](https://github.com/genmoai/models).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLMochi.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLMochi.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L835</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLMochi.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L821</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLMochi.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L828</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLMochi.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L791</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are
  no tiling artifacts produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
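For example (a sketch assuming the `vae` loaded in the snippet above):

```python
# enable tiled decoding with the default tile sizes; pass the tile_sample_* arguments to tune them
vae.enable_tiling()

# ... run decoding ...

# restore single-pass decoding
vae.disable_tiling()
```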




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKLMochi.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L1037</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of images using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKLMochi.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_mochi.py#L980</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of videos.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The latent representation of the encoded videos.</retdesc></docstring>
Encode a batch of videos using a tiled encoder.








</div></div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_mochi.md" />

### UNet2DConditionModel
https://huggingface.co/docs/diffusers/main/api/models/unet2d-cond.md

# UNet2DConditionModel

The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model.

The abstract from the paper is:

*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
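As a sketch, the conditional 2D UNet used by Stable Diffusion can be loaded from the `unet` subfolder of a pipeline repository (the checkpoint name below is an assumption; any Stable Diffusion-style repository works):

```python
import torch
from diffusers import UNet2DConditionModel

# assumed checkpoint; the UNet lives in the "unet" subfolder of the pipeline repository
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
```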

## UNet2DConditionModel[[diffusers.UNet2DConditionModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UNet2DConditionModel</name><anchor>diffusers.UNet2DConditionModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L70</source><parameters>[{"name": "sample_size", "val": ": typing.Union[int, typing.Tuple[int, int], NoneType] = None"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": int = 4"}, {"name": "center_input_sample", "val": ": bool = False"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "down_block_types", "val": ": typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D')"}, {"name": "mid_block_type", "val": ": typing.Optional[str] = 'UNetMidBlock2DCrossAttn'"}, {"name": "up_block_types", "val": ": typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D')"}, {"name": "only_cross_attention", "val": ": typing.Union[bool, typing.Tuple[bool]] = False"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": typing.Union[int, typing.Tuple[int]] = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": typing.Union[int, typing.Tuple[int]] = 1280"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple]] = 1"}, {"name": "reverse_transformer_layers_per_block", "val": ": typing.Optional[typing.Tuple[typing.Tuple[int]]] = None"}, {"name": "encoder_hid_dim", "val": ": typing.Optional[int] = None"}, {"name": "encoder_hid_dim_type", "val": ": typing.Optional[str] = None"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int]] = 8"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = None"}, {"name": "dual_cross_attention", "val": ": bool = False"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "class_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_time_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "num_class_embeds", "val": ": typing.Optional[int] = None"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "resnet_skip_time_act", "val": ": bool = False"}, {"name": "resnet_out_scale_factor", "val": ": float = 1.0"}, {"name": "time_embedding_type", "val": ": str = 'positional'"}, {"name": "time_embedding_dim", "val": ": typing.Optional[int] = None"}, {"name": "time_embedding_act_fn", "val": ": typing.Optional[str] = None"}, {"name": "timestep_post_act", "val": ": typing.Optional[str] = None"}, {"name": "time_cond_proj_dim", "val": ": typing.Optional[int] = None"}, {"name": "conv_in_kernel", "val": ": int = 3"}, {"name": "conv_out_kernel", "val": ": int = 3"}, {"name": "projection_class_embeddings_input_dim", "val": ": typing.Optional[int] = None"}, {"name": "attention_type", "val": ": str = 'default'"}, {"name": "class_embeddings_concat", "val": ": bool = False"}, {"name": "mid_block_only_cross_attention", "val": 
": typing.Optional[bool] = None"}, {"name": "cross_attention_norm", "val": ": typing.Optional[str] = None"}, {"name": "addition_embed_type_num_heads", "val": ": int = 64"}]</parameters><paramsdesc>- **sample_size** (`int` or `Tuple[int, int]`, *optional*, defaults to `None`) --
  Height and width of input/output sample.
- **in_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the input sample.
- **out_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the output.
- **center_input_sample** (`bool`, *optional*, defaults to `False`) -- Whether to center the input sample.
- **flip_sin_to_cos** (`bool`, *optional*, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, *optional*, defaults to 0) -- The frequency shift to apply to the time embedding.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`) --
  The tuple of downsample blocks to use.
- **mid_block_type** (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`) --
  Block type for middle of UNet, it can be one of `UNetMidBlock2DCrossAttn`, `UNetMidBlock2D`, or
  `UNetMidBlock2DSimpleCrossAttn`. If `None`, the mid block layer is skipped.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`) --
  The tuple of upsample blocks to use.
- **only_cross_attention** (`bool` or `Tuple[bool]`, *optional*, defaults to `False`) --
  Whether to include self-attention in the basic transformer blocks, see
  `BasicTransformerBlock`.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`) --
  The tuple of output channels for each block.
- **layers_per_block** (`int`, *optional*, defaults to 2) -- The number of layers per block.
- **downsample_padding** (`int`, *optional*, defaults to 1) -- The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, *optional*, defaults to 1.0) -- The scale factor to use for the mid block.
- **dropout** (`float`, *optional*, defaults to 0.0) -- The dropout probability to use.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) -- The number of groups to use for the normalization.
  If `None`, normalization and activation layers are skipped in post-processing.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) -- The epsilon to use for the normalization.
- **cross_attention_dim** (`int` or `Tuple[int]`, *optional*, defaults to 1280) --
  The dimension of the cross attention features.
- **transformer_layers_per_block** (`int`, `Tuple[int]`, or `Tuple[Tuple]` , *optional*, defaults to 1) --
  The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for
  `CrossAttnDownBlock2D`, `CrossAttnUpBlock2D`,
  `UNetMidBlock2DCrossAttn`.
- **reverse_transformer_layers_per_block** (`Tuple[Tuple]`, *optional*, defaults to `None`) --
  The number of transformer blocks of type `BasicTransformerBlock`, in the upsampling
  blocks of the U-Net. Only relevant if `transformer_layers_per_block` is of type `Tuple[Tuple]` and for
  `CrossAttnDownBlock2D`, `CrossAttnUpBlock2D`,
  `UNetMidBlock2DCrossAttn`.
- **encoder_hid_dim** (`int`, *optional*, defaults to None) --
  If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim`
  dimension to `cross_attention_dim`.
- **encoder_hid_dim_type** (`str`, *optional*, defaults to `None`) --
  If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text
  embeddings of dimension `cross_attention_dim` according to `encoder_hid_dim_type`.
- **attention_head_dim** (`int`, *optional*, defaults to 8) -- The dimension of the attention heads.
- **num_attention_heads** (`int`, *optional*) --
  The number of attention heads. If not defined, defaults to `attention_head_dim`.
- **resnet_time_scale_shift** (`str`, *optional*, defaults to `"default"`) -- Time scale shift config
  for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **class_embed_type** (`str`, *optional*, defaults to `None`) --
  The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`,
  `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- **addition_embed_type** (`str`, *optional*, defaults to `None`) --
  Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or
  "text". "text" will use the `TextTimeEmbedding` layer.
- **addition_time_embed_dim** (`int`, *optional*, defaults to `None`) --
  Dimension for the timestep embeddings.
- **num_class_embeds** (`int`, *optional*, defaults to `None`) --
  Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing
  class conditioning with `class_embed_type` equal to `None`.
- **time_embedding_type** (`str`, *optional*, defaults to `positional`) --
  The type of position embedding to use for timesteps. Choose from `positional` or `fourier`.
- **time_embedding_dim** (`int`, *optional*, defaults to `None`) --
  An optional override for the dimension of the projected time embedding.
- **time_embedding_act_fn** (`str`, *optional*, defaults to `None`) --
  Optional activation function to use only once on the time embeddings before they are passed to the rest of
  the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`.
- **timestep_post_act** (`str`, *optional*, defaults to `None`) --
  The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`.
- **time_cond_proj_dim** (`int`, *optional*, defaults to `None`) --
  The dimension of `cond_proj` layer in the timestep embedding.
- **conv_in_kernel** (`int`, *optional*, defaults to `3`) -- The kernel size of the `conv_in` layer.
- **conv_out_kernel** (`int`, *optional*, defaults to `3`) -- The kernel size of the `conv_out` layer.
- **projection_class_embeddings_input_dim** (`int`, *optional*) -- The dimension of the `class_labels` input when
  `class_embed_type="projection"`. Required when `class_embed_type="projection"`.
- **class_embeddings_concat** (`bool`, *optional*, defaults to `False`) -- Whether to concatenate the time
  embeddings with the class embeddings.
- **mid_block_only_cross_attention** (`bool`, *optional*, defaults to `None`) --
  Whether to use cross attention with the mid block when using the `UNetMidBlock2DSimpleCrossAttn`. If
  `only_cross_attention` is given as a single boolean and `mid_block_only_cross_attention` is `None`, the
  `only_cross_attention` value is used as the value for `mid_block_only_cross_attention`. Defaults to `False`
  otherwise.</paramsdesc><paramgroups>0</paramgroups></docstring>

A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
shaped output.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).
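
As a quick sanity check, the model can be loaded directly from the `unet` subfolder of a text-to-image checkpoint. The snippet below is a minimal sketch; the repository id is illustrative, and any Stable Diffusion 1.x-style checkpoint with a compatible config works.

```python
import torch
from diffusers import UNet2DConditionModel

# Illustrative checkpoint; any SD 1.x-style repo with a `unet` subfolder works.
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
print(unet.config.sample_size, unet.config.cross_attention_dim)  # e.g. 64 768 for SD 1.5
```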





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_freeu</name><anchor>diffusers.UNet2DConditionModel.disable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L861</source><parameters>[]</parameters></docstring>
Disables the FreeU mechanism.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_freeu</name><anchor>diffusers.UNet2DConditionModel.enable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L837</source><parameters>[{"name": "s1", "val": ": float"}, {"name": "s2", "val": ": float"}, {"name": "b1", "val": ": float"}, {"name": "b2", "val": ": float"}]</parameters><paramsdesc>- **s1** (`float`) --
  Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **s2** (`float`) --
  Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **b1** (`float`) -- Scaling factor for stage 1 to amplify the contributions of backbone features.
- **b2** (`float`) -- Scaling factor for stage 2 to amplify the contributions of backbone features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables the FreeU mechanism from https://huggingface.co/papers/2309.11497.

The suffixes after the scaling factors represent the stage blocks where they are being applied.

Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of values that
are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
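
As a rough sketch, FreeU can be toggled on the UNet of an already-assembled pipeline. The checkpoint id and scaling factors below are illustrative, not official recommendations; consult the repository linked above for tuned per-model values.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint and factors; see the FreeU repository for tuned values.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
image = pipe("an astronaut riding a horse on mars").images[0]
pipe.unet.disable_freeu()  # restore the unmodified skip/backbone scaling
```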




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.UNet2DConditionModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L1030</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "class_labels", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "added_cond_kwargs", "val": ": typing.Optional[typing.Dict[str, torch.Tensor]] = None"}, {"name": "down_block_additional_residuals", "val": ": typing.Optional[typing.Tuple[torch.Tensor]] = None"}, {"name": "mid_block_additional_residual", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "down_intrablock_additional_residuals", "val": ": typing.Optional[typing.Tuple[torch.Tensor]] = None"}, {"name": "encoder_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor with the following shape `(batch, channel, height, width)`.
- **timestep** (`torch.Tensor` or `float` or `int`) -- The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
  Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
  Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed
  through the `self.time_embedding` layer to obtain the timestep embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **added_cond_kwargs** (`dict`, *optional*) --
  A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that
  are passed along to the UNet blocks.
- **down_block_additional_residuals** (`tuple` of `torch.Tensor`, *optional*) --
  A tuple of tensors that if specified are added to the residuals of down unet blocks.
- **mid_block_additional_residual** (`torch.Tensor`, *optional*) --
  A tensor that if specified is added to the residual of the middle unet block.
- **down_intrablock_additional_residuals** (`tuple` of `torch.Tensor`, *optional*) --
  Additional residuals to be added within the UNet down blocks, for example from T2I-Adapter side model(s).
- **encoder_attention_mask** (`torch.Tensor`) --
  A cross-attention mask of shape `(batch, sequence_length)` is applied to `encoder_hidden_states`. If
  `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias,
  which adds large negative values to the attention scores corresponding to "discard" tokens.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [UNet2DConditionOutput](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput) instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[UNet2DConditionOutput](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput) or `tuple`</rettype><retdesc>If `return_dict` is True, an [UNet2DConditionOutput](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput) is returned,
otherwise a `tuple` is returned where the first element is the sample tensor.</retdesc></docstring>

The [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) forward method.
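
The sketch below calls the model directly on random tensors; the latent and text-embedding shapes assume an SD 1.5-style config (4 latent channels, 768-dimensional cross-attention) and are illustrative only.

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
# Random inputs with SD 1.5-style shapes: 64x64 latents and 77 text tokens.
sample = torch.randn(1, unet.config.in_channels, 64, 64)
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)
with torch.no_grad():
    out = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states)
print(out.sample.shape)  # matches the input latent shape
```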








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.UNet2DConditionModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L869</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attention_slice</name><anchor>diffusers.UNet2DConditionModel.set_attention_slice</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L772</source><parameters>[{"name": "slice_size", "val": ": typing.Union[str, int, typing.List[int]] = 'auto'"}]</parameters><paramsdesc>- **slice_size** (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`) --
  When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
  `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.
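
For example, a minimal sketch (the checkpoint id is illustrative):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
unet.set_attention_slice("auto")  # compute attention in two steps per head
# Passing an int instead uses `attention_head_dim // slice_size` slices.
```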




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.UNet2DConditionModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L723</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
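
A minimal sketch of swapping in a single processor for all attention layers (the checkpoint id is illustrative):

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
# One processor instance is set on every Attention layer; a dict keyed by
# attention-module path can be passed instead to mix processors per layer.
unet.set_attn_processor(AttnProcessor2_0())
```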




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.UNet2DConditionModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L757</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.UNet2DConditionModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L890</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

## UNet2DConditionOutput[[diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput</name><anchor>diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py#L58</source><parameters>[{"name": "sample", "val": ": Tensor = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/unet2d-cond.md" />

### SanaControlNetModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet_sana.md

# SanaControlNetModel

The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This model was contributed by [ishan24](https://huggingface.co/ishan24). ❤️
The original codebase can be found at [NVlabs/Sana](https://github.com/NVlabs/Sana), and you can find official ControlNet checkpoints on [Efficient-Large-Model's](https://huggingface.co/Efficient-Large-Model) Hub profile.
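
The model can be loaded with `from_pretrained`. The repository id below is a placeholder; substitute an official ControlNet checkpoint from the Hub profile linked above.

```python
import torch
from diffusers import SanaControlNetModel

# Placeholder repo id; replace it with an official Sana ControlNet checkpoint.
controlnet = SanaControlNetModel.from_pretrained(
    "<sana-controlnet-checkpoint>", torch_dtype=torch.float16
)
```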

## SanaControlNetModel[[diffusers.SanaControlNetModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaControlNetModel</name><anchor>diffusers.SanaControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sana.py#L41</source><parameters>[{"name": "in_channels", "val": ": int = 32"}, {"name": "out_channels", "val": ": typing.Optional[int] = 32"}, {"name": "num_attention_heads", "val": ": int = 70"}, {"name": "attention_head_dim", "val": ": int = 32"}, {"name": "num_layers", "val": ": int = 7"}, {"name": "num_cross_attention_heads", "val": ": typing.Optional[int] = 20"}, {"name": "cross_attention_head_dim", "val": ": typing.Optional[int] = 112"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = 2240"}, {"name": "caption_channels", "val": ": int = 2304"}, {"name": "mlp_ratio", "val": ": float = 2.5"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "attention_bias", "val": ": bool = False"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "patch_size", "val": ": int = 1"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-06"}, {"name": "interpolation_scale", "val": ": typing.Optional[int] = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.SanaControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sana.py#L146</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div></div>

## SanaControlNetOutput[[diffusers.models.controlnets.controlnet_sana.SanaControlNetOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.controlnets.controlnet_sana.SanaControlNetOutput</name><anchor>diffusers.models.controlnets.controlnet_sana.SanaControlNetOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sana.py#L37</source><parameters>[{"name": "controlnet_block_samples", "val": ": typing.Tuple[torch.Tensor]"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet_sana.md" />

### UNet3DConditionModel
https://huggingface.co/docs/diffusers/main/api/models/unet3d-cond.md

# UNet3DConditionModel

The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model.

The abstract from the paper is:

*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*

## UNet3DConditionModel[[diffusers.UNet3DConditionModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UNet3DConditionModel</name><anchor>diffusers.UNet3DConditionModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L61</source><parameters>[{"name": "sample_size", "val": ": typing.Optional[int] = None"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": int = 4"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D')"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D')"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": int = 1024"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int]] = 64"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = None"}, {"name": "time_cond_proj_dim", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample_size** (`int` or `Tuple[int, int]`, *optional*, defaults to `None`) --
  Height and width of input/output sample.
- **in_channels** (`int`, *optional*, defaults to 4) -- The number of channels in the input sample.
- **out_channels** (`int`, *optional*, defaults to 4) -- The number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")`) --
  The tuple of downsample blocks to use.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")`) --
  The tuple of upsample blocks to use.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`) --
  The tuple of output channels for each block.
- **layers_per_block** (`int`, *optional*, defaults to 2) -- The number of layers per block.
- **downsample_padding** (`int`, *optional*, defaults to 1) -- The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, *optional*, defaults to 1.0) -- The scale factor to use for the mid block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) -- The number of groups to use for the normalization.
  If `None`, normalization and activation layers are skipped in post-processing.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) -- The epsilon to use for the normalization.
- **cross_attention_dim** (`int`, *optional*, defaults to 1024) -- The dimension of the cross attention features.
- **attention_head_dim** (`int`, *optional*, defaults to 64) -- The dimension of the attention heads.
- **num_attention_heads** (`int`, *optional*) -- The number of attention heads.
- **time_cond_proj_dim** (`int`, *optional*, defaults to `None`) --
  The dimension of `cond_proj` layer in the timestep embedding.</paramsdesc><paramgroups>0</paramgroups></docstring>

A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample
shaped output.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).
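
As a sketch, the 3D UNet can be loaded from the `unet` subfolder of a text-to-video checkpoint; the repository id below is illustrative.

```python
import torch
from diffusers import UNet3DConditionModel

# Illustrative checkpoint with a 3D UNet in its `unet` subfolder.
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
)
```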





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_freeu</name><anchor>diffusers.UNet3DConditionModel.disable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L496</source><parameters>[]</parameters></docstring>
Disables the FreeU mechanism.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_forward_chunking</name><anchor>diffusers.UNet3DConditionModel.enable_forward_chunking</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L414</source><parameters>[{"name": "chunk_size", "val": ": typing.Optional[int] = None"}, {"name": "dim", "val": ": int = 0"}]</parameters><paramsdesc>- **chunk_size** (`int`, *optional*) --
  The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually
  over each tensor of dim=`dim`.
- **dim** (`int`, *optional*, defaults to `0`) --
  The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch)
  or dim=1 (sequence length).</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use [feed forward
chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers).
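
A minimal sketch, assuming `unet` is a loaded UNet3DConditionModel (see the loading snippet above):

```python
# Chunk the feed-forward computation over the sequence dimension to save memory.
unet.enable_forward_chunking(chunk_size=1, dim=1)
# ...run inference...
unet.disable_forward_chunking()  # restore unchunked feed-forward layers
```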




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_freeu</name><anchor>diffusers.UNet3DConditionModel.enable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L471</source><parameters>[{"name": "s1", "val": ""}, {"name": "s2", "val": ""}, {"name": "b1", "val": ""}, {"name": "b2", "val": ""}]</parameters><paramsdesc>- **s1** (`float`) --
  Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **s2** (`float`) --
  Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **b1** (`float`) -- Scaling factor for stage 1 to amplify the contributions of backbone features.
- **b2** (`float`) -- Scaling factor for stage 2 to amplify the contributions of backbone features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables the FreeU mechanism from https://huggingface.co/papers/2309.11497.

The suffixes after the scaling factors represent the stage blocks where they are being applied.

Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of values that
are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.UNet3DConditionModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L536</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "class_labels", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "down_block_additional_residuals", "val": ": typing.Optional[typing.Tuple[torch.Tensor]] = None"}, {"name": "mid_block_additional_residual", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor with the following shape `(batch, num_channels, num_frames, height, width)`.
- **timestep** (`torch.Tensor` or `float` or `int`) -- The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
  Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
  Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed
  through the `self.time_embedding` layer to obtain the timestep embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **down_block_additional_residuals** (`tuple` of `torch.Tensor`, *optional*) --
  A tuple of tensors that if specified are added to the residuals of down unet blocks.
- **mid_block_additional_residual** (`torch.Tensor`, *optional*) --
  A tensor that if specified is added to the residual of the middle unet block.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [UNet3DConditionOutput](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput) instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[UNet3DConditionOutput](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput) or `tuple`</rettype><retdesc>If `return_dict` is True, an [UNet3DConditionOutput](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput) is returned,
otherwise a `tuple` is returned where the first element is the sample tensor.</retdesc></docstring>

The [UNet3DConditionModel](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.UNet3DConditionModel) forward method.
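
The sketch below runs the 3D UNet on random tensors; the frame count, spatial size, and checkpoint id are illustrative.

```python
import torch
from diffusers import UNet3DConditionModel

unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet"
)
# The 3D UNet expects a 5D latent of shape (batch, channels, frames, height, width).
sample = torch.randn(1, unet.config.in_channels, 8, 32, 32)
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)
with torch.no_grad():
    out = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states)
print(out.sample.shape)  # matches the input latent shape
```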








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.UNet3DConditionModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L505</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attention_slice</name><anchor>diffusers.UNet3DConditionModel.set_attention_slice</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L314</source><parameters>[{"name": "slice_size", "val": ": typing.Union[str, int, typing.List[int]]"}]</parameters><paramsdesc>- **slice_size** (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`) --
  When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
  `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.UNet3DConditionModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L380</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
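A minimal sketch of both call styles (the repo id below is an assumption for illustration; any `UNet3DConditionModel` checkpoint works):

```python
import torch
from diffusers import UNet3DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

# Repo id assumed for illustration only.
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
)

# Set one processor instance for **all** Attention layers.
unet.set_attn_processor(AttnProcessor2_0())

# Or pass a dict keyed by the module path of each attention processor.
unet.set_attn_processor({name: AttnProcessor2_0() for name in unet.attn_processors.keys()})
```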




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.UNet3DConditionModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L455</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.UNet3DConditionModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L527</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

## UNet3DConditionOutput[[diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput</name><anchor>diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L49</source><parameters>[{"name": "sample", "val": ": Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, num_frames, height, width)`) --
  The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [UNet3DConditionModel](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.UNet3DConditionModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/unet3d-cond.md" />

### AutoencoderKLCogVideoX
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_cogvideox.md

# AutoencoderKLCogVideoX

The 3D variational autoencoder (VAE) model with KL loss used in [CogVideoX](https://github.com/THUDM/CogVideo) was introduced in [CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer](https://github.com/THUDM/CogVideo/blob/main/resources/CogVideoX.pdf) by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import AutoencoderKLCogVideoX

vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.float16).to("cuda")
```

## AutoencoderKLCogVideoX[[diffusers.AutoencoderKLCogVideoX]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLCogVideoX</name><anchor>diffusers.AutoencoderKLCogVideoX</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L958</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "down_block_types", "val": ": typing.Tuple[str] = ('CogVideoXDownBlock3D', 'CogVideoXDownBlock3D', 'CogVideoXDownBlock3D', 'CogVideoXDownBlock3D')"}, {"name": "up_block_types", "val": ": typing.Tuple[str] = ('CogVideoXUpBlock3D', 'CogVideoXUpBlock3D', 'CogVideoXUpBlock3D', 'CogVideoXUpBlock3D')"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (128, 256, 256, 512)"}, {"name": "latent_channels", "val": ": int = 16"}, {"name": "layers_per_block", "val": ": int = 3"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_eps", "val": ": float = 1e-06"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "temporal_compression_ratio", "val": ": float = 4"}, {"name": "sample_height", "val": ": int = 480"}, {"name": "sample_width", "val": ": int = 720"}, {"name": "scaling_factor", "val": ": float = 1.15258426"}, {"name": "shift_factor", "val": ": typing.Optional[float] = None"}, {"name": "latents_mean", "val": ": typing.Optional[typing.Tuple[float]] = None"}, {"name": "latents_std", "val": ": typing.Optional[typing.Tuple[float]] = None"}, {"name": "force_upcast", "val": ": float = True"}, {"name": "use_quant_conv", "val": ": bool = False"}, {"name": "use_post_quant_conv", "val": ": bool = False"}, {"name": "invert_scale_latents", "val": ": bool = False"}]</parameters><paramsdesc>- **in_channels** (int, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (int,  *optional*, defaults to 3) -- Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("CogVideoXDownBlock3D", "CogVideoXDownBlock3D", "CogVideoXDownBlock3D", "CogVideoXDownBlock3D")`) --
  Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("CogVideoXUpBlock3D", "CogVideoXUpBlock3D", "CogVideoXUpBlock3D", "CogVideoXUpBlock3D")`) --
  Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(128, 256, 256, 512)`) --
  Tuple of block output channels.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **sample_height** (`int`, *optional*, defaults to `480`) -- Sample input height.
- **sample_width** (`int`, *optional*, defaults to `720`) -- Sample input width.
- **scaling_factor** (`float`, *optional*, defaults to `1.15258426`) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
- **force_upcast** (`bool`, *optional*, default to `True`) --
  If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
  can be fine-tuned / trained to a lower range without losing too much precision in which case `force_upcast`
  can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix</paramsdesc><paramgroups>0</paramgroups></docstring>

A VAE model with KL loss for encoding images into latents and decoding latent representations into images. Used in
[CogVideoX](https://github.com/THUDM/CogVideo).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLCogVideoX.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLCogVideoX.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLCogVideoX.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L1141</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLCogVideoX.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L1127</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLCogVideoX.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L1134</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLCogVideoX.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L1091</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_overlap_factor_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_overlap_factor_width", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_overlap_factor_height** (`float`, *optional*) --
  The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are
  no tiling artifacts produced across the height dimension. Must be between 0 and 1. Setting a higher
  value might cause more tiles to be processed, leading to a slowdown of the decoding process.
- **tile_overlap_factor_width** (`float`, *optional*) --
  The minimum amount of overlap between two consecutive horizontal tiles. This is to ensure that there
  are no tiling artifacts produced across the width dimension. Must be between 0 and 1. Setting a higher
  value might cause more tiles to be processed, leading to a slowdown of the decoding process.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and allows
processing larger images.
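A minimal sketch of enabling the memory-saving options before decoding (the latent shape below is illustrative only):

```python
import torch
from diffusers import AutoencoderKLCogVideoX

vae = AutoencoderKLCogVideoX.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.float16
).to("cuda")

# Decode in spatial tiles and per-sample slices to reduce peak memory.
vae.enable_tiling()
vae.enable_slicing()

# Illustrative latent shape: (batch, latent_channels, frames, height // 8, width // 8).
latents = torch.randn(1, 16, 13, 60, 90, dtype=torch.float16, device="cuda")
video = vae.decode(latents).sample
```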




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKLCogVideoX.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L1345</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of images using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKLCogVideoX.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py#L1271</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of videos.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The latent representation of the encoded videos.</retdesc></docstring>
Encode a batch of images using a tiled encoder.

When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several
steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is
different from non-tiled encoding because each tile is encoded separately. To avoid tiling artifacts, the
tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the
output, but they should be much less noticeable.








</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_cogvideox.md" />

### MochiTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/mochi_transformer3d.md

# MochiTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [Mochi-1 Preview](https://huggingface.co/genmo/mochi-1-preview) by Genmo.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import MochiTransformer3DModel

transformer = MochiTransformer3DModel.from_pretrained("genmo/mochi-1-preview", subfolder="transformer", torch_dtype=torch.float16).to("cuda")
```

## MochiTransformer3DModel[[diffusers.MochiTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.MochiTransformer3DModel</name><anchor>diffusers.MochiTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_mochi.py#L309</source><parameters>[{"name": "patch_size", "val": ": int = 2"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_layers", "val": ": int = 48"}, {"name": "pooled_projection_dim", "val": ": int = 1536"}, {"name": "in_channels", "val": ": int = 12"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "qk_norm", "val": ": str = 'rms_norm'"}, {"name": "text_embed_dim", "val": ": int = 4096"}, {"name": "time_embed_dim", "val": ": int = 256"}, {"name": "activation_fn", "val": ": str = 'swiglu'"}, {"name": "max_sequence_length", "val": ": int = 256"}]</parameters><paramsdesc>- **patch_size** (`int`, defaults to `2`) --
  The size of the patches to use in the patch embedding layer.
- **num_attention_heads** (`int`, defaults to `24`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of channels in each head.
- **num_layers** (`int`, defaults to `48`) --
  The number of layers of Transformer blocks to use.
- **in_channels** (`int`, defaults to `12`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `None`) --
  The number of channels in the output.
- **qk_norm** (`str`, defaults to `"rms_norm"`) --
  The normalization layer to use.
- **text_embed_dim** (`int`, defaults to `4096`) --
  Input dimension of text embeddings from the text encoder.
- **time_embed_dim** (`int`, defaults to `256`) --
  Output dimension of timestep embeddings.
- **activation_fn** (`str`, defaults to `"swiglu"`) --
  Activation function to use in feed-forward.
- **max_sequence_length** (`int`, defaults to `256`) --
  The maximum sequence length of text embeddings supported.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data introduced in [Mochi](https://huggingface.co/genmo/mochi-1-preview).




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/mochi_transformer3d.md" />

### HunyuanDiT2DModel
https://huggingface.co/docs/diffusers/main/api/models/hunyuan_transformer2d.md

# HunyuanDiT2DModel

A Diffusion Transformer model for 2D data from [Hunyuan-DiT](https://github.com/Tencent/HunyuanDiT).

## HunyuanDiT2DModel[[diffusers.HunyuanDiT2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HunyuanDiT2DModel</name><anchor>diffusers.HunyuanDiT2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L203</source><parameters>[{"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 88"}, {"name": "in_channels", "val": ": typing.Optional[int] = None"}, {"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "sample_size", "val": " = 32"}, {"name": "hidden_size", "val": " = 1152"}, {"name": "num_layers", "val": ": int = 28"}, {"name": "mlp_ratio", "val": ": float = 4.0"}, {"name": "learn_sigma", "val": ": bool = True"}, {"name": "cross_attention_dim", "val": ": int = 1024"}, {"name": "norm_type", "val": ": str = 'layer_norm'"}, {"name": "cross_attention_dim_t5", "val": ": int = 2048"}, {"name": "pooled_projection_dim", "val": ": int = 1024"}, {"name": "text_len", "val": ": int = 77"}, {"name": "text_len_t5", "val": ": int = 256"}, {"name": "use_style_cond_and_image_meta_size", "val": ": bool = True"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, *optional*, defaults to 16) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, *optional*, defaults to 88) --
  The number of channels in each head.
- **in_channels** (`int`, *optional*) --
  The number of channels in the input and output (specify if the input is **continuous**).
- **patch_size** (`int`, *optional*) --
  The size of the patch to use for the input.
- **activation_fn** (`str`, *optional*, defaults to `"gelu-approximate"`) --
  Activation function to use in feed-forward.
- **sample_size** (`int`, *optional*) --
  The width of the latent images. This is fixed during training since it is used to learn a number of
  position embeddings.
- **dropout** (`float`, *optional*, defaults to 0.0) --
  The dropout probability to use.
- **cross_attention_dim** (`int`, *optional*) --
  The number of dimensions in the CLIP text embedding.
- **hidden_size** (`int`, *optional*) --
  The size of the hidden layer in the conditioning embedding layers.
- **num_layers** (`int`, *optional*, defaults to 28) --
  The number of layers of Transformer blocks to use.
- **mlp_ratio** (`float`, *optional*, defaults to 4.0) --
  The ratio of the hidden layer size to the input size.
- **learn_sigma** (`bool`, *optional*, defaults to `True`) --
  Whether to predict variance.
- **cross_attention_dim_t5** (`int`, *optional*) --
  The number of dimensions in the T5 text embedding.
- **pooled_projection_dim** (`int`, *optional*) --
  The size of the pooled projection.
- **text_len** (`int`, *optional*) --
  The length of the clip text embedding.
- **text_len_t5** (`int`, *optional*) --
  The length of the T5 text embedding.
- **use_style_cond_and_image_meta_size** (`bool`, *optional*) --
  Whether or not to use style condition and image meta size. `True` for version <= 1.1, `False` for version >= 1.2.</paramsdesc><paramgroups>0</paramgroups></docstring>

HunyuanDiT: a diffusion model with a Transformer backbone.

Inherits from ModelMixin and ConfigMixin so it is compatible with Diffusers samplers and pipelines such as StableDiffusionPipeline.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_forward_chunking</name><anchor>diffusers.HunyuanDiT2DModel.enable_forward_chunking</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L532</source><parameters>[{"name": "chunk_size", "val": ": typing.Optional[int] = None"}, {"name": "dim", "val": ": int = 0"}]</parameters><paramsdesc>- **chunk_size** (`int`, *optional*) --
  The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually
  over each tensor of dim=`dim`.
- **dim** (`int`, *optional*, defaults to `0`) --
  The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch)
  or dim=1 (sequence length).</paramsdesc><paramgroups>0</paramgroups></docstring>

Enables [feed forward
chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers) in the transformer blocks.
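
A minimal sketch of enabling chunked feed-forward layers (the repo id below is an assumption for illustration):

```python
from diffusers import HunyuanDiT2DModel

# Repo id assumed for illustration only.
model = HunyuanDiT2DModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers", subfolder="transformer"
)

# Chunk the feed-forward computation over the sequence dimension (dim=1) to trade speed for memory.
model.enable_forward_chunking(chunk_size=2, dim=1)
```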




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.HunyuanDiT2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L419</source><parameters>[{"name": "hidden_states", "val": ""}, {"name": "timestep", "val": ""}, {"name": "encoder_hidden_states", "val": " = None"}, {"name": "text_embedding_mask", "val": " = None"}, {"name": "encoder_hidden_states_t5", "val": " = None"}, {"name": "text_embedding_mask_t5", "val": " = None"}, {"name": "image_meta_size", "val": " = None"}, {"name": "style", "val": " = None"}, {"name": "image_rotary_emb", "val": " = None"}, {"name": "controlnet_block_samples", "val": " = None"}, {"name": "return_dict", "val": " = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch size, dim, height, width)`) --
  The input tensor.
- **timestep** ( `torch.LongTensor`, *optional*) --
  Used to indicate denoising step.
- **encoder_hidden_states** ( `torch.Tensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. This is the output of `BertModel`.
- **text_embedding_mask** (`torch.Tensor`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. This is the output
  of `BertModel`.
- **encoder_hidden_states_t5** ( `torch.Tensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. This is the output of the T5 text encoder.
- **text_embedding_mask_t5** (`torch.Tensor`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. This is the output
  of the T5 text encoder.
- **image_meta_size** (`torch.Tensor`) --
  Conditional embedding indicating the image sizes.
- **style** (`torch.Tensor`) --
  Conditional embedding indicating the style.
- **image_rotary_emb** (`torch.Tensor`) --
  The image rotary embeddings to apply on query and key tensors during attention calculation.
- **return_dict** (`bool`) --
  Whether to return a dictionary.</paramsdesc><paramgroups>0</paramgroups></docstring>

The [HunyuanDiT2DModel](/docs/diffusers/main/en/api/models/hunyuan_transformer2d#diffusers.HunyuanDiT2DModel) forward method.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.HunyuanDiT2DModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L322</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.
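
A quick sketch of toggling the fused projections, continuing with the `model` instance loaded in the earlier sketch:

```python
# Fuse the Q/K/V (and cross-attention K/V) projection matrices (experimental).
model.fuse_qkv_projections()

# ... run inference ...

# Restore the separate projection matrices.
model.unfuse_qkv_projections()
```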


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.HunyuanDiT2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L379</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.HunyuanDiT2DModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L413</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.HunyuanDiT2DModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/hunyuan_transformer_2d.py#L344</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/hunyuan_transformer2d.md" />

### ChromaTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/chroma_transformer.md

# ChromaTransformer2DModel

A modified Flux Transformer model from [Chroma](https://huggingface.co/lodestones/Chroma).
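
The model can be loaded with `from_pretrained`; in the sketch below the repo id and `subfolder` are assumptions for illustration, not confirmed by this page:

```python
import torch
from diffusers import ChromaTransformer2DModel

# Repo id and subfolder are assumed for illustration only.
transformer = ChromaTransformer2DModel.from_pretrained(
    "lodestones/Chroma1-HD", subfolder="transformer", torch_dtype=torch.bfloat16
)
```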

## ChromaTransformer2DModel[[diffusers.ChromaTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ChromaTransformer2DModel</name><anchor>diffusers.ChromaTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_chroma.py#L370</source><parameters>[{"name": "patch_size", "val": ": int = 1"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 19"}, {"name": "num_single_layers", "val": ": int = 38"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "joint_attention_dim", "val": ": int = 4096"}, {"name": "axes_dims_rope", "val": ": typing.Tuple[int, ...] = (16, 56, 56)"}, {"name": "approximator_num_channels", "val": ": int = 64"}, {"name": "approximator_hidden_dim", "val": ": int = 5120"}, {"name": "approximator_layers", "val": ": int = 5"}]</parameters><paramsdesc>- **patch_size** (`int`, defaults to `1`) --
  Patch size to turn the input data into small patches.
- **in_channels** (`int`, defaults to `64`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `None`) --
  The number of channels in the output. If not specified, it defaults to `in_channels`.
- **num_layers** (`int`, defaults to `19`) --
  The number of layers of dual stream DiT blocks to use.
- **num_single_layers** (`int`, defaults to `38`) --
  The number of layers of single stream DiT blocks to use.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of dimensions to use for each attention head.
- **num_attention_heads** (`int`, defaults to `24`) --
  The number of attention heads to use.
- **joint_attention_dim** (`int`, defaults to `4096`) --
  The number of dimensions to use for the joint attention (embedding/channel dimension of
  `encoder_hidden_states`).
- **axes_dims_rope** (`Tuple[int]`, defaults to `(16, 56, 56)`) --
  The dimensions to use for the rotary positional embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in Flux, modified for Chroma.

Reference: https://huggingface.co/lodestones/Chroma





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.ChromaTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_chroma.py#L476</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "img_ids", "val": ": Tensor = None"}, {"name": "txt_ids", "val": ": Tensor = None"}, {"name": "attention_mask", "val": ": Tensor = None"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_block_samples", "val": " = None"}, {"name": "controlnet_single_block_samples", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "controlnet_blocks_repeat", "val": ": bool = False"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch_size, image_sequence_length, in_channels)`) --
  Input `hidden_states`.
- **encoder_hidden_states** (`torch.Tensor` of shape `(batch_size, text_sequence_length, joint_attention_dim)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **block_controlnet_hidden_states** (`list` of `torch.Tensor`) --
  A list of tensors that, if specified, are added to the residuals of transformer blocks.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [ChromaTransformer2DModel](/docs/diffusers/main/en/api/models/chroma_transformer#diffusers.ChromaTransformer2DModel) forward method.






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/chroma_transformer.md" />

### AsymmetricAutoencoderKL
https://huggingface.co/docs/diffusers/main/api/models/asymmetricautoencoderkl.md

# AsymmetricAutoencoderKL

An improved, larger variational autoencoder (VAE) model with KL loss for the inpainting task, introduced in [Designing a Better Asymmetric VQGAN for StableDiffusion](https://huggingface.co/papers/2306.04632) by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua.

The abstract from the paper is:

*StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN*

Evaluation results can be found in section 4.1 of the original paper.

## Available checkpoints

* [https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5](https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5)
* [https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2](https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2)

## Example Usage

```python
from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline
from diffusers.utils import load_image, make_image_grid


prompt = "a photo of a person with beard"
img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"

original_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

# Load the inpainting pipeline and swap in the asymmetric VAE
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5")
pipe.to("cuda")

image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0]
make_image_grid([original_image, mask_image, image], rows=1, cols=3)
```

## AsymmetricAutoencoderKL[[diffusers.AsymmetricAutoencoderKL]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AsymmetricAutoencoderKL</name><anchor>diffusers.AsymmetricAutoencoderKL</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_asym_kl.py#L26</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('DownEncoderBlock2D',)"}, {"name": "down_block_out_channels", "val": ": typing.Tuple[int, ...] = (64,)"}, {"name": "layers_per_down_block", "val": ": int = 1"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('UpDecoderBlock2D',)"}, {"name": "up_block_out_channels", "val": ": typing.Tuple[int, ...] = (64,)"}, {"name": "layers_per_up_block", "val": ": int = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "latent_channels", "val": ": int = 4"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "scaling_factor", "val": ": float = 0.18215"}]</parameters><paramsdesc>- **in_channels** (int, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (int,  *optional*, defaults to 3) -- Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`) --
  Tuple of downsample block types.
- **down_block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) --
  Tuple of down block output channels.
- **layers_per_down_block** (`int`, *optional*, defaults to `1`) --
  Number of layers for each down block.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`) --
  Tuple of upsample block types.
- **up_block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) --
  Tuple of up block output channels.
- **layers_per_up_block** (`int`, *optional*, defaults to `1`) --
  Number of layers for each up block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **latent_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the latent space.
- **sample_size** (`int`, *optional*, defaults to `32`) -- Sample input size.
- **norm_num_groups** (`int`, *optional*, defaults to `32`) --
  Number of groups to use for the first normalization layer in ResNet blocks.
- **scaling_factor** (`float`, *optional*, defaults to 0.18215) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.</paramsdesc><paramgroups>0</paramgroups></docstring>

A VAE model with KL loss for encoding images into latents and decoding latent representations into images, as
proposed in [Designing a Better Asymmetric VQGAN for StableDiffusion](https://huggingface.co/papers/2306.04632).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AsymmetricAutoencoderKL.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_asym_kl.py#L158</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **mask** (`torch.Tensor`, *optional*, defaults to `None`) -- Optional inpainting mask.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>
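
A minimal sketch of calling the forward pass directly with an inpainting mask (shapes and the mask convention are illustrative assumptions; in practice the VAE is used through a pipeline as shown above):

```python
import torch
from diffusers import AsymmetricAutoencoderKL

vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5")

image = torch.randn(1, 3, 512, 512)  # illustrative input sample
mask = torch.ones(1, 1, 512, 512)    # illustrative inpainting mask (convention assumed)

out = vae(sample=image, mask=mask, sample_posterior=True).sample
print(out.shape)  # reconstructed image, same spatial size as the input
```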




</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/asymmetricautoencoderkl.md" />

### ControlNetUnionModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet_union.md

# ControlNetUnionModel

ControlNetUnionModel is an implementation of ControlNet for Stable Diffusion XL.

The ControlNet model was introduced in [ControlNetPlus](https://github.com/xinsir6/ControlNetPlus) by xinsir6. It supports multiple conditioning inputs without increasing computation.

*We design a new architecture that can support 10+ control types in condition text-to-image generation and can generate high resolution images visually comparable with midjourney. The network is based on the original ControlNet architecture, we propose two new modules to: 1 Extend the original ControlNet to support different image conditions using the same network parameter. 2 Support multiple conditions input without increasing computation offload, which is especially important for designers who want to edit image in detail, different conditions use the same condition encoder, without adding extra computations or parameters.*

## Loading

By default the [ControlNetUnionModel](/docs/diffusers/main/en/api/models/controlnet_union#diffusers.ControlNetUnionModel) should be loaded with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained).

```py
from diffusers import StableDiffusionXLControlNetUnionPipeline, ControlNetUnionModel

controlnet = ControlNetUnionModel.from_pretrained("xinsir/controlnet-union-sdxl-1.0")
pipe = StableDiffusionXLControlNetUnionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet)
```

## ControlNetUnionModel[[diffusers.ControlNetUnionModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ControlNetUnionModel</name><anchor>diffusers.ControlNetUnionModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_union.py#L84</source><parameters>[{"name": "in_channels", "val": ": int = 4"}, {"name": "conditioning_channels", "val": ": int = 3"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D')"}, {"name": "only_cross_attention", "val": ": typing.Union[bool, typing.Tuple[bool]] = False"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": int = 1280"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 1"}, {"name": "encoder_hid_dim", "val": ": typing.Optional[int] = None"}, {"name": "encoder_hid_dim_type", "val": ": typing.Optional[str] = None"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 8"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...], NoneType] = None"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "class_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_time_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "num_class_embeds", "val": ": typing.Optional[int] = None"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "projection_class_embeddings_input_dim", "val": ": typing.Optional[int] = None"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (48, 96, 192, 384)"}, {"name": "global_pool_conditions", "val": ": bool = False"}, {"name": "addition_embed_type_num_heads", "val": ": int = 64"}, {"name": "num_control_type", "val": ": int = 6"}, {"name": "num_trans_channel", "val": ": int = 320"}, {"name": "num_trans_head", "val": ": int = 8"}, {"name": "num_trans_layer", "val": ": int = 1"}, {"name": "num_proj_channel", "val": ": int = 320"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to 4) --
  The number of channels in the input sample.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, defaults to 0) --
  The frequency shift to apply to the time embedding.
- **down_block_types** (`tuple[str]`, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`) --
  The tuple of downsample blocks to use.
- **only_cross_attention** (`Union[bool, Tuple[bool]]`, defaults to `False`) --
- **block_out_channels** (`tuple[int]`, defaults to `(320, 640, 1280, 1280)`) --
  The tuple of output channels for each block.
- **layers_per_block** (`int`, defaults to 2) --
  The number of layers per block.
- **downsample_padding** (`int`, defaults to 1) --
  The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, defaults to 1) --
  The scale factor to use for the mid block.
- **act_fn** (`str`, defaults to "silu") --
  The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) --
  The number of groups to use for the normalization. If None, normalization and activation layers are skipped
  in post-processing.
- **norm_eps** (`float`, defaults to 1e-5) --
  The epsilon to use for the normalization.
- **cross_attention_dim** (`int`, defaults to 1280) --
  The dimension of the cross attention features.
- **transformer_layers_per_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) --
  The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for
  `~models.unet_2d_blocks.CrossAttnDownBlock2D`, `~models.unet_2d_blocks.CrossAttnUpBlock2D`,
  `~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`.
- **encoder_hid_dim** (`int`, *optional*, defaults to None) --
  If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim`
  dimension to `cross_attention_dim`.
- **encoder_hid_dim_type** (`str`, *optional*, defaults to `None`) --
  If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text
  embeddings of dimension `cross_attention_dim` according to `encoder_hid_dim_type`.
- **attention_head_dim** (`Union[int, Tuple[int]]`, defaults to 8) --
  The dimension of the attention heads.
- **use_linear_projection** (`bool`, defaults to `False`) --
- **class_embed_type** (`str`, *optional*, defaults to `None`) --
  The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None,
  `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- **addition_embed_type** (`str`, *optional*, defaults to `None`) --
  Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or
  "text". "text" will use the `TextTimeEmbedding` layer.
- **num_class_embeds** (`int`, *optional*, defaults to `None`) --
  Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing
  class conditioning with `class_embed_type` equal to `None`.
- **upcast_attention** (`bool`, defaults to `False`) --
- **resnet_time_scale_shift** (`str`, defaults to `"default"`) --
  Time scale shift config for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **projection_class_embeddings_input_dim** (`int`, *optional*, defaults to `None`) --
  The dimension of the `class_labels` input when `class_embed_type="projection"`. Required when
  `class_embed_type="projection"`.
- **controlnet_conditioning_channel_order** (`str`, defaults to `"rgb"`) --
  The channel order of the conditioning image. Will be converted to `rgb` if it's `bgr`.
- **conditioning_embedding_out_channels** (`tuple[int]`, *optional*, defaults to `(48, 96, 192, 384)`) --
  The tuple of output channels for each block in the `conditioning_embedding` layer.
- **global_pool_conditions** (`bool`, defaults to `False`) --</paramsdesc><paramgroups>0</paramgroups></docstring>

A ControlNetUnion model.
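
A minimal loading sketch. The model is loaded like any other Diffusers model with `from_pretrained`; the repository name below is an example community checkpoint and may need to be replaced with the weights you actually use.

```python
import torch

from diffusers import ControlNetUnionModel

# Example checkpoint name (assumption); substitute your own ControlNet-Union weights.
controlnet = ControlNetUnionModel.from_pretrained(
    "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16
)
```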





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.ControlNetUnionModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_union.py#L600</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "controlnet_cond", "val": ": typing.List[torch.Tensor]"}, {"name": "control_type", "val": ": Tensor"}, {"name": "control_type_idx", "val": ": typing.List[int]"}, {"name": "conditioning_scale", "val": ": typing.Union[float, typing.List[float]] = 1.0"}, {"name": "class_labels", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "added_cond_kwargs", "val": ": typing.Optional[typing.Dict[str, torch.Tensor]] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "from_multi", "val": ": bool = False"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor.
- **timestep** (`Union[torch.Tensor, float, int]`) --
  The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states.
- **controlnet_cond** (`List[torch.Tensor]`) --
  The conditional input tensors.
- **control_type** (`torch.Tensor`) --
  A tensor of shape `(batch, num_control_type)` with values `0` or `1` depending on whether the control
  type is used.
- **control_type_idx** (`List[int]`) --
  The indices of `control_type`.
- **conditioning_scale** (`float`, defaults to `1.0`) --
  The scale factor for ControlNet outputs.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
  Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
  Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the
  timestep_embedding passed through the `self.time_embedding` layer to obtain the final timestep
  embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **added_cond_kwargs** (`dict`) --
  Additional conditions for the Stable Diffusion XL UNet.
- **cross_attention_kwargs** (`dict[str]`, *optional*, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttnProcessor`.
- **from_multi** (`bool`, defaults to `False`) --
  Use standard scaling when called from `MultiControlNetUnionModel`.
- **guess_mode** (`bool`, defaults to `False`) --
  In this mode, the ControlNet encoder tries its best to recognize the content of the input even if you
  remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a `ControlNetOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`ControlNetOutput` **or** `tuple`</rettype><retdesc>If `return_dict` is `True`, a `ControlNetOutput` is returned, otherwise a tuple is
returned where the first element is the sample tensor.</retdesc></docstring>

The [ControlNetUnionModel](/docs/diffusers/main/en/api/models/controlnet_union#diffusers.ControlNetUnionModel) forward method.
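
The sketch below illustrates how `controlnet_cond`, `control_type`, and `control_type_idx` fit together: one conditioning tensor per active control type, a `(batch, num_control_type)` indicator tensor, and the list of active indices. The image size, the chosen control-type indices, and the commented-out call (including the `added_cond_kwargs` keys) are assumptions for illustration; the real values come from the surrounding SDXL pipeline.

```python
import torch

batch, num_control_type = 1, 6

# One conditioning image per active control type (hypothetical: slots 1 and 3).
controlnet_cond = [
    torch.randn(batch, 3, 1024, 1024),
    torch.randn(batch, 3, 1024, 1024),
]
control_type_idx = [1, 3]

# Indicator tensor of shape (batch, num_control_type): 1 where a control type is used.
control_type = torch.zeros(batch, num_control_type)
control_type[:, control_type_idx] = 1.0

# Hypothetical call; `latents`, `t`, `prompt_embeds`, `pooled_embeds`, and
# `add_time_ids` would be produced by the pipeline that wraps this model.
# down_block_res_samples, mid_block_res_sample = controlnet(
#     sample=latents,
#     timestep=t,
#     encoder_hidden_states=prompt_embeds,
#     controlnet_cond=controlnet_cond,
#     control_type=control_type,
#     control_type_idx=control_type_idx,
#     conditioning_scale=1.0,
#     added_cond_kwargs={"text_embeds": pooled_embeds, "time_ids": add_time_ids},
#     return_dict=False,
# )
```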








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_unet</name><anchor>diffusers.ControlNetUnionModel.from_unet</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_union.py#L388</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "load_weights_from_unet", "val": ": bool = True"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
  The UNet model weights to copy to the [ControlNetUnionModel](/docs/diffusers/main/en/api/models/controlnet_union#diffusers.ControlNetUnionModel). All configuration options are also
  copied where applicable.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a [ControlNetUnionModel](/docs/diffusers/main/en/api/models/controlnet_union#diffusers.ControlNetUnionModel) from [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel).
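
For example, a new ControlNet-Union can be initialized directly from an SDXL UNet; the checkpoint name below is illustrative, and any `UNet2DConditionModel` works as the source.

```python
from diffusers import ControlNetUnionModel, UNet2DConditionModel

# Illustrative SDXL checkpoint; configuration and (optionally) weights are copied.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
controlnet = ControlNetUnionModel.from_unet(unet, load_weights_from_unet=True)
```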




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attention_slice</name><anchor>diffusers.ControlNetUnionModel.set_attention_slice</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_union.py#L535</source><parameters>[{"name": "slice_size", "val": ": typing.Union[str, int, typing.List[int]]"}]</parameters><paramsdesc>- **slice_size** (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`) --
  When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
  `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.
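
A short sketch, assuming a model built from the default configuration (any loaded checkpoint works the same way):

```python
from diffusers import ControlNetUnionModel

controlnet = ControlNetUnionModel()  # default configuration, for illustration only

# "auto" halves the input to the attention heads; "max" runs one slice at a time;
# an integer must evenly divide `attention_head_dim`.
controlnet.set_attention_slice("auto")
```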




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.ControlNetUnionModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_union.py#L484</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
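
For instance, a single processor instance can be applied to every attention layer at once; the sketch below uses `AttnProcessor2_0`, but any processor listed above could be substituted.

```python
from diffusers import ControlNetUnionModel
from diffusers.models.attention_processor import AttnProcessor2_0

controlnet = ControlNetUnionModel()  # default configuration, for illustration only

# One processor instance is set on all `Attention` layers.
controlnet.set_attn_processor(AttnProcessor2_0())

# A dict keyed by attention-module paths can be passed instead to set
# processors per layer (strongly recommended for trainable processors).
```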




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.ControlNetUnionModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_union.py#L519</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet_union.md" />

### AutoencoderKLCosmos
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_cosmos.md

# AutoencoderKLCosmos

An autoencoder from NVIDIA's [Cosmos Tokenizers](https://github.com/NVIDIA/Cosmos-Tokenizer).

Supported models:
- [nvidia/Cosmos-1.0-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8)

The model can be loaded with the following code snippet.

```python
from diffusers import AutoencoderKLCosmos

vae = AutoencoderKLCosmos.from_pretrained("nvidia/Cosmos-1.0-Tokenizer-CV8x8x8", subfolder="vae")
```

## AutoencoderKLCosmos[[diffusers.AutoencoderKLCosmos]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLCosmos</name><anchor>diffusers.AutoencoderKLCosmos</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cosmos.py#L878</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "latent_channels", "val": ": int = 16"}, {"name": "encoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512)"}, {"name": "decode_block_out_channels", "val": ": typing.Tuple[int, ...] = (256, 512, 512, 512)"}, {"name": "attention_resolutions", "val": ": typing.Tuple[int, ...] = (32,)"}, {"name": "resolution", "val": ": int = 1024"}, {"name": "num_layers", "val": ": int = 2"}, {"name": "patch_size", "val": ": int = 4"}, {"name": "patch_type", "val": ": str = 'haar'"}, {"name": "scaling_factor", "val": ": float = 1.0"}, {"name": "spatial_compression_ratio", "val": ": int = 8"}, {"name": "temporal_compression_ratio", "val": ": int = 8"}, {"name": "latents_mean", "val": ": typing.Optional[typing.List[float]] = [0.11362758, -0.0171717, 0.03071163, 0.02046862, 0.01931456, 0.02138567, 0.01999342, 0.02189187, 0.02011935, 0.01872694, 0.02168613, 0.02207148, 0.01986941, 0.01770413, 0.02067643, 0.02028245, 0.19125476, 0.04556972, 0.0595558, 0.05315534, 0.05496629, 0.05356264, 0.04856596, 0.05327453, 0.05410472, 0.05597149, 0.05524866, 0.05181874, 0.05071663, 0.05204537, 0.0564108, 0.05518042, 0.01306714, 0.03341161, 0.03847246, 0.02810185, 0.02790166, 0.02920026, 0.02823597, 0.02631033, 0.0278531, 0.02880507, 0.02977769, 0.03145441, 0.02888389, 0.03280773, 0.03484927, 0.03049198, -0.00197727, 0.07534957, 0.04963879, 0.05530893, 0.05410828, 0.05252541, 0.05029899, 0.05321025, 0.05149245, 0.0511921, 0.04643495, 0.04604527, 0.04631618, 0.04404101, 0.04403536, 0.04499495, -0.02994183, -0.04787003, -0.01064558, -0.01779824, -0.01490502, -0.02157517, -0.0204778, -0.02180816, -0.01945375, -0.02062863, -0.02192209, -0.02520639, -0.02246656, -0.02427533, -0.02683363, -0.02762006, 0.08019473, -0.13005368, -0.07568636, -0.06082374, -0.06036175, -0.05875364, -0.05921887, -0.05869788, -0.05273941, -0.052565, -0.05346428, -0.05456541, -0.053657, -0.05656897, -0.05728589, -0.05321847, 0.16718403, -0.00390146, 0.0379406, 0.0356561, 0.03554131, 0.03924074, 0.03873615, 0.04187329, 0.04226924, 0.04378717, 0.04684274, 0.05117614, 0.04547792, 0.05251586, 0.05048339, 0.04950784, 0.09564418, 0.0547128, 0.08183969, 0.07978633, 0.08076023, 0.08108605, 0.08011818, 0.07965573, 0.08187773, 0.08350263, 0.08101469, 0.0786941, 0.0774442, 0.07724521, 0.07830418, 0.07599796, -0.04987567, 0.05923908, -0.01058746, -0.01177603, -0.01116162, -0.01364149, -0.01546014, -0.0117213, -0.01780043, -0.01648314, -0.02100247, -0.02104417, -0.02482123, -0.02611689, -0.02561143, -0.02597336, -0.05364667, 0.08211684, 0.04686937, 0.04605641, 0.04304186, 0.0397355, 0.03686767, 0.04087112, 0.03704741, 0.03706401, 0.03120073, 0.03349091, 0.03319963, 0.03205781, 0.03195127, 0.03180481, 0.16427967, -0.11048453, -0.04595276, -0.04982893, -0.05213465, -0.04809378, -0.05080318, -0.04992863, -0.04493337, -0.0467619, -0.04884703, -0.04627892, -0.04913311, -0.04955709, -0.04533982, -0.04570218, -0.10612928, -0.05121198, -0.06761009, -0.07251801, -0.07265285, -0.07417855, -0.07202412, -0.07499027, -0.07625481, -0.07535747, -0.07638787, -0.07920305, -0.07596069, -0.07959418, -0.08265036, -0.07955471, -0.16888915, 0.0753242, 0.04062594, 0.03375093, 0.03337452, 0.03699376, 0.03651138, 0.03611023, 
0.03555622, 0.03378554, 0.0300498, 0.03395559, 0.02941847, 0.03156432, 0.03431173, 0.03016853, -0.03415358, -0.01699573, -0.04029295, -0.04912157, -0.0498858, -0.04917918, -0.04918056, -0.0525189, -0.05325506, -0.05341973, -0.04983329, -0.04883146, -0.04985548, -0.04736718, -0.0462027, -0.04836091, 0.02055675, 0.03419799, -0.02907669, -0.04350509, -0.04156144, -0.04234421, -0.04446109, -0.04461774, -0.04882839, -0.04822346, -0.04502493, -0.0506244, -0.05146913, -0.04655267, -0.04862994, -0.04841615, 0.20312774, -0.07208502, -0.03635615, -0.03556088, -0.04246174, -0.04195838, -0.04293778, -0.04071276, -0.04240569, -0.04125213, -0.04395144, -0.03959096, -0.04044993, -0.04015875, -0.04088107, -0.03885176]"}, {"name": "latents_std", "val": ": typing.Optional[typing.List[float]] = [0.56700271, 0.65488982, 0.65589428, 0.66524369, 0.66619784, 0.6666382, 0.6720838, 0.66955978, 0.66928875, 0.67108786, 0.67092526, 0.67397463, 0.67894882, 0.67668313, 0.67769569, 0.67479557, 0.85245121, 0.8688373, 0.87348086, 0.88459337, 0.89135885, 0.8910504, 0.89714909, 0.89947474, 0.90201765, 0.90411824, 0.90692616, 0.90847772, 0.90648711, 0.91006982, 0.91033435, 0.90541548, 0.84960359, 0.85863352, 0.86895317, 0.88460612, 0.89245003, 0.89451706, 0.89931005, 0.90647358, 0.90338236, 0.90510076, 0.91008312, 0.90961218, 0.9123717, 0.91313171, 0.91435546, 0.91565102, 0.91877103, 0.85155135, 0.857804, 0.86998034, 0.87365264, 0.88161767, 0.88151032, 0.88758916, 0.89015514, 0.89245576, 0.89276224, 0.89450496, 0.90054202, 0.89994133, 0.90136105, 0.90114892, 0.77755755, 0.81456852, 0.81911844, 0.83137071, 0.83820474, 0.83890373, 0.84401101, 0.84425181, 0.84739357, 0.84798753, 0.85249585, 0.85114998, 0.85160935, 0.85626358, 0.85677862, 0.85641026, 0.69903517, 0.71697885, 0.71696913, 0.72583169, 0.72931731, 0.73254126, 0.73586977, 0.73734969, 0.73664582, 0.74084908, 0.74399322, 0.74471819, 0.74493188, 0.74824578, 0.75024873, 0.75274801, 0.8187142, 0.82251883, 0.82616025, 0.83164483, 0.84072375, 0.8396467, 0.84143305, 0.84880769, 0.8503468, 0.85196948, 0.85211051, 0.85386664, 0.85410017, 0.85439342, 0.85847849, 0.85385275, 0.67583984, 0.68259847, 0.69198853, 0.69928843, 0.70194328, 0.70467001, 0.70755547, 0.70917857, 0.71007699, 0.70963502, 0.71064079, 0.71027333, 0.71291167, 0.71537536, 0.71902508, 0.71604162, 0.72450989, 0.71979928, 0.72057378, 0.73035461, 0.73329622, 0.73660028, 0.73891461, 0.74279994, 0.74105692, 0.74002433, 0.74257588, 0.74416119, 0.74543899, 0.74694443, 0.74747062, 0.74586403, 0.90176988, 0.90990674, 0.91106802, 0.92163783, 0.92390233, 0.93056196, 0.93482202, 0.93642414, 0.93858379, 0.94064975, 0.94078934, 0.94325715, 0.94955301, 0.94814706, 0.95144123, 0.94923073, 0.49853548, 0.64968109, 0.6427654, 0.64966393, 0.6487664, 0.65203559, 0.6584242, 0.65351611, 0.65464371, 0.6574859, 0.65626335, 0.66123748, 0.66121179, 0.66077942, 0.66040152, 0.66474909, 0.61986589, 0.69138134, 0.6884557, 0.6955843, 0.69765401, 0.70015347, 0.70529598, 0.70468754, 0.70399523, 0.70479989, 0.70887572, 0.71126866, 0.7097227, 0.71249932, 0.71231949, 0.71175605, 0.35586974, 0.68723857, 0.68973219, 0.69958478, 0.6943453, 0.6995818, 0.70980215, 0.69899458, 0.70271689, 0.70095056, 0.69912851, 0.70522696, 0.70392174, 0.70916915, 0.70585734, 0.70373541, 0.98101336, 0.89024764, 0.89607251, 0.90678179, 0.91308665, 0.91812348, 0.91980827, 0.92480654, 0.92635667, 0.92887944, 0.93338072, 0.93468094, 0.93619436, 0.93906063, 0.94191772, 0.94471723, 0.83202779, 0.84106231, 0.84463632, 0.85829508, 0.86319661, 0.86751342, 0.86914337, 0.87085921, 
0.87286359, 0.87537396, 0.87931138, 0.88054478, 0.8811838, 0.88872558, 0.88942474, 0.88934827, 0.44025335, 0.63061613, 0.63110614, 0.63601959, 0.6395812, 0.64104342, 0.65019929, 0.6502797, 0.64355946, 0.64657205, 0.64847094, 0.64728117, 0.64972943, 0.65162975, 0.65328044, 0.64914775]"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `3`) --
  Number of input channels.
- **out_channels** (`int`, defaults to `3`) --
  Number of output channels.
- **latent_channels** (`int`, defaults to `16`) --
  Number of latent channels.
- **encoder_block_out_channels** (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512)`) --
  Number of output channels for each encoder down block.
- **decode_block_out_channels** (`Tuple[int, ...]`, defaults to `(256, 512, 512, 512)`) --
  Number of output channels for each decoder up block.
- **attention_resolutions** (`Tuple[int, ...]`, defaults to `(32,)`) --
  List of image/video resolutions at which to apply attention.
- **resolution** (`int`, defaults to `1024`) --
  Base image/video resolution used for computing whether a block should have attention layers.
- **num_layers** (`int`, defaults to `2`) --
  Number of resnet blocks in each encoder/decoder block.
- **patch_size** (`int`, defaults to `4`) --
  Patch size used for patching the input image/video.
- **patch_type** (`str`, defaults to `haar`) --
  Patch type used for patching the input image/video. Can be either `haar` or `rearrange`.
- **scaling_factor** (`float`, defaults to `1.0`) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper. Not applicable in
  Cosmos, but we default to 1.0 for consistency.
- **spatial_compression_ratio** (`int`, defaults to `8`) --
  The spatial compression ratio to apply in the VAE. The number of downsample blocks is determined using
  this.
- **temporal_compression_ratio** (`int`, defaults to `8`) --
  The temporal compression ratio to apply in the VAE. The number of downsample blocks is determined using
  this.</paramsdesc><paramgroups>0</paramgroups></docstring>

Autoencoder used in [Cosmos](https://huggingface.co/papers/2501.03575).
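
A rough encode/decode sketch under stated assumptions: the `(batch, channels, frames, height, width)` input layout, the frame count, and the spatial size are illustrative, and accessing the encoder output via `.latent_dist` follows the `AutoencoderKLOutput` convention documented below; verify both against the checkpoint you use.

```python
import torch

from diffusers import AutoencoderKLCosmos

vae = AutoencoderKLCosmos.from_pretrained(
    "nvidia/Cosmos-1.0-Tokenizer-CV8x8x8", subfolder="vae"
)

# Hypothetical video batch: (batch, channels, frames, height, width).
video = torch.randn(1, 3, 9, 256, 256)

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()
    reconstruction = vae.decode(latents).sample

print(latents.shape, reconstruction.shape)
```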





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLCosmos.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLCosmos.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLCosmos.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cosmos.py#L1048</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLCosmos.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cosmos.py#L1034</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLCosmos.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cosmos.py#L1041</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLCosmos.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cosmos.py#L1000</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_num_frames", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_num_frames", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The stride between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts
  produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for
processing larger images.
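
As a sketch, slicing and tiling can be toggled around a memory-heavy decode; the tile sizes below are arbitrary illustrations rather than recommended values.

```python
from diffusers import AutoencoderKLCosmos

vae = AutoencoderKLCosmos.from_pretrained(
    "nvidia/Cosmos-1.0-Tokenizer-CV8x8x8", subfolder="vae"
)

vae.enable_slicing()              # decode batch elements one at a time
vae.enable_tiling(                # decode in overlapping spatial tiles
    tile_sample_min_height=256,   # illustrative values only
    tile_sample_min_width=256,
)

# ... run vae.decode(...) as usual, then switch the options off if desired.
vae.disable_tiling()
vae.disable_slicing()
```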




</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.
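
The same output type is returned by the other KL autoencoders in the library. A generic access-pattern sketch (the checkpoint name is illustrative):

```python
import torch

from diffusers import AutoencoderKL

# Illustrative checkpoint; any KL autoencoder returns an AutoencoderKLOutput from `encode`.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

image = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    output = vae.encode(image)         # AutoencoderKLOutput
    z = output.latent_dist.sample()    # draw a latent from the posterior
    z_mean = output.latent_dist.mean   # or use the distribution mean directly
```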




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_cosmos.md" />

### UVit2DModel
https://huggingface.co/docs/diffusers/main/api/models/uvit2d.md

# UVit2DModel

The [U-ViT](https://hf.co/papers/2301.11093) model is a vision transformer (ViT) based UNet. It incorporates elements from a ViT (all inputs such as time, conditions, and noisy image patches are treated as tokens) and from a UNet (long skip connections between the shallow and deep layers). The skip connections are important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality.

The abstract from the paper is:

*Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet.*
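
The model can be built from its default configuration, or (hypothetically) loaded from pretrained weights; the repository and subfolder in the commented line below are assumptions based on the aMUSEd checkpoints and should be verified before use.

```python
from diffusers import UVit2DModel

# Construct the model from its default configuration.
model = UVit2DModel()

# Hypothetical pretrained loading (repository/subfolder are assumptions):
# model = UVit2DModel.from_pretrained("amused/amused-256", subfolder="transformer")
```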

## UVit2DModel[[diffusers.UVit2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UVit2DModel</name><anchor>diffusers.UVit2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L39</source><parameters>[{"name": "hidden_size", "val": ": int = 1024"}, {"name": "use_bias", "val": ": bool = False"}, {"name": "hidden_dropout", "val": ": float = 0.0"}, {"name": "cond_embed_dim", "val": ": int = 768"}, {"name": "micro_cond_encode_dim", "val": ": int = 256"}, {"name": "micro_cond_embed_dim", "val": ": int = 1280"}, {"name": "encoder_hidden_size", "val": ": int = 768"}, {"name": "vocab_size", "val": ": int = 8256"}, {"name": "codebook_size", "val": ": int = 8192"}, {"name": "in_channels", "val": ": int = 768"}, {"name": "block_out_channels", "val": ": int = 768"}, {"name": "num_res_blocks", "val": ": int = 3"}, {"name": "downsample", "val": ": bool = False"}, {"name": "upsample", "val": ": bool = False"}, {"name": "block_num_heads", "val": ": int = 12"}, {"name": "num_hidden_layers", "val": ": int = 22"}, {"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_dropout", "val": ": float = 0.0"}, {"name": "intermediate_size", "val": ": int = 2816"}, {"name": "layer_norm_eps", "val": ": float = 1e-06"}, {"name": "ln_elementwise_affine", "val": ": bool = True"}, {"name": "sample_size", "val": ": int = 64"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.UVit2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L238</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, 
typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.UVit2DModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L273</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div></div>

## UVit2DConvEmbed[[diffusers.models.unets.uvit_2d.UVit2DConvEmbed]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.uvit_2d.UVit2DConvEmbed</name><anchor>diffusers.models.unets.uvit_2d.UVit2DConvEmbed</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L289</source><parameters>[{"name": "in_channels", "val": ""}, {"name": "block_out_channels", "val": ""}, {"name": "vocab_size", "val": ""}, {"name": "elementwise_affine", "val": ""}, {"name": "eps", "val": ""}, {"name": "bias", "val": ""}]</parameters></docstring>


</div>

## UVitBlock[[diffusers.models.unets.uvit_2d.UVitBlock]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.uvit_2d.UVitBlock</name><anchor>diffusers.models.unets.uvit_2d.UVitBlock</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L304</source><parameters>[{"name": "channels", "val": ""}, {"name": "num_res_blocks", "val": ": int"}, {"name": "hidden_size", "val": ""}, {"name": "hidden_dropout", "val": ""}, {"name": "ln_elementwise_affine", "val": ""}, {"name": "layer_norm_eps", "val": ""}, {"name": "use_bias", "val": ""}, {"name": "block_num_heads", "val": ""}, {"name": "attention_dropout", "val": ""}, {"name": "downsample", "val": ": bool"}, {"name": "upsample", "val": ": bool"}]</parameters></docstring>


</div>

## ConvNextBlock[[diffusers.models.unets.uvit_2d.ConvNextBlock]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.uvit_2d.ConvNextBlock</name><anchor>diffusers.models.unets.uvit_2d.ConvNextBlock</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L403</source><parameters>[{"name": "channels", "val": ""}, {"name": "layer_norm_eps", "val": ""}, {"name": "ln_elementwise_affine", "val": ""}, {"name": "use_bias", "val": ""}, {"name": "hidden_dropout", "val": ""}, {"name": "hidden_size", "val": ""}, {"name": "res_ffn_factor", "val": " = 4"}]</parameters></docstring>


</div>

## ConvMlmLayer[[diffusers.models.unets.uvit_2d.ConvMlmLayer]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.uvit_2d.ConvMlmLayer</name><anchor>diffusers.models.unets.uvit_2d.ConvMlmLayer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/uvit_2d.py#L448</source><parameters>[{"name": "block_out_channels", "val": ": int"}, {"name": "in_channels", "val": ": int"}, {"name": "use_bias", "val": ": bool"}, {"name": "ln_elementwise_affine", "val": ": bool"}, {"name": "layer_norm_eps", "val": ": float"}, {"name": "codebook_size", "val": ": int"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/uvit2d.md" />

### SanaTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/sana_transformer2d.md

# SanaTransformer2DModel

A Diffusion Transformer model for 2D data from [SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers](https://huggingface.co/papers/2410.10629), introduced by Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, and Song Han from NVIDIA and MIT HAN Lab.

The abstract from the paper is:

*We introduce Sana, a text-to-image framework that can efficiently generate images up to 4096×4096 resolution. Sana can synthesize high-resolution, high-quality images with strong text-image alignment at a remarkably fast speed, deployable on laptop GPU. Core designs include: (1) Deep compression autoencoder: unlike traditional AEs, which compress images only 8×, we trained an AE that can compress images 32×, effectively reducing the number of latent tokens. (2) Linear DiT: we replace all vanilla attention in DiT with linear attention, which is more efficient at high resolutions without sacrificing quality. (3) Decoder-only text encoder: we replaced T5 with modern decoder-only small LLM as the text encoder and designed complex human instruction with in-context learning to enhance the image-text alignment. (4) Efficient training and sampling: we propose Flow-DPM-Solver to reduce sampling steps, with efficient caption labeling and selection to accelerate convergence. As a result, Sana-0.6B is very competitive with modern giant diffusion model (e.g. Flux-12B), being 20 times smaller and 100+ times faster in measured throughput. Moreover, Sana-0.6B can be deployed on a 16GB laptop GPU, taking less than 1 second to generate a 1024×1024 resolution image. Sana enables content creation at low cost. Code and model will be publicly released.*

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import SanaTransformer2DModel

transformer = SanaTransformer2DModel.from_pretrained("Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## SanaTransformer2DModel[[diffusers.SanaTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SanaTransformer2DModel</name><anchor>diffusers.SanaTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/sana_transformer.py#L292</source><parameters>[{"name": "in_channels", "val": ": int = 32"}, {"name": "out_channels", "val": ": typing.Optional[int] = 32"}, {"name": "num_attention_heads", "val": ": int = 70"}, {"name": "attention_head_dim", "val": ": int = 32"}, {"name": "num_layers", "val": ": int = 20"}, {"name": "num_cross_attention_heads", "val": ": typing.Optional[int] = 20"}, {"name": "cross_attention_head_dim", "val": ": typing.Optional[int] = 112"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = 2240"}, {"name": "caption_channels", "val": ": int = 2304"}, {"name": "mlp_ratio", "val": ": float = 2.5"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "attention_bias", "val": ": bool = False"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "patch_size", "val": ": int = 1"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-06"}, {"name": "interpolation_scale", "val": ": typing.Optional[int] = None"}, {"name": "guidance_embeds", "val": ": bool = False"}, {"name": "guidance_embeds_scale", "val": ": float = 0.1"}, {"name": "qk_norm", "val": ": typing.Optional[str] = None"}, {"name": "timestep_scale", "val": ": float = 1.0"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `32`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `32`) --
  The number of channels in the output.
- **num_attention_heads** (`int`, defaults to `70`) --
  The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, defaults to `32`) --
  The number of channels in each head.
- **num_layers** (`int`, defaults to `20`) --
  The number of layers of Transformer blocks to use.
- **num_cross_attention_heads** (`int`, *optional*, defaults to `20`) --
  The number of heads to use for cross-attention.
- **cross_attention_head_dim** (`int`, *optional*, defaults to `112`) --
  The number of channels in each head for cross-attention.
- **cross_attention_dim** (`int`, *optional*, defaults to `2240`) --
  The number of channels in the cross-attention output.
- **caption_channels** (`int`, defaults to `2304`) --
  The number of channels in the caption embeddings.
- **mlp_ratio** (`float`, defaults to `2.5`) --
  The expansion ratio to use in the GLUMBConv layer.
- **dropout** (`float`, defaults to `0.0`) --
  The dropout probability.
- **attention_bias** (`bool`, defaults to `False`) --
  Whether to use bias in the attention layer.
- **sample_size** (`int`, defaults to `32`) --
  The base size of the input latent.
- **patch_size** (`int`, defaults to `1`) --
  The size of the patches to use in the patch embedding layer.
- **norm_elementwise_affine** (`bool`, defaults to `False`) --
  Whether to use elementwise affinity in the normalization layer.
- **norm_eps** (`float`, defaults to `1e-6`) --
  The epsilon value for the normalization layer.
- **qk_norm** (`str`, *optional*, defaults to `None`) --
  The normalization to use for the query and key.
- **timestep_scale** (`float`, defaults to `1.0`) --
  The scale to use for the timesteps.</paramsdesc><paramgroups>0</paramgroups></docstring>

A 2D Transformer model introduced in [Sana](https://huggingface.co/papers/2410.10629) family of models.
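
As a small sanity check on how the width is composed, `num_attention_heads * attention_head_dim` gives the transformer's inner dimension (70 * 32 = 2240 with the defaults, matching `cross_attention_dim`). The deliberately down-scaled toy configuration below keeps the same relationships; it is a sketch for illustration, not a released Sana configuration.

```python
from diffusers import SanaTransformer2DModel

# Toy configuration for illustration only: 4 heads * 32 channels = 128 inner dim,
# with the cross-attention dimensions kept consistent (4 * 32 = 128).
toy = SanaTransformer2DModel(
    in_channels=32,
    num_attention_heads=4,
    attention_head_dim=32,
    num_layers=2,
    num_cross_attention_heads=4,
    cross_attention_head_dim=32,
    cross_attention_dim=128,
    caption_channels=2304,
    sample_size=32,
)
print(sum(p.numel() for p in toy.parameters()))
```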





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.SanaTransformer2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/sana_transformer.py#L443</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
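
For example, a single processor instance can be applied to every attention layer at once. A minimal sketch; the checkpoint id is illustrative, so substitute the Sana transformer you actually use:

```python
from diffusers import SanaTransformer2DModel
from diffusers.models.attention_processor import SanaLinearAttnProcessor2_0

# Illustrative checkpoint id; replace it with the Sana checkpoint you use.
transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_600M_1024px_diffusers", subfolder="transformer"
)

# Apply a single processor instance to every `Attention` layer.
transformer.set_attn_processor(SanaLinearAttnProcessor2_0())
```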




</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/sana_transformer2d.md" />

### AutoencoderKLQwenImage
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_qwenimage.md

# AutoencoderKLQwenImage

The model can be loaded with the following code snippet.

```python
from diffusers import AutoencoderKLQwenImage

vae = AutoencoderKLQwenImage.from_pretrained("Qwen/QwenImage-20B", subfolder="vae")
```

## AutoencoderKLQwenImage[[diffusers.AutoencoderKLQwenImage]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLQwenImage</name><anchor>diffusers.AutoencoderKLQwenImage</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L666</source><parameters>[{"name": "base_dim", "val": ": int = 96"}, {"name": "z_dim", "val": ": int = 16"}, {"name": "dim_mult", "val": ": typing.Tuple[int] = [1, 2, 4, 4]"}, {"name": "num_res_blocks", "val": ": int = 2"}, {"name": "attn_scales", "val": ": typing.List[float] = []"}, {"name": "temperal_downsample", "val": ": typing.List[bool] = [False, True, True]"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "latents_mean", "val": ": typing.List[float] = [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921]"}, {"name": "latents_std", "val": ": typing.List[float] = [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916]"}]</parameters></docstring>

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLQwenImage.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLQwenImage.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLQwenImage.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L780</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLQwenImage.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L766</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLQwenImage.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L773</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLQwenImage.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L736</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The stride between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts
  produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
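
A rough usage sketch, assuming the repo id from the loading snippet above; the tile sizes and the random stand-in input are illustrative:

```python
import torch
from diffusers import AutoencoderKLQwenImage

vae = AutoencoderKLQwenImage.from_pretrained("Qwen/QwenImage-20B", subfolder="vae")
vae.enable_tiling(tile_sample_min_height=512, tile_sample_min_width=512)

# Inputs are 5D: (batch, channels, frames, height, width); frames is 1 for still images.
pixels = torch.randn(1, 3, 1, 1024, 1024)
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()
    reconstruction = vae.decode(latents).sample

vae.disable_tiling()  # revert to single-pass encoding/decoding
```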




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderKLQwenImage.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L1049</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKLQwenImage.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L986</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of images using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKLQwenImage.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_qwenimage.py#L920</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of videos.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The latent representation of the encoded videos.</retdesc></docstring>
Encode a batch of images using a tiled encoder.








</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_qwenimage.md" />

### DiTTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/dit_transformer2d.md

# DiTTransformer2DModel

A Transformer model for image-like data from [DiT](https://huggingface.co/papers/2212.09748).

## DiTTransformer2DModel[[diffusers.DiTTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DiTTransformer2DModel</name><anchor>diffusers.DiTTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/dit_transformer_2d.py#L31</source><parameters>[{"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 72"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 28"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "attention_bias", "val": ": bool = True"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "num_embeds_ada_norm", "val": ": typing.Optional[int] = 1000"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "norm_type", "val": ": str = 'ada_norm_zero'"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-05"}]</parameters><paramsdesc>- **num_attention_heads** (int, optional, defaults to 16) -- The number of heads to use for multi-head attention.
- **attention_head_dim** (int, optional, defaults to 72) -- The number of channels in each head.
- **in_channels** (int, defaults to 4) -- The number of channels in the input.
- **out_channels** (int, optional) --
  The number of channels in the output. Specify this parameter if the output channel number differs from the
  input.
- **num_layers** (int, optional, defaults to 28) -- The number of layers of Transformer blocks to use.
- **dropout** (float, optional, defaults to 0.0) -- The dropout probability to use within the Transformer blocks.
- **norm_num_groups** (int, optional, defaults to 32) --
  Number of groups for group normalization within Transformer blocks.
- **attention_bias** (bool, optional, defaults to True) --
  Configure if the Transformer blocks' attention should contain a bias parameter.
- **sample_size** (int, defaults to 32) --
  The width of the latent images. This parameter is fixed during training.
- **patch_size** (int, defaults to 2) --
  Size of the patches the model processes, relevant for architectures working on non-sequential data.
- **activation_fn** (str, optional, defaults to "gelu-approximate") --
  Activation function to use in feed-forward networks within Transformer blocks.
- **num_embeds_ada_norm** (int, optional, defaults to 1000) --
  Number of embeddings for AdaLayerNorm, fixed during training and affects the maximum denoising steps during
  inference.
- **upcast_attention** (bool, optional, defaults to False) --
  If true, upcasts the attention mechanism dimensions for potentially improved performance.
- **norm_type** (str, optional, defaults to "ada_norm_zero") --
  Specifies the type of normalization used, can be 'ada_norm_zero'.
- **norm_elementwise_affine** (bool, optional, defaults to False) --
  If true, enables element-wise affine parameters in the normalization layers.
- **norm_eps** (float, optional, defaults to 1e-5) --
  A small constant added to the denominator in normalization layers to prevent division by zero.</paramsdesc><paramgroups>0</paramgroups></docstring>

A 2D Transformer model as introduced in DiT (https://huggingface.co/papers/2212.09748).
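
A minimal forward-pass sketch using a tiny, randomly initialized configuration (purely for illustration; real weights such as the DiT-XL/2 checkpoints would be loaded with `from_pretrained()`):

```python
import torch
from diffusers import DiTTransformer2DModel

# Tiny, randomly initialized configuration, for illustration only.
model = DiTTransformer2DModel(num_layers=2, sample_size=32)

latents = torch.randn(1, 4, 32, 32)   # (batch, in_channels, height, width)
timestep = torch.tensor([10])          # denoising step, embedded via AdaLayerNormZero
class_labels = torch.tensor([0])       # class-conditioning label

with torch.no_grad():
    out = model(latents, timestep=timestep, class_labels=class_labels).sample

print(out.shape)  # torch.Size([1, 4, 32, 32]); out_channels defaults to in_channels
```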





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.DiTTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/dit_transformer_2d.py#L148</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "class_labels", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Dict[str, typing.Any] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous) --
  Input `hidden_states`.
- **timestep** ( `torch.LongTensor`, *optional*) --
  Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
- **class_labels** ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*) --
  Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in
  `AdaLayerZeroNorm`.
- **cross_attention_kwargs** ( `Dict[str, Any]`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [DiTTransformer2DModel](/docs/diffusers/main/en/api/models/dit_transformer2d#diffusers.DiTTransformer2DModel) forward method.






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/dit_transformer2d.md" />

### Tiny AutoEncoder
https://huggingface.co/docs/diffusers/main/api/models/autoencoder_tiny.md

# Tiny AutoEncoder

Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in [madebyollin/taesd](https://github.com/madebyollin/taesd) by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion's VAE that can quickly decode the latents in a [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) or [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline) almost instantly.

To use with Stable Diffusion v2.1:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```

To use with Stable Diffusion XL 1.0:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```

## AutoencoderTiny[[diffusers.AutoencoderTiny]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderTiny</name><anchor>diffusers.AutoencoderTiny</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L41</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "encoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (64, 64, 64, 64)"}, {"name": "decoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (64, 64, 64, 64)"}, {"name": "act_fn", "val": ": str = 'relu'"}, {"name": "upsample_fn", "val": ": str = 'nearest'"}, {"name": "latent_channels", "val": ": int = 4"}, {"name": "upsampling_scaling_factor", "val": ": int = 2"}, {"name": "num_encoder_blocks", "val": ": typing.Tuple[int, ...] = (1, 3, 3, 3)"}, {"name": "num_decoder_blocks", "val": ": typing.Tuple[int, ...] = (3, 3, 3, 1)"}, {"name": "latent_magnitude", "val": ": int = 3"}, {"name": "latent_shift", "val": ": float = 0.5"}, {"name": "force_upcast", "val": ": bool = False"}, {"name": "scaling_factor", "val": ": float = 1.0"}, {"name": "shift_factor", "val": ": float = 0.0"}]</parameters><paramsdesc>- **in_channels** (`int`, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (`int`,  *optional*, defaults to 3) -- Number of channels in the output.
- **encoder_block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64, 64, 64, 64)`) --
  Tuple of integers representing the number of output channels for each encoder block. The length of the
  tuple should be equal to the number of encoder blocks.
- **decoder_block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64, 64, 64, 64)`) --
  Tuple of integers representing the number of output channels for each decoder block. The length of the
  tuple should be equal to the number of decoder blocks.
- **act_fn** (`str`, *optional*, defaults to `"relu"`) --
  Activation function to be used throughout the model.
- **latent_channels** (`int`, *optional*, defaults to 4) --
  Number of channels in the latent representation. The latent space acts as a compressed representation of
  the input image.
- **upsampling_scaling_factor** (`int`, *optional*, defaults to 2) --
  Scaling factor for upsampling in the decoder. It determines the size of the output image during the
  upsampling process.
- **num_encoder_blocks** (`Tuple[int]`, *optional*, defaults to `(1, 3, 3, 3)`) --
  Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The
  length of the tuple should be equal to the number of stages in the encoder. Each stage has a different
  number of encoder blocks.
- **num_decoder_blocks** (`Tuple[int]`, *optional*, defaults to `(3, 3, 3, 1)`) --
  Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The
  length of the tuple should be equal to the number of stages in the decoder. Each stage has a different
  number of decoder blocks.
- **latent_magnitude** (`int`, *optional*, defaults to 3) --
  Magnitude of the latent representation. This parameter scales the latent representation values to control
  the extent of information preservation.
- **latent_shift** (`float`, *optional*, defaults to 0.5) --
  Shift applied to the latent representation. This parameter controls the center of the latent space.
- **scaling_factor** (`float`, *optional*, defaults to 1.0) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper. For this
  Autoencoder, however, no such scaling factor was used, hence the value of 1.0 as the default.
- **force_upcast** (`bool`, *optional*, defaults to `False`) --
  If enabled, it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
  can be fine-tuned / trained to a lower range without losing too much precision, in which case
  `force_upcast` can be set to `False` (see this fp16-friendly
  [AutoEncoder](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).</paramsdesc><paramgroups>0</paramgroups></docstring>

A tiny distilled VAE model for encoding images into latents and decoding latent representations into images.

[AutoencoderTiny](/docs/diffusers/main/en/api/models/autoencoder_tiny#diffusers.AutoencoderTiny) is a wrapper around the original implementation of `TAESD`.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented for
all models (such as downloading or saving).
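
A rough encode/decode sketch with the TAESD weights; random values stand in for a preprocessed image, and since `scaling_factor` is 1.0 no latent scaling is needed:

```python
import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")

# Random values stand in for a preprocessed input image.
pixels = torch.randn(1, 3, 512, 512)

with torch.no_grad():
    latents = vae.encode(pixels).latents          # AutoencoderTinyOutput.latents
    reconstruction = vae.decode(latents).sample   # DecoderOutput.sample

print(latents.shape)  # torch.Size([1, 4, 64, 64])
```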





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderTiny.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L172</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderTiny.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L187</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderTiny.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L165</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderTiny.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L179</source><parameters>[{"name": "use_tiling", "val": ": bool = True"}]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderTiny.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L321</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_latents</name><anchor>diffusers.AutoencoderTiny.scale_latents</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L157</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters></docstring>
Scales raw latents into the `[0, 1]` range.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unscale_latents</name><anchor>diffusers.AutoencoderTiny.unscale_latents</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L161</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters></docstring>
Maps values in `[0, 1]` back to raw latents.

</div></div>

## AutoencoderTinyOutput[[diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput</name><anchor>diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_tiny.py#L29</source><parameters>[{"name": "latents", "val": ": Tensor"}]</parameters><paramsdesc>- **latents** (`torch.Tensor`) -- Encoded outputs of the `Encoder`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderTiny encoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoder_tiny.md" />

### AutoencoderKL
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl.md

# AutoencoderKL

The variational autoencoder (VAE) model with KL loss was introduced in [Auto-Encoding Variational Bayes](https://huggingface.co/papers/1312.6114v11) by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images.

The abstract from the paper is:

*How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.*

## Loading from the original format

By default the [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL) should be loaded with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained), but it can also be loaded
from the original format using `FromOriginalModelMixin.from_single_file` as follows:

```py
from diffusers import AutoencoderKL

url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors"  # can also be a local file
model = AutoencoderKL.from_single_file(url)
```

## AutoencoderKL[[diffusers.AutoencoderKL]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKL</name><anchor>diffusers.AutoencoderKL</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L38</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "down_block_types", "val": ": typing.Tuple[str] = ('DownEncoderBlock2D',)"}, {"name": "up_block_types", "val": ": typing.Tuple[str] = ('UpDecoderBlock2D',)"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (64,)"}, {"name": "layers_per_block", "val": ": int = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "latent_channels", "val": ": int = 4"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "scaling_factor", "val": ": float = 0.18215"}, {"name": "shift_factor", "val": ": typing.Optional[float] = None"}, {"name": "latents_mean", "val": ": typing.Optional[typing.Tuple[float]] = None"}, {"name": "latents_std", "val": ": typing.Optional[typing.Tuple[float]] = None"}, {"name": "force_upcast", "val": ": bool = True"}, {"name": "use_quant_conv", "val": ": bool = True"}, {"name": "use_post_quant_conv", "val": ": bool = True"}, {"name": "mid_block_add_attention", "val": ": bool = True"}]</parameters><paramsdesc>- **in_channels** (int, *optional*, defaults to 3) -- Number of channels in the input image.
- **out_channels** (int,  *optional*, defaults to 3) -- Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`) --
  Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`) --
  Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) --
  Tuple of block output channels.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) -- The activation function to use.
- **latent_channels** (`int`, *optional*, defaults to 4) -- Number of channels in the latent space.
- **sample_size** (`int`, *optional*, defaults to `32`) -- Sample input size.
- **scaling_factor** (`float`, *optional*, defaults to 0.18215) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
- **force_upcast** (`bool`, *optional*, defaults to `True`) --
  If enabled, it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
  can be fine-tuned / trained to a lower range without losing too much precision, in which case `force_upcast`
  can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
- **mid_block_add_attention** (`bool`, *optional*, defaults to `True`) --
  If enabled, the mid_block of the Encoder and Decoder will have attention blocks. If set to `False`, the
  mid_block will only have resnet blocks.</paramsdesc><paramgroups>0</paramgroups></docstring>

A VAE model with KL loss for encoding images into latents and decoding latent representations into images.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).
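
A rough encode/decode sketch showing where `scaling_factor` enters; the checkpoint id and the random stand-in image are illustrative:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Random values stand in for an image preprocessed to [-1, 1].
pixels = torch.randn(1, 3, 512, 512)

with torch.no_grad():
    posterior = vae.encode(pixels).latent_dist
    latents = posterior.sample() * vae.config.scaling_factor          # z = z * scaling_factor
    decoded = vae.decode(latents / vae.config.scaling_factor).sample  # z = z / scaling_factor

print(latents.shape)  # torch.Size([1, 4, 64, 64])
```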





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKL.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKL.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKL.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L163</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKL.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L149</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKL.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L156</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKL.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L141</source><parameters>[{"name": "use_tiling", "val": ": bool = True"}]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderKL.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L501</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.AutoencoderKL.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L530</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.
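
A minimal sketch of toggling the fused projections around inference (checkpoint id illustrative):

```python
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

vae.fuse_qkv_projections()    # fuse query/key/value projections in self-attention
# ... run encode/decode here ...
vae.unfuse_qkv_projections()  # restore the original, unfused projections
```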


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.AutoencoderKL.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L196</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
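
For the dict form, keys are each attention layer's path in the module tree. A sketch that resets every layer to `AttnProcessor2_0` (checkpoint id illustrative):

```python
from diffusers import AutoencoderKL
from diffusers.models.attention_processor import AttnProcessor2_0

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# One processor per attention layer, keyed by the layer's path.
vae.set_attn_processor({name: AttnProcessor2_0() for name in vae.attn_processors})
```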




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.AutoencoderKL.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L231</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKL.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L452</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of images using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKL.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L390</source><parameters>[{"name": "x", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of images.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.autoencoder_kl.AutoencoderKLOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.autoencoder_kl.AutoencoderKLOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.autoencoder_kl.AutoencoderKLOutput` is returned, otherwise a plain
`tuple` is returned.</retdesc></docstring>
Encode a batch of images using a tiled encoder.

When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several
steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is
different from non-tiled encoding because each tile is encoded independently, so the encoder never sees the whole
image at once. To avoid tiling artifacts, the
tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the
output, but they should be much less noticeable.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.AutoencoderKL.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl.py#L552</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl.md" />

### FluxControlNetModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet_flux.md

# FluxControlNetModel

FluxControlNetModel is an implementation of ControlNet for Flux.1.

The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

## Loading from the original format

By default the [FluxControlNetModel](/docs/diffusers/main/en/api/models/controlnet_flux#diffusers.FluxControlNetModel) should be loaded with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained).

```py
from diffusers import FluxControlNetPipeline
from diffusers.models import FluxControlNetModel, FluxMultiControlNetModel

# load a single ControlNet
controlnet = FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-Controlnet-Canny")
pipe = FluxControlNetPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", controlnet=controlnet)

# or wrap one or more ControlNets in a FluxMultiControlNetModel
controlnet = FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-Controlnet-Canny")
controlnet = FluxMultiControlNetModel([controlnet])
pipe = FluxControlNetPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", controlnet=controlnet)
```
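
A hedged inference sketch follows; it assumes the `pipe` object from above, a GPU with enough memory, and that `canny.png` is a pre-computed canny edge map. The prompt and parameter values are illustrative, not recommendations.

```py
from diffusers.utils import load_image

pipe.to("cuda")
control_image = load_image("canny.png")  # placeholder path to a canny edge map

image = pipe(
    prompt="a futuristic cityscape at dusk",
    control_image=control_image,
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_controlnet.png")
```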

## FluxControlNetModel[[diffusers.FluxControlNetModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxControlNetModel</name><anchor>diffusers.FluxControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_flux.py#L41</source><parameters>[{"name": "patch_size", "val": ": int = 1"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "num_layers", "val": ": int = 19"}, {"name": "num_single_layers", "val": ": int = 38"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "joint_attention_dim", "val": ": int = 4096"}, {"name": "pooled_projection_dim", "val": ": int = 768"}, {"name": "guidance_embeds", "val": ": bool = False"}, {"name": "axes_dims_rope", "val": ": typing.List[int] = [16, 56, 56]"}, {"name": "num_mode", "val": ": int = None"}, {"name": "conditioning_embedding_channels", "val": ": int = None"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.FluxControlNetModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_flux.py#L213</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "controlnet_cond", "val": ": Tensor"}, {"name": "controlnet_mode", "val": ": Tensor = None"}, {"name": "conditioning_scale", "val": ": float = 1.0"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "pooled_projections", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "img_ids", "val": ": Tensor = None"}, {"name": "txt_ids", "val": ": Tensor = None"}, {"name": "guidance", "val": ": Tensor = None"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.FloatTensor` of shape `(batch size, channel, height, width)`) --
  Input `hidden_states`.
- **controlnet_cond** (`torch.Tensor`) --
  The conditional input tensor of shape `(batch_size, sequence_length, hidden_size)`.
- **controlnet_mode** (`torch.Tensor`) --
  The mode tensor of shape `(batch_size, 1)`.
- **conditioning_scale** (`float`, defaults to `1.0`) --
  The scale factor for ControlNet outputs.
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch size, sequence_len, embed_dims)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **pooled_projections** (`torch.FloatTensor` of shape `(batch_size, projection_dim)`) -- Embeddings projected
  from the embeddings of input conditions.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **block_controlnet_hidden_states** (`list` of `torch.Tensor`) --
  A list of tensors that if specified are added to the residuals of transformer blocks.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [FluxControlNetModel](/docs/diffusers/main/en/api/models/controlnet_flux#diffusers.FluxControlNetModel) forward method.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.FluxControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_flux.py#L147</source><parameters>[{"name": "processor", "val": ""}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
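
A minimal sketch, assuming `controlnet` is the FluxControlNetModel loaded earlier; it shows both the single-processor form and the per-layer dictionary form.

```py
from diffusers.models.attention_processor import FluxAttnProcessor2_0

# One processor instance applied to every Attention layer.
controlnet.set_attn_processor(FluxAttnProcessor2_0())

# Or set processors per layer with a dict keyed by the processor path.
controlnet.set_attn_processor({name: FluxAttnProcessor2_0() for name in controlnet.attn_processors})
```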




</div></div>

## FluxControlNetOutput[[diffusers.models.controlnet_flux.FluxControlNetOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.controlnet_flux.FluxControlNetOutput</name><anchor>diffusers.models.controlnet_flux.FluxControlNetOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnet_flux.py#L25</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet_flux.md" />

### AutoencoderKLHunyuanVideo
https://huggingface.co/docs/diffusers/main/api/models/autoencoder_kl_hunyuan_video.md

# AutoencoderKLHunyuanVideo

The 3D variational autoencoder (VAE) model with KL loss used in [HunyuanVideo](https://github.com/Tencent/HunyuanVideo/), which was introduced in [HunyuanVideo: A Systematic Framework For Large Video Generative Models](https://huggingface.co/papers/2412.03603) by Tencent.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLHunyuanVideo

vae = AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="vae", torch_dtype=torch.float16)
```

## AutoencoderKLHunyuanVideo[[diffusers.AutoencoderKLHunyuanVideo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLHunyuanVideo</name><anchor>diffusers.AutoencoderKLHunyuanVideo</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L627</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "latent_channels", "val": ": int = 16"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('HunyuanVideoDownBlock3D', 'HunyuanVideoDownBlock3D', 'HunyuanVideoDownBlock3D', 'HunyuanVideoDownBlock3D')"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('HunyuanVideoUpBlock3D', 'HunyuanVideoUpBlock3D', 'HunyuanVideoUpBlock3D', 'HunyuanVideoUpBlock3D')"}, {"name": "block_out_channels", "val": ": typing.Tuple[int] = (128, 256, 512, 512)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "scaling_factor", "val": ": float = 0.476986"}, {"name": "spatial_compression_ratio", "val": ": int = 8"}, {"name": "temporal_compression_ratio", "val": ": int = 4"}, {"name": "mid_block_add_attention", "val": ": bool = True"}]</parameters></docstring>

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos.
Introduced in [HunyuanVideo](https://huggingface.co/papers/2412.03603).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLHunyuanVideo.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLHunyuanVideo.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L780</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLHunyuanVideo.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L766</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLHunyuanVideo.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L773</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLHunyuanVideo.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L726</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_num_frames", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_num_frames", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_min_num_frames** (`int`, *optional*) --
  The minimum number of frames required for a sample to be separated into tiles across the frame
  dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The stride between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts
  produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.
- **tile_sample_stride_num_frames** (`int`, *optional*) --
  The stride between two consecutive frame tiles. This is to ensure that there are no tiling artifacts
  produced across the frame dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
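
A minimal sketch, assuming `vae` is the `AutoencoderKLHunyuanVideo` loaded above; the tile sizes and strides are illustrative values, not recommended defaults.

```py
vae.enable_tiling(
    tile_sample_min_height=256,
    tile_sample_min_width=256,
    tile_sample_stride_height=192,
    tile_sample_stride_width=192,
)
```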




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderKLHunyuanVideo.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L1073</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>
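
A minimal forward-pass sketch, assuming `vae` is the `AutoencoderKLHunyuanVideo` loaded above and a CUDA device; the input layout is assumed to be `(batch, channels, num_frames, height, width)` with illustrative sizes.

```py
import torch

video = torch.randn(1, 3, 17, 256, 256, dtype=torch.float16, device="cuda")
with torch.no_grad():
    reconstruction = vae.to("cuda")(video, sample_posterior=True).sample
```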




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKLHunyuanVideo.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L948</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of videos using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKLHunyuanVideo.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L898</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of videos.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The latent representation of the encoded videos.</retdesc></docstring>
Encode a batch of videos using a tiled encoder.








</div></div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoder_kl_hunyuan_video.md" />

### StableAudioDiTModel
https://huggingface.co/docs/diffusers/main/api/models/stable_audio_transformer.md

# StableAudioDiTModel

A Transformer model for audio waveforms from [Stable Audio Open](https://huggingface.co/papers/2407.14358).

## StableAudioDiTModel[[diffusers.StableAudioDiTModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.StableAudioDiTModel</name><anchor>diffusers.StableAudioDiTModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/stable_audio_transformer.py#L185</source><parameters>[{"name": "sample_size", "val": ": int = 1024"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "num_layers", "val": ": int = 24"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "num_key_value_attention_heads", "val": ": int = 12"}, {"name": "out_channels", "val": ": int = 64"}, {"name": "cross_attention_dim", "val": ": int = 768"}, {"name": "time_proj_dim", "val": ": int = 256"}, {"name": "global_states_input_dim", "val": ": int = 1536"}, {"name": "cross_attention_input_dim", "val": ": int = 768"}]</parameters><paramsdesc>- **sample_size** ( `int`, *optional*, defaults to 1024) -- The size of the input sample.
- **in_channels** (`int`, *optional*, defaults to 64) -- The number of channels in the input.
- **num_layers** (`int`, *optional*, defaults to 24) -- The number of layers of Transformer blocks to use.
- **attention_head_dim** (`int`, *optional*, defaults to 64) -- The number of channels in each head.
- **num_attention_heads** (`int`, *optional*, defaults to 24) -- The number of heads to use for the query states.
- **num_key_value_attention_heads** (`int`, *optional*, defaults to 12) --
  The number of heads to use for the key and value states.
- **out_channels** (`int`, defaults to 64) -- Number of output channels.
- **cross_attention_dim** ( `int`, *optional*, defaults to 768) -- Dimension of the cross-attention projection.
- **time_proj_dim** ( `int`, *optional*, defaults to 256) -- Dimension of the timestep inner projection.
- **global_states_input_dim** ( `int`, *optional*, defaults to 1536) --
  Input dimension of the global hidden states projection.
- **cross_attention_input_dim** ( `int`, *optional*, defaults to 768) --
  Input dimension of the cross-attention projection</paramsdesc><paramgroups>0</paramgroups></docstring>

The Diffusion Transformer model introduced in Stable Audio.

Reference: https://github.com/Stability-AI/stable-audio-tools
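
A minimal sketch instantiating the transformer with its default Stable Audio Open configuration (random weights; loading a pretrained checkpoint would go through `from_pretrained` instead).

```py
from diffusers import StableAudioDiTModel

model = StableAudioDiTModel()  # default config from the class signature above
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```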





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.StableAudioDiTModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/stable_audio_transformer.py#L344</source><parameters>[{"name": "hidden_states", "val": ": FloatTensor"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "encoder_hidden_states", "val": ": FloatTensor = None"}, {"name": "global_hidden_states", "val": ": FloatTensor = None"}, {"name": "rotary_embedding", "val": ": FloatTensor = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "encoder_attention_mask", "val": ": typing.Optional[torch.LongTensor] = None"}]</parameters><paramsdesc>- **hidden_states** (`torch.FloatTensor` of shape `(batch size, in_channels, sequence_len)`) --
  Input `hidden_states`.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch size, encoder_sequence_len, cross_attention_input_dim)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **global_hidden_states** (`torch.FloatTensor` of shape `(batch size, global_sequence_len, global_states_input_dim)`) --
  Global embeddings that will be prepended to the hidden states.
- **rotary_embedding** (`torch.Tensor`) --
  The rotary embeddings to apply on query and key tensors during attention calculation.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.
- **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_len)`, *optional*) --
  Mask to avoid performing attention on padding token indices, formed by concatenating the attention
  masks for the two text encoders together. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.
- **encoder_attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_len)`, *optional*) --
  Mask to avoid performing attention on padding token cross-attention indices, formed by concatenating
  the attention masks for the two text encoders together. Mask values selected in `[0, 1]`:

  - 1 for tokens that are **not masked**,
  - 0 for tokens that are **masked**.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [StableAudioDiTModel](/docs/diffusers/main/en/api/models/stable_audio_transformer#diffusers.StableAudioDiTModel) forward method.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.StableAudioDiTModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/stable_audio_transformer.py#L303</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.StableAudioDiTModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/stable_audio_transformer.py#L338</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/stable_audio_transformer.md" />

### PixArtTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/pixart_transformer2d.md

# PixArtTransformer2DModel

A Transformer model for image-like data from [PixArt-Alpha](https://huggingface.co/papers/2310.00426) and [PixArt-Sigma](https://huggingface.co/papers/2403.04692).

## PixArtTransformer2DModel[[diffusers.PixArtTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PixArtTransformer2DModel</name><anchor>diffusers.PixArtTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/pixart_transformer_2d.py#L32</source><parameters>[{"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 72"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": typing.Optional[int] = 8"}, {"name": "num_layers", "val": ": int = 28"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = 1152"}, {"name": "attention_bias", "val": ": bool = True"}, {"name": "sample_size", "val": ": int = 128"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "num_embeds_ada_norm", "val": ": typing.Optional[int] = 1000"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "norm_type", "val": ": str = 'ada_norm_single'"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-06"}, {"name": "interpolation_scale", "val": ": typing.Optional[int] = None"}, {"name": "use_additional_conditions", "val": ": typing.Optional[bool] = None"}, {"name": "caption_channels", "val": ": typing.Optional[int] = None"}, {"name": "attention_type", "val": ": typing.Optional[str] = 'default'"}]</parameters><paramsdesc>- **num_attention_heads** (int, optional, defaults to 16) -- The number of heads to use for multi-head attention.
- **attention_head_dim** (int, optional, defaults to 72) -- The number of channels in each head.
- **in_channels** (int, defaults to 4) -- The number of channels in the input.
- **out_channels** (int, optional) --
  The number of channels in the output. Specify this parameter if the output channel number differs from the
  input.
- **num_layers** (int, optional, defaults to 28) -- The number of layers of Transformer blocks to use.
- **dropout** (float, optional, defaults to 0.0) -- The dropout probability to use within the Transformer blocks.
- **norm_num_groups** (int, optional, defaults to 32) --
  Number of groups for group normalization within Transformer blocks.
- **cross_attention_dim** (int, optional) --
  The dimensionality for cross-attention layers, typically matching the encoder's hidden dimension.
- **attention_bias** (bool, optional, defaults to True) --
  Configure if the Transformer blocks' attention should contain a bias parameter.
- **sample_size** (int, defaults to 128) --
  The width of the latent images. This parameter is fixed during training.
- **patch_size** (int, defaults to 2) --
  Size of the patches the model processes, relevant for architectures working on non-sequential data.
- **activation_fn** (str, optional, defaults to "gelu-approximate") --
  Activation function to use in feed-forward networks within Transformer blocks.
- **num_embeds_ada_norm** (int, optional, defaults to 1000) --
  Number of embeddings for AdaLayerNorm, fixed during training and affects the maximum denoising steps during
  inference.
- **upcast_attention** (bool, optional, defaults to False) --
  If true, upcasts the attention mechanism dimensions for potentially improved performance.
- **norm_type** (str, optional, defaults to "ada_norm_single") --
  Specifies the type of normalization used; only 'ada_norm_single' is supported.
- **norm_elementwise_affine** (bool, optional, defaults to False) --
  If true, enables element-wise affine parameters in the normalization layers.
- **norm_eps** (float, optional, defaults to 1e-6) --
  A small constant added to the denominator in normalization layers to prevent division by zero.
- **interpolation_scale** (int, optional) -- Scale factor to use during interpolating the position embeddings.
- **use_additional_conditions** (bool, optional) -- If we're using additional conditions as inputs.
- **attention_type** (str, optional, defaults to "default") -- Kind of attention mechanism to be used.
- **caption_channels** (int, optional, defaults to None) --
  Number of channels to use for projecting the caption embeddings.
- **use_linear_projection** (bool, optional, defaults to False) --
  Deprecated argument. Will be removed in a future version.
- **num_vector_embeds** (bool, optional, defaults to False) --
  Deprecated argument. Will be removed in a future version.</paramsdesc><paramgroups>0</paramgroups></docstring>

A 2D Transformer model as introduced in PixArt family of models (https://huggingface.co/papers/2310.00426,
https://huggingface.co/papers/2403.04692).
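
A minimal loading sketch, assuming the `PixArt-alpha/PixArt-XL-2-1024-MS` checkpoint with its transformer weights in a `transformer` subfolder.

```py
import torch
from diffusers import PixArtTransformer2DModel

transformer = PixArtTransformer2DModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", subfolder="transformer", torch_dtype=torch.float16
)
```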





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.PixArtTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/pixart_transformer_2d.py#L287</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "added_cond_kwargs", "val": ": typing.Dict[str, torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Dict[str, typing.Any] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.FloatTensor` of shape `(batch size, channel, height, width)`) --
  Input `hidden_states`.
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
  self-attention.
- **timestep** (`torch.LongTensor`, *optional*) --
  Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
- **added_cond_kwargs** (`Dict[str, Any]`, *optional*) -- Additional conditions to be used as inputs.
- **cross_attention_kwargs** ( `Dict[str, Any]`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **attention_mask** ( `torch.Tensor`, *optional*) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **encoder_attention_mask** ( `torch.Tensor`, *optional*) --
  Cross-attention mask applied to `encoder_hidden_states`. Two formats supported:

  * Mask `(batch, sequence_length)` True = keep, False = discard.
  * Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard.

  If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format
  above. This bias will be added to the cross-attention scores.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [PixArtTransformer2DModel](/docs/diffusers/main/en/api/models/pixart_transformer2d#diffusers.PixArtTransformer2DModel) forward method.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.PixArtTransformer2DModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/pixart_transformer_2d.py#L256</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.
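
A minimal sketch, assuming `transformer` is a loaded `PixArtTransformer2DModel`: fuse the projections for inference and undo the fusion afterwards.

```py
transformer.fuse_qkv_projections()
# ... run inference with the fused projections ...
transformer.unfuse_qkv_projections()
```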


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.PixArtTransformer2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/pixart_transformer_2d.py#L213</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.PixArtTransformer2DModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/pixart_transformer_2d.py#L247</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.

It is safe to just use `AttnProcessor()` because PixArt does not use any exotic attention processors in its default model.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.PixArtTransformer2DModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/pixart_transformer_2d.py#L278</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/pixart_transformer2d.md" />

### SD3 Transformer Model
https://huggingface.co/docs/diffusers/main/api/models/sd3_transformer2d.md

# SD3 Transformer Model

The Transformer model introduced in [Stable Diffusion 3](https://hf.co/papers/2403.03206). Its novelty lies in the MMDiT transformer block.

## SD3Transformer2DModel[[diffusers.SD3Transformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SD3Transformer2DModel</name><anchor>diffusers.SD3Transformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_sd3.py#L80</source><parameters>[{"name": "sample_size", "val": ": int = 128"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "num_layers", "val": ": int = 18"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "num_attention_heads", "val": ": int = 18"}, {"name": "joint_attention_dim", "val": ": int = 4096"}, {"name": "caption_projection_dim", "val": ": int = 1152"}, {"name": "pooled_projection_dim", "val": ": int = 2048"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "pos_embed_max_size", "val": ": int = 96"}, {"name": "dual_attention_layers", "val": ": typing.Tuple[int, ...] = ()"}, {"name": "qk_norm", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **sample_size** (`int`, defaults to `128`) --
  The width/height of the latents. This is fixed during training since it is used to learn a number of
  position embeddings.
- **patch_size** (`int`, defaults to `2`) --
  Patch size to turn the input data into small patches.
- **in_channels** (`int`, defaults to `16`) --
  The number of latent channels in the input.
- **num_layers** (`int`, defaults to `18`) --
  The number of layers of transformer blocks to use.
- **attention_head_dim** (`int`, defaults to `64`) --
  The number of channels in each head.
- **num_attention_heads** (`int`, defaults to `18`) --
  The number of heads to use for multi-head attention.
- **joint_attention_dim** (`int`, defaults to `4096`) --
  The embedding dimension to use for joint text-image attention.
- **caption_projection_dim** (`int`, defaults to `1152`) --
  The embedding dimension of caption embeddings.
- **pooled_projection_dim** (`int`, defaults to `2048`) --
  The embedding dimension of pooled text projections.
- **out_channels** (`int`, defaults to `16`) --
  The number of latent channels in the output.
- **pos_embed_max_size** (`int`, defaults to `96`) --
  The maximum latent height/width of positional embeddings.
- **dual_attention_layers** (`Tuple[int, ...]`, defaults to `()`) --
  The number of dual-stream transformer blocks to use.
- **qk_norm** (`str`, *optional*, defaults to `None`) --
  The normalization to use for query and key in the attention layer. If `None`, no normalization is used.</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in [Stable Diffusion 3](https://huggingface.co/papers/2403.03206).
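
A minimal loading sketch, assuming access to the gated `stabilityai/stable-diffusion-3-medium-diffusers` checkpoint with its transformer weights in a `transformer` subfolder.

```py
import torch
from diffusers import SD3Transformer2DModel

transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", subfolder="transformer", torch_dtype=torch.float16
)
```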





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_forward_chunking</name><anchor>diffusers.SD3Transformer2DModel.enable_forward_chunking</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_sd3.py#L176</source><parameters>[{"name": "chunk_size", "val": ": typing.Optional[int] = None"}, {"name": "dim", "val": ": int = 0"}]</parameters><paramsdesc>- **chunk_size** (`int`, *optional*) --
  The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually
  over each tensor of dim=`dim`.
- **dim** (`int`, *optional*, defaults to `0`) --
  The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch)
  or dim=1 (sequence length).</paramsdesc><paramgroups>0</paramgroups></docstring>

Enables [feed forward chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers), which
processes the feed-forward layers in smaller chunks to reduce peak memory usage.
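
A minimal sketch, assuming `transformer` is a loaded `SD3Transformer2DModel`: chunk the feed-forward computation over the sequence dimension to lower peak memory.

```py
transformer.enable_forward_chunking(chunk_size=1, dim=1)
```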




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.SD3Transformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_sd3.py#L309</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "pooled_projections", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "block_controlnet_hidden_states", "val": ": typing.List = None"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "skip_layers", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch size, channel, height, width)`) --
  Input `hidden_states`.
- **encoder_hidden_states** (`torch.Tensor` of shape `(batch size, sequence_len, embed_dims)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **pooled_projections** (`torch.Tensor` of shape `(batch_size, projection_dim)`) --
  Embeddings projected from the embeddings of input conditions.
- **timestep** (`torch.LongTensor`) --
  Used to indicate denoising step.
- **block_controlnet_hidden_states** (`list` of `torch.Tensor`) --
  A list of tensors that if specified are added to the residuals of transformer blocks.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.
- **skip_layers** (`list` of `int`, *optional*) --
  A list of layer indices to skip during the forward pass.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel) forward method.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.SD3Transformer2DModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_sd3.py#L278</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.SD3Transformer2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_sd3.py#L243</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.
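For illustration, a minimal sketch of both ways to call this method on a loaded SD3Transformer2DModel. The repo id is an assumption (any SD3 checkpoint with a `transformer` subfolder in the diffusers layout works) and may require accepting the model license:

```python
import torch
from diffusers import SD3Transformer2DModel
from diffusers.models.attention_processor import JointAttnProcessor2_0

# Repo id is illustrative; any SD3 checkpoint with a "transformer" subfolder works.
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", subfolder="transformer", torch_dtype=torch.float16
)

# Option 1: a single processor instance applied to all Attention layers.
transformer.set_attn_processor(JointAttnProcessor2_0())

# Option 2: a dict keyed by the attention processor path, e.g. for per-layer (trainable) processors.
transformer.set_attn_processor({name: JointAttnProcessor2_0() for name in transformer.attn_processors})
```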




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.SD3Transformer2DModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_sd3.py#L300</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.
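This method is typically paired with `fuse_qkv_projections()`. A minimal sketch, reusing the `transformer` loaded in the snippet above:

```python
transformer.fuse_qkv_projections()    # fuse the Q/K/V projections (also experimental)
# ... run inference ...
transformer.unfuse_qkv_projections()  # restore the separate Q/K/V projections
```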



</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/sd3_transformer2d.md" />

### CogView3PlusTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/cogview3plus_transformer2d.md

# CogView3PlusTransformer2DModel

A Diffusion Transformer model for 2D data from [CogView3Plus](https://github.com/THUDM/CogView3) was introduced in [CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion](https://huggingface.co/papers/2403.05121) by Tsinghua University & ZhipuAI.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import CogView3PlusTransformer2DModel

transformer = CogView3PlusTransformer2DModel.from_pretrained("THUDM/CogView3Plus-3b", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
```

## CogView3PlusTransformer2DModel[[diffusers.CogView3PlusTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogView3PlusTransformer2DModel</name><anchor>diffusers.CogView3PlusTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_cogview3plus.py#L128</source><parameters>[{"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "num_layers", "val": ": int = 30"}, {"name": "attention_head_dim", "val": ": int = 40"}, {"name": "num_attention_heads", "val": ": int = 64"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "text_embed_dim", "val": ": int = 4096"}, {"name": "time_embed_dim", "val": ": int = 512"}, {"name": "condition_dim", "val": ": int = 256"}, {"name": "pos_embed_max_size", "val": ": int = 128"}, {"name": "sample_size", "val": ": int = 128"}]</parameters><paramsdesc>- **patch_size** (`int`, defaults to `2`) --
  The size of the patches to use in the patch embedding layer.
- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **num_layers** (`int`, defaults to `30`) --
  The number of layers of Transformer blocks to use.
- **attention_head_dim** (`int`, defaults to `40`) --
  The number of channels in each head.
- **num_attention_heads** (`int`, defaults to `64`) --
  The number of heads to use for multi-head attention.
- **out_channels** (`int`, defaults to `16`) --
  The number of channels in the output.
- **text_embed_dim** (`int`, defaults to `4096`) --
  Input dimension of text embeddings from the text encoder.
- **time_embed_dim** (`int`, defaults to `512`) --
  Output dimension of timestep embeddings.
- **condition_dim** (`int`, defaults to `256`) --
  The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size,
  crop_coords).
- **pos_embed_max_size** (`int`, defaults to `128`) --
  The maximum resolution of the positional embeddings, from which slices of shape `H x W` are taken and added
  to input patched latents, where `H` and `W` are the latent height and width respectively. A value of 128
  means that the maximum supported height and width for image generation is `128 * vae_scale_factor *
  patch_size => 128 * 8 * 2 => 2048`.
- **sample_size** (`int`, defaults to `128`) --
  The base resolution of input latents. If height/width is not provided during generation, this value is used
  to determine the resolution as `sample_size * vae_scale_factor => 128 * 8 => 1024`</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in [CogView3: Finer and Faster Text-to-Image Generation via Relay
Diffusion](https://huggingface.co/papers/2403.05121).
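The resolution arithmetic described for `pos_embed_max_size` and `sample_size` can be checked directly; the VAE scale factor of 8 below is an assumption taken from the parameter descriptions above:

```python
pos_embed_max_size, sample_size, patch_size, vae_scale_factor = 128, 128, 2, 8

max_resolution = pos_embed_max_size * vae_scale_factor * patch_size  # 128 * 8 * 2 = 2048
base_resolution = sample_size * vae_scale_factor                     # 128 * 8 = 1024
```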





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.CogView3PlusTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_cogview3plus.py#L287</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "timestep", "val": ": LongTensor"}, {"name": "original_size", "val": ": Tensor"}, {"name": "target_size", "val": ": Tensor"}, {"name": "crop_coords", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor`) --
  Input `hidden_states` of shape `(batch size, channel, height, width)`.
- **encoder_hidden_states** (`torch.Tensor`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) of shape
  `(batch_size, sequence_len, text_embed_dim)`
- **timestep** (`torch.LongTensor`) --
  Used to indicate denoising step.
- **original_size** (`torch.Tensor`) --
  CogView3 uses SDXL-like micro-conditioning for original image size as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **target_size** (`torch.Tensor`) --
  CogView3 uses SDXL-like micro-conditioning for target image size as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **crop_coords** (`torch.Tensor`) --
  CogView3 uses SDXL-like micro-conditioning for crop coordinates as explained in section 2.2 of
  [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor` or `~models.transformer_2d.Transformer2DModelOutput`</rettype><retdesc>The denoised latents using provided inputs as conditioning.</retdesc></docstring>

The [CogView3PlusTransformer2DModel](/docs/diffusers/main/en/api/models/cogview3plus_transformer2d#diffusers.CogView3PlusTransformer2DModel) forward method.
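As a rough sketch, the forward pass can be exercised with dummy inputs whose shapes follow the parameter descriptions above; the shapes, sequence length, and timestep value are illustrative, and `transformer` is the model loaded at the top of this page:

```python
import torch

batch, latent_channels, height, width = 1, 16, 128, 128  # 128 latent px ~ 1024 image px
hidden_states = torch.randn(batch, latent_channels, height, width, dtype=torch.bfloat16, device="cuda")
encoder_hidden_states = torch.randn(batch, 16, 4096, dtype=torch.bfloat16, device="cuda")
timestep = torch.tensor([999], device="cuda")
original_size = torch.tensor([[1024, 1024]], dtype=torch.bfloat16, device="cuda")
target_size = torch.tensor([[1024, 1024]], dtype=torch.bfloat16, device="cuda")
crop_coords = torch.tensor([[0, 0]], dtype=torch.bfloat16, device="cuda")

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
        original_size=original_size,
        target_size=target_size,
        crop_coords=crop_coords,
    )
print(output.sample.shape)  # denoised latents with the same spatial shape as the input
```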








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.CogView3PlusTransformer2DModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_cogview3plus.py#L253</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/cogview3plus_transformer2d.md" />

### LuminaNextDiT2DModel
https://huggingface.co/docs/diffusers/main/api/models/lumina_nextdit2d.md

# LuminaNextDiT2DModel

The next-generation Diffusion Transformer (Next-DiT) model for 2D data from [Lumina-T2X](https://github.com/Alpha-VLLM/Lumina-T2X).
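The model can be loaded with a snippet like the following; the repo id is an assumption (any Lumina-Next checkpoint exported in the diffusers layout with a `transformer` subfolder works):

```python
import torch
from diffusers import LuminaNextDiT2DModel

transformer = LuminaNextDiT2DModel.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
```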

## LuminaNextDiT2DModel[[diffusers.LuminaNextDiT2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LuminaNextDiT2DModel</name><anchor>diffusers.LuminaNextDiT2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/lumina_nextdit2d.py#L178</source><parameters>[{"name": "sample_size", "val": ": int = 128"}, {"name": "patch_size", "val": ": typing.Optional[int] = 2"}, {"name": "in_channels", "val": ": typing.Optional[int] = 4"}, {"name": "hidden_size", "val": ": typing.Optional[int] = 2304"}, {"name": "num_layers", "val": ": typing.Optional[int] = 32"}, {"name": "num_attention_heads", "val": ": typing.Optional[int] = 32"}, {"name": "num_kv_heads", "val": ": typing.Optional[int] = None"}, {"name": "multiple_of", "val": ": typing.Optional[int] = 256"}, {"name": "ffn_dim_multiplier", "val": ": typing.Optional[float] = None"}, {"name": "norm_eps", "val": ": typing.Optional[float] = 1e-05"}, {"name": "learn_sigma", "val": ": typing.Optional[bool] = True"}, {"name": "qk_norm", "val": ": typing.Optional[bool] = True"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = 2048"}, {"name": "scaling_factor", "val": ": typing.Optional[float] = 1.0"}]</parameters><paramsdesc>- **sample_size** (`int`) -- The width of the latent images. This is fixed during training since
  it is used to learn a number of position embeddings.
- **patch_size** (`int`, *optional*, defaults to 2) --
  The size of each patch in the image. This parameter defines the resolution of patches fed into the model.
- **in_channels** (`int`, *optional*, defaults to 4) --
  The number of input channels for the model. Typically, this matches the number of channels in the input
  images.
- **hidden_size** (`int`, *optional*, defaults to 2304) --
  The dimensionality of the hidden layers in the model. This parameter determines the width of the model's
  hidden representations.
- **num_layers** (`int`, *optional*, defaults to 32) --
  The number of layers in the model. This defines the depth of the neural network.
- **num_attention_heads** (`int`, *optional*, defaults to 32) --
  The number of attention heads in each attention layer. This parameter specifies how many separate attention
  mechanisms are used.
- **num_kv_heads** (`int`, *optional*, defaults to `None`) --
  The number of key-value heads in the attention mechanism, if different from the number of attention heads.
  If None, it defaults to num_attention_heads.
- **multiple_of** (`int`, *optional*, defaults to 256) --
  A factor that the hidden size should be a multiple of. This can help optimize certain hardware
  configurations.
- **ffn_dim_multiplier** (`float`, *optional*) --
  A multiplier for the dimensionality of the feed-forward network. If None, it uses a default value based on
  the model configuration.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) --
  A small value added to the denominator for numerical stability in normalization layers.
- **learn_sigma** (`bool`, *optional*, defaults to True) --
  Whether the model should learn the sigma parameter, which might be related to uncertainty or variance in
  predictions.
- **qk_norm** (`bool`, *optional*, defaults to True) --
  Indicates if the queries and keys in the attention mechanism should be normalized.
- **cross_attention_dim** (`int`, *optional*, defaults to 2048) --
  The dimensionality of the text embeddings. This parameter defines the size of the text representations used
  in the model.
- **scaling_factor** (`float`, *optional*, defaults to 1.0) --
  A scaling factor applied to certain parameters or layers in the model. This can be used for adjusting the
  overall scale of the model's operations.</paramsdesc><paramgroups>0</paramgroups></docstring>

LuminaNextDiT: Diffusion model with a Transformer backbone.

Inherits from `ModelMixin` and `ConfigMixin` to be compatible with the samplers and pipelines in diffusers.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.LuminaNextDiT2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/lumina_nextdit2d.py#L291</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "timestep", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "encoder_mask", "val": ": Tensor"}, {"name": "image_rotary_emb", "val": ": Tensor"}, {"name": "cross_attention_kwargs", "val": ": typing.Dict[str, typing.Any] = None"}, {"name": "return_dict", "val": " = True"}]</parameters><paramsdesc>- **hidden_states** (torch.Tensor) -- Input tensor of shape (N, C, H, W).
- **timestep** (torch.Tensor) -- Tensor of diffusion timesteps of shape (N,).
- **encoder_hidden_states** (torch.Tensor) -- Tensor of caption features of shape (N, D).
- **encoder_mask** (torch.Tensor) -- Tensor of caption masks of shape (N, L).</paramsdesc><paramgroups>0</paramgroups></docstring>

Forward pass of LuminaNextDiT.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/lumina_nextdit2d.md" />

### ControlNetModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet.md

# ControlNetModel

The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

## Loading from the original format

By default the [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) should be loaded with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained), but it can also be loaded
from the original format using `FromOriginalModelMixin.from_single_file` as follows:

```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth"  # can also be a local path
controlnet = ControlNetModel.from_single_file(url)

url = "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors"  # can also be a local path
pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet)
```

## ControlNetModel[[diffusers.ControlNetModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ControlNetModel</name><anchor>diffusers.ControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L109</source><parameters>[{"name": "in_channels", "val": ": int = 4"}, {"name": "conditioning_channels", "val": ": int = 3"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "freq_shift", "val": ": int = 0"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D')"}, {"name": "mid_block_type", "val": ": typing.Optional[str] = 'UNetMidBlock2DCrossAttn'"}, {"name": "only_cross_attention", "val": ": typing.Union[bool, typing.Tuple[bool]] = False"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": int = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": typing.Optional[int] = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": int = 1280"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 1"}, {"name": "encoder_hid_dim", "val": ": typing.Optional[int] = None"}, {"name": "encoder_hid_dim_type", "val": ": typing.Optional[str] = None"}, {"name": "attention_head_dim", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 8"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...], NoneType] = None"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "class_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_time_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "num_class_embeds", "val": ": typing.Optional[int] = None"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "resnet_time_scale_shift", "val": ": str = 'default'"}, {"name": "projection_class_embeddings_input_dim", "val": ": typing.Optional[int] = None"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "global_pool_conditions", "val": ": bool = False"}, {"name": "addition_embed_type_num_heads", "val": ": int = 64"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to 4) --
  The number of channels in the input sample.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, defaults to 0) --
  The frequency shift to apply to the time embedding.
- **down_block_types** (`tuple[str]`, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`) --
  The tuple of downsample blocks to use.
- **only_cross_attention** (`Union[bool, Tuple[bool]]`, defaults to `False`) --
- **block_out_channels** (`tuple[int]`, defaults to `(320, 640, 1280, 1280)`) --
  The tuple of output channels for each block.
- **layers_per_block** (`int`, defaults to 2) --
  The number of layers per block.
- **downsample_padding** (`int`, defaults to 1) --
  The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, defaults to 1) --
  The scale factor to use for the mid block.
- **act_fn** (`str`, defaults to "silu") --
  The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) --
  The number of groups to use for the normalization. If None, normalization and activation layers is skipped
  in post-processing.
- **norm_eps** (`float`, defaults to 1e-5) --
  The epsilon to use for the normalization.
- **cross_attention_dim** (`int`, defaults to 1280) --
  The dimension of the cross attention features.
- **transformer_layers_per_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) --
  The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for
  `~models.unet_2d_blocks.CrossAttnDownBlock2D`, `~models.unet_2d_blocks.CrossAttnUpBlock2D`,
  `~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`.
- **encoder_hid_dim** (`int`, *optional*, defaults to None) --
  If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim`
  dimension to `cross_attention_dim`.
- **encoder_hid_dim_type** (`str`, *optional*, defaults to `None`) --
  If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text
  embeddings of dimension `cross_attention_dim` according to `encoder_hid_dim_type`.
- **attention_head_dim** (`Union[int, Tuple[int]]`, defaults to 8) --
  The dimension of the attention heads.
- **use_linear_projection** (`bool`, defaults to `False`) --
- **class_embed_type** (`str`, *optional*, defaults to `None`) --
  The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None,
  `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- **addition_embed_type** (`str`, *optional*, defaults to `None`) --
  Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or
  "text". "text" will use the `TextTimeEmbedding` layer.
- **num_class_embeds** (`int`, *optional*, defaults to `None`) --
  Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing
  class conditioning with `class_embed_type` equal to `None`.
- **upcast_attention** (`bool`, defaults to `False`) --
- **resnet_time_scale_shift** (`str`, defaults to `"default"`) --
  Time scale shift config for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **projection_class_embeddings_input_dim** (`int`, *optional*, defaults to `None`) --
  The dimension of the `class_labels` input when `class_embed_type="projection"`. Required when
  `class_embed_type="projection"`.
- **controlnet_conditioning_channel_order** (`str`, defaults to `"rgb"`) --
  The channel order of conditional image. Will convert to `rgb` if it's `bgr`.
- **conditioning_embedding_out_channels** (`tuple[int]`, *optional*, defaults to `(16, 32, 96, 256)`) --
  The tuple of output channel for each block in the `conditioning_embedding` layer.
- **global_pool_conditions** (`bool`, defaults to `False`) --
  TODO(Patrick) - unused parameter.
- **addition_embed_type_num_heads** (`int`, defaults to 64) --
  The number of heads to use for the `TextTimeEmbedding` layer.</paramsdesc><paramgroups>0</paramgroups></docstring>

A ControlNet model.
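A typical way to use the model is to load it with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) and pass it to a ControlNet pipeline; the repo ids below are illustrative:

```py
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
```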





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.ControlNetModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L660</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "controlnet_cond", "val": ": Tensor"}, {"name": "conditioning_scale", "val": ": float = 1.0"}, {"name": "class_labels", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "added_cond_kwargs", "val": ": typing.Optional[typing.Dict[str, torch.Tensor]] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "guess_mode", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor.
- **timestep** (`Union[torch.Tensor, float, int]`) --
  The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states.
- **controlnet_cond** (`torch.Tensor`) --
  The conditional input tensor, typically a conditioning image of shape `(batch_size, conditioning_channels, height, width)`.
- **conditioning_scale** (`float`, defaults to `1.0`) --
  The scale factor for ControlNet outputs.
- **class_labels** (`torch.Tensor`, *optional*, defaults to `None`) --
  Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
  Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the
  timestep_embedding passed through the `self.time_embedding` layer to obtain the final timestep
  embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **added_cond_kwargs** (`dict`) --
  Additional conditions for the Stable Diffusion XL UNet.
- **cross_attention_kwargs** (`dict[str]`, *optional*, defaults to `None`) --
  A kwargs dictionary that if specified is passed along to the `AttnProcessor`.
- **guess_mode** (`bool`, defaults to `False`) --
  In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if
  you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- **return_dict** (`bool`, defaults to `True`) --
  Whether or not to return a [ControlNetOutput](/docs/diffusers/main/en/api/models/controlnet#diffusers.models.controlnets.ControlNetOutput) instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[ControlNetOutput](/docs/diffusers/main/en/api/models/controlnet#diffusers.models.controlnets.ControlNetOutput) **or** `tuple`</rettype><retdesc>If `return_dict` is `True`, a [ControlNetOutput](/docs/diffusers/main/en/api/models/controlnet#diffusers.models.controlnets.ControlNetOutput) is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

The [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) forward method.
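The `guess_mode` argument described above is usually set at the pipeline level rather than by calling `forward()` directly. A minimal sketch, reusing the `pipe` built in the snippet above; the image URL and Canny thresholds are illustrative:

```py
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(image), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# An empty prompt works in guess mode; a guidance_scale between 3.0 and 5.0 is recommended.
result = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
```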








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_unet</name><anchor>diffusers.ControlNetModel.from_unet</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L442</source><parameters>[{"name": "unet", "val": ": UNet2DConditionModel"}, {"name": "controlnet_conditioning_channel_order", "val": ": str = 'rgb'"}, {"name": "conditioning_embedding_out_channels", "val": ": typing.Optional[typing.Tuple[int, ...]] = (16, 32, 96, 256)"}, {"name": "load_weights_from_unet", "val": ": bool = True"}, {"name": "conditioning_channels", "val": ": int = 3"}]</parameters><paramsdesc>- **unet** (`UNet2DConditionModel`) --
  The UNet model weights to copy to the [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel). All configuration options are also copied
  where applicable.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) from [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel).
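A minimal sketch of initializing a ControlNet from an existing UNet, for example as the starting point for training a new ControlNet (repo id illustrative):

```py
from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet")
controlnet = ControlNetModel.from_unet(unet)
```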




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attention_slice</name><anchor>diffusers.ControlNetModel.set_attention_slice</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L595</source><parameters>[{"name": "slice_size", "val": ": typing.Union[str, int, typing.List[int]]"}]</parameters><paramsdesc>- **slice_size** (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`) --
  When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If
  `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is
  provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
  must be a multiple of `slice_size`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.
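For example, on a loaded `controlnet` (from the snippets above):

```py
controlnet.set_attention_slice("auto")  # compute attention in two steps
controlnet.set_attention_slice(2)       # or use attention_head_dim // 2 slices
```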




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.ControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L544</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.ControlNetModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L579</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.
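For example, to revert to the default processors after installing custom ones (reusing `controlnet` from the snippets above):

```py
from diffusers.models.attention_processor import AttnProcessor2_0

controlnet.set_attn_processor(AttnProcessor2_0())  # install an explicit processor
controlnet.set_default_attn_processor()            # revert to the library default
```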


</div></div>

## ControlNetOutput[[diffusers.models.controlnets.ControlNetOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.controlnets.ControlNetOutput</name><anchor>diffusers.models.controlnets.ControlNetOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet.py#L45</source><parameters>[{"name": "down_block_res_samples", "val": ": typing.Tuple[torch.Tensor]"}, {"name": "mid_block_res_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **down_block_res_samples** (`tuple[torch.Tensor]`) --
  A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should
  be of shape `(batch_size, channel * resolution, height // resolution, width // resolution)`. Output can be
  used to condition the original UNet's downsampling activations.
- **mid_block_res_sample** (`torch.Tensor`) --
  The activation of the middle block (the lowest sample resolution). Each tensor should be of shape
  `(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution)`.
  Output can be used to condition the original UNet's middle block activation.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel).
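A sketch of how these outputs are consumed inside a denoising loop: the tuple returned by the ControlNet is passed to the UNet as additional residuals. Shapes follow SD 1.5 conventions and are illustrative; `unet` and `controlnet` are the models from the `from_unet` snippet above:

```py
import torch

latents = torch.randn(1, 4, 64, 64)       # noisy latent sample
timestep = torch.tensor([999])
prompt_embeds = torch.randn(1, 77, 768)    # text encoder hidden states
cond_image = torch.randn(1, 3, 512, 512)   # conditioning image (e.g. Canny edges)

with torch.no_grad():
    down_samples, mid_sample = controlnet(
        sample=latents,
        timestep=timestep,
        encoder_hidden_states=prompt_embeds,
        controlnet_cond=cond_image,
        return_dict=False,
    )
    noise_pred = unet(
        latents,
        timestep,
        encoder_hidden_states=prompt_embeds,
        down_block_additional_residuals=down_samples,
        mid_block_additional_residual=mid_sample,
    ).sample
```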




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet.md" />

### OmniGenTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/omnigen_transformer.md

# OmniGenTransformer2DModel

A Transformer model that accepts multimodal instructions to generate images for [OmniGen](https://github.com/VectorSpaceLab/OmniGen/).

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language  generation tasks and revolutionized human-machine interaction.  However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism.  This work represents the first attempt at a general-purpose image generation model,  and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

```python
import torch
from diffusers import OmniGenTransformer2DModel

transformer = OmniGenTransformer2DModel.from_pretrained("Shitao/OmniGen-v1-diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## OmniGenTransformer2DModel[[diffusers.OmniGenTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.OmniGenTransformer2DModel</name><anchor>diffusers.OmniGenTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_omnigen.py#L284</source><parameters>[{"name": "in_channels", "val": ": int = 4"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "hidden_size", "val": ": int = 3072"}, {"name": "rms_norm_eps", "val": ": float = 1e-05"}, {"name": "num_attention_heads", "val": ": int = 32"}, {"name": "num_key_value_heads", "val": ": int = 32"}, {"name": "intermediate_size", "val": ": int = 8192"}, {"name": "num_layers", "val": ": int = 32"}, {"name": "pad_token_id", "val": ": int = 32000"}, {"name": "vocab_size", "val": ": int = 32064"}, {"name": "max_position_embeddings", "val": ": int = 131072"}, {"name": "original_max_position_embeddings", "val": ": int = 4096"}, {"name": "rope_base", "val": ": int = 10000"}, {"name": "rope_scaling", "val": ": typing.Dict = None"}, {"name": "pos_embed_max_size", "val": ": int = 192"}, {"name": "time_step_dim", "val": ": int = 256"}, {"name": "flip_sin_to_cos", "val": ": bool = True"}, {"name": "downscale_freq_shift", "val": ": int = 0"}, {"name": "timestep_activation_fn", "val": ": str = 'silu'"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `4`) --
  The number of channels in the input.
- **patch_size** (`int`, defaults to `2`) --
  The size of the spatial patches to use in the patch embedding layer.
- **hidden_size** (`int`, defaults to `3072`) --
  The dimensionality of the hidden layers in the model.
- **rms_norm_eps** (`float`, defaults to `1e-5`) --
  Eps for RMSNorm layer.
- **num_attention_heads** (`int`, defaults to `32`) --
  The number of heads to use for multi-head attention.
- **num_key_value_heads** (`int`, defaults to `32`) --
  The number of heads to use for keys and values in multi-head attention.
- **intermediate_size** (`int`, defaults to `8192`) --
  Dimension of the hidden layer in FeedForward layers.
- **num_layers** (`int`, defaults to `32`) --
  The number of layers of transformer blocks to use.
- **pad_token_id** (`int`, defaults to `32000`) --
  The id of the padding token.
- **vocab_size** (`int`, defaults to `32064`) --
  The size of the embedding vocabulary.
- **rope_base** (`int`, defaults to `10000`) --
  The default theta value to use when creating RoPE.
- **rope_scaling** (`Dict`, *optional*) --
  The scaling factors for the RoPE. Must contain `short_factor` and `long_factor`.
- **pos_embed_max_size** (`int`, defaults to `192`) --
  The maximum size of the positional embeddings.
- **time_step_dim** (`int`, defaults to `256`) --
  Output dimension of timestep embeddings.
- **flip_sin_to_cos** (`bool`, defaults to `True`) --
  Whether to flip the sin and cos in the positional embeddings when preparing timestep embeddings.
- **downscale_freq_shift** (`int`, defaults to `0`) --
  The frequency shift to use when downscaling the timestep embeddings.
- **timestep_activation_fn** (`str`, defaults to `silu`) --
  The activation function to use for the timestep embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in [OmniGen](https://huggingface.co/papers/2409.11340).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/omnigen_transformer.md" />

### AutoencoderKLLTXVideo
https://huggingface.co/docs/diffusers/main/api/models/autoencoderkl_ltx_video.md

# AutoencoderKLLTXVideo

The 3D variational autoencoder (VAE) model with KL loss used in [LTX](https://huggingface.co/Lightricks/LTX-Video) was introduced by Lightricks.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLLTXVideo

vae = AutoencoderKLLTXVideo.from_pretrained("Lightricks/LTX-Video", subfolder="vae", torch_dtype=torch.float32).to("cuda")
```

## AutoencoderKLLTXVideo[[diffusers.AutoencoderKLLTXVideo]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLLTXVideo</name><anchor>diffusers.AutoencoderKLLTXVideo</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1037</source><parameters>[{"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "latent_channels", "val": ": int = 128"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512)"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('LTXVideoDownBlock3D', 'LTXVideoDownBlock3D', 'LTXVideoDownBlock3D', 'LTXVideoDownBlock3D')"}, {"name": "decoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512)"}, {"name": "layers_per_block", "val": ": typing.Tuple[int, ...] = (4, 3, 3, 3, 4)"}, {"name": "decoder_layers_per_block", "val": ": typing.Tuple[int, ...] = (4, 3, 3, 3, 4)"}, {"name": "spatio_temporal_scaling", "val": ": typing.Tuple[bool, ...] = (True, True, True, False)"}, {"name": "decoder_spatio_temporal_scaling", "val": ": typing.Tuple[bool, ...] = (True, True, True, False)"}, {"name": "decoder_inject_noise", "val": ": typing.Tuple[bool, ...] = (False, False, False, False, False)"}, {"name": "downsample_type", "val": ": typing.Tuple[str, ...] = ('conv', 'conv', 'conv', 'conv')"}, {"name": "upsample_residual", "val": ": typing.Tuple[bool, ...] = (False, False, False, False)"}, {"name": "upsample_factor", "val": ": typing.Tuple[int, ...] = (1, 1, 1, 1)"}, {"name": "timestep_conditioning", "val": ": bool = False"}, {"name": "patch_size", "val": ": int = 4"}, {"name": "patch_size_t", "val": ": int = 1"}, {"name": "resnet_norm_eps", "val": ": float = 1e-06"}, {"name": "scaling_factor", "val": ": float = 1.0"}, {"name": "encoder_causal", "val": ": bool = True"}, {"name": "decoder_causal", "val": ": bool = False"}, {"name": "spatial_compression_ratio", "val": ": int = None"}, {"name": "temporal_compression_ratio", "val": ": int = None"}]</parameters><paramsdesc>- **in_channels** (`int`, defaults to `3`) --
  Number of input channels.
- **out_channels** (`int`, defaults to `3`) --
  Number of output channels.
- **latent_channels** (`int`, defaults to `128`) --
  Number of latent channels.
- **block_out_channels** (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512)`) --
  The number of output channels for each block.
- **spatio_temporal_scaling** (`Tuple[bool, ...]`, defaults to `(True, True, True, False)`) --
  Whether a block should contain spatio-temporal downscaling or not.
- **layers_per_block** (`Tuple[int, ...]`, defaults to `(4, 3, 3, 3, 4)`) --
  The number of layers per block.
- **patch_size** (`int`, defaults to `4`) --
  The size of spatial patches.
- **patch_size_t** (`int`, defaults to `1`) --
  The size of temporal patches.
- **resnet_norm_eps** (`float`, defaults to `1e-6`) --
  Epsilon value for ResNet normalization layers.
- **scaling_factor** (`float`, *optional*, defaults to `1.0`) --
  The component-wise standard deviation of the trained latent space computed using the first batch of the
  training set. This is used to scale the latent space to have unit variance when training the diffusion
  model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
  diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
  / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
  Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
- **encoder_causal** (`bool`, defaults to `True`) --
  Whether the encoder should behave causally (future frames depend only on past frames) or not.
- **decoder_causal** (`bool`, defaults to `False`) --
  Whether the decoder should behave causally (future frames depend only on past frames) or not.</paramsdesc><paramgroups>0</paramgroups></docstring>

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Used in
[LTX](https://huggingface.co/Lightricks/LTX-Video).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLLTXVideo.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLLTXVideo.encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLLTXVideo.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1236</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLLTXVideo.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1222</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLLTXVideo.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1229</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLLTXVideo.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1188</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_num_frames", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_num_frames", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are
  no tiling artifacts produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
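
For finer control, the tile sizes can also be set explicitly. A minimal sketch; the values below are illustrative, not tuned defaults:

```python
import torch
from diffusers import AutoencoderKLLTXVideo

vae = AutoencoderKLLTXVideo.from_pretrained(
    "Lightricks/LTX-Video", subfolder="vae", torch_dtype=torch.float32
)

# Smaller minimum tile sizes lower peak memory; the strides control how much
# consecutive tiles overlap (more overlap reduces visible seams).
vae.enable_tiling(
    tile_sample_min_height=512,
    tile_sample_min_width=512,
    tile_sample_stride_height=448,
    tile_sample_stride_width=448,
)
```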




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKLLTXVideo.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1412</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "temb", "val": ": typing.Optional[torch.Tensor]"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of images using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKLLTXVideo.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_ltx.py#L1361</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of videos.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The latent representation of the encoded videos.</retdesc></docstring>
Encode a batch of videos using a tiled encoder.








</div></div>

## AutoencoderKLOutput[[diffusers.models.modeling_outputs.AutoencoderKLOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.AutoencoderKLOutput</name><anchor>diffusers.models.modeling_outputs.AutoencoderKLOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L7</source><parameters>[{"name": "latent_dist", "val": ": DiagonalGaussianDistribution"}]</parameters><paramsdesc>- **latent_dist** (`DiagonalGaussianDistribution`) --
  Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
  `DiagonalGaussianDistribution` allows for sampling latents from the distribution.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of AutoencoderKL encoding method.




</div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoderkl_ltx_video.md" />

### SkyReelsV2Transformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/skyreels_v2_transformer_3d.md

# SkyReelsV2Transformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [SkyReels-V2](https://github.com/SkyworkAI/SkyReels-V2) by Skywork AI.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import SkyReelsV2Transformer3DModel

transformer = SkyReelsV2Transformer3DModel.from_pretrained("Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
```
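
The transformer is usually consumed through a pipeline. A minimal sketch, assuming the `SkyReelsV2DiffusionForcingPipeline` class that pairs with the DF checkpoints in recent Diffusers releases:

```python
import torch
from diffusers import SkyReelsV2DiffusionForcingPipeline, SkyReelsV2Transformer3DModel

transformer = SkyReelsV2Transformer3DModel.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Hand the (possibly fine-tuned) transformer to the pipeline instead of the stock one.
pipe = SkyReelsV2DiffusionForcingPipeline.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
```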

## SkyReelsV2Transformer3DModel[[diffusers.SkyReelsV2Transformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SkyReelsV2Transformer3DModel</name><anchor>diffusers.SkyReelsV2Transformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_skyreels_v2.py#L518</source><parameters>[{"name": "patch_size", "val": ": typing.Tuple[int] = (1, 2, 2)"}, {"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "text_dim", "val": ": int = 4096"}, {"name": "freq_dim", "val": ": int = 256"}, {"name": "ffn_dim", "val": ": int = 8192"}, {"name": "num_layers", "val": ": int = 32"}, {"name": "cross_attn_norm", "val": ": bool = True"}, {"name": "qk_norm", "val": ": typing.Optional[str] = 'rms_norm_across_heads'"}, {"name": "eps", "val": ": float = 1e-06"}, {"name": "image_dim", "val": ": typing.Optional[int] = None"}, {"name": "added_kv_proj_dim", "val": ": typing.Optional[int] = None"}, {"name": "rope_max_seq_len", "val": ": int = 1024"}, {"name": "pos_embed_seq_len", "val": ": typing.Optional[int] = None"}, {"name": "inject_sample_info", "val": ": bool = False"}, {"name": "num_frame_per_block", "val": ": int = 1"}]</parameters><paramsdesc>- **patch_size** (`Tuple[int]`, defaults to `(1, 2, 2)`) --
  3D patch dimensions for video embedding (t_patch, h_patch, w_patch).
- **num_attention_heads** (`int`, defaults to `16`) --
  The number of attention heads to use.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of channels in each head.
- **in_channels** (`int`, defaults to `16`) --
  The number of channels in the input.
- **out_channels** (`int`, defaults to `16`) --
  The number of channels in the output.
- **text_dim** (`int`, defaults to `4096`) --
  Input dimension for text embeddings.
- **freq_dim** (`int`, defaults to `256`) --
  Dimension for sinusoidal time embeddings.
- **ffn_dim** (`int`, defaults to `8192`) --
  Intermediate dimension in feed-forward network.
- **num_layers** (`int`, defaults to `32`) --
  The number of layers of transformer blocks to use.
- **window_size** (`Tuple[int]`, defaults to `(-1, -1)`) --
  Window size for local attention (-1 indicates global attention).
- **cross_attn_norm** (`bool`, defaults to `True`) --
  Enable cross-attention normalization.
- **qk_norm** (`str`, *optional*, defaults to `"rms_norm_across_heads"`) --
  Enable query/key normalization.
- **eps** (`float`, defaults to `1e-6`) --
  Epsilon value for normalization layers.
- **inject_sample_info** (`bool`, defaults to `False`) --
  Whether to inject sample information into the model.
- **image_dim** (`int`, *optional*) --
  The dimension of the image embeddings.
- **added_kv_proj_dim** (`int`, *optional*) --
  The dimension of the added key/value projection.
- **rope_max_seq_len** (`int`, defaults to `1024`) --
  The maximum sequence length for the rotary embeddings.
- **pos_embed_seq_len** (`int`, *optional*) --
  The sequence length for the positional embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

A Transformer model for video-like data used in the Wan-based SkyReels-V2 model.




</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/skyreels_v2_transformer_3d.md" />

### FluxTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/flux_transformer.md

# FluxTransformer2DModel

A Transformer model for image-like data from [Flux](https://blackforestlabs.ai/announcing-black-forest-labs/).
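
The model can be loaded with a snippet like the following; `black-forest-labs/FLUX.1-dev` is one example checkpoint (assumed here), and any Flux-style repository with a `transformer` subfolder works the same way.

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
```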

## FluxTransformer2DModel[[diffusers.FluxTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FluxTransformer2DModel</name><anchor>diffusers.FluxTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_flux.py#L525</source><parameters>[{"name": "patch_size", "val": ": int = 1"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 19"}, {"name": "num_single_layers", "val": ": int = 38"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "joint_attention_dim", "val": ": int = 4096"}, {"name": "pooled_projection_dim", "val": ": int = 768"}, {"name": "guidance_embeds", "val": ": bool = False"}, {"name": "axes_dims_rope", "val": ": typing.Tuple[int, int, int] = (16, 56, 56)"}]</parameters><paramsdesc>- **patch_size** (`int`, defaults to `1`) --
  Patch size to turn the input data into small patches.
- **in_channels** (`int`, defaults to `64`) --
  The number of channels in the input.
- **out_channels** (`int`, *optional*, defaults to `None`) --
  The number of channels in the output. If not specified, it defaults to `in_channels`.
- **num_layers** (`int`, defaults to `19`) --
  The number of layers of dual stream DiT blocks to use.
- **num_single_layers** (`int`, defaults to `38`) --
  The number of layers of single stream DiT blocks to use.
- **attention_head_dim** (`int`, defaults to `128`) --
  The number of dimensions to use for each attention head.
- **num_attention_heads** (`int`, defaults to `24`) --
  The number of attention heads to use.
- **joint_attention_dim** (`int`, defaults to `4096`) --
  The number of dimensions to use for the joint attention (embedding/channel dimension of
  `encoder_hidden_states`).
- **pooled_projection_dim** (`int`, defaults to `768`) --
  The number of dimensions to use for the pooled projection.
- **guidance_embeds** (`bool`, defaults to `False`) --
  Whether to use guidance embeddings for guidance-distilled variant of the model.
- **axes_dims_rope** (`Tuple[int]`, defaults to `(16, 56, 56)`) --
  The dimensions to use for the rotary positional embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in Flux.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.FluxTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_flux.py#L637</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "pooled_projections", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "img_ids", "val": ": Tensor = None"}, {"name": "txt_ids", "val": ": Tensor = None"}, {"name": "guidance", "val": ": Tensor = None"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "controlnet_block_samples", "val": " = None"}, {"name": "controlnet_single_block_samples", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "controlnet_blocks_repeat", "val": ": bool = False"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch_size, image_sequence_length, in_channels)`) --
  Input `hidden_states`.
- **encoder_hidden_states** (`torch.Tensor` of shape `(batch_size, text_sequence_length, joint_attention_dim)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **pooled_projections** (`torch.Tensor` of shape `(batch_size, projection_dim)`) -- Embeddings projected
  from the embeddings of input conditions.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **block_controlnet_hidden_states** (`list` of `torch.Tensor`) --
  A list of tensors that if specified are added to the residuals of transformer blocks.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, an `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel) forward method.






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/flux_transformer.md" />

### AutoencoderKLWan
https://huggingface.co/docs/diffusers/main/api/models/autoencoder_kl_wan.md

# AutoencoderKLWan

The 3D variational autoencoder (VAE) model with KL loss used in [Wan 2.1](https://github.com/Wan-Video/Wan2.1) by the Alibaba Wan Team.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
```
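
The config stores per-channel `latents_mean` and `latents_std` statistics. A minimal sketch of how encoded latents are typically normalized with them (the input tensor shape and sizes below are illustrative):

```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)

# Illustrative input clip, laid out as (batch, channels, frames, height, width).
video = torch.randn(1, 3, 5, 256, 256, dtype=torch.float32)

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()

# Per-channel statistics from the config, broadcast over (batch, frames, height, width).
mean = torch.tensor(vae.config.latents_mean).view(1, -1, 1, 1, 1)
std = torch.tensor(vae.config.latents_std).view(1, -1, 1, 1, 1)
normalized_latents = (latents - mean) / std
```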

## AutoencoderKLWan[[diffusers.AutoencoderKLWan]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoencoderKLWan</name><anchor>diffusers.AutoencoderKLWan</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L954</source><parameters>[{"name": "base_dim", "val": ": int = 96"}, {"name": "decoder_base_dim", "val": ": typing.Optional[int] = None"}, {"name": "z_dim", "val": ": int = 16"}, {"name": "dim_mult", "val": ": typing.Tuple[int] = [1, 2, 4, 4]"}, {"name": "num_res_blocks", "val": ": int = 2"}, {"name": "attn_scales", "val": ": typing.List[float] = []"}, {"name": "temperal_downsample", "val": ": typing.List[bool] = [False, True, True]"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "latents_mean", "val": ": typing.List[float] = [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921]"}, {"name": "latents_std", "val": ": typing.List[float] = [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916]"}, {"name": "is_residual", "val": ": bool = False"}, {"name": "in_channels", "val": ": int = 3"}, {"name": "out_channels", "val": ": int = 3"}, {"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "scale_factor_temporal", "val": ": typing.Optional[int] = 4"}, {"name": "scale_factor_spatial", "val": ": typing.Optional[int] = 8"}]</parameters></docstring>

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos.
Introduced in [Wan 2.1](https://github.com/Wan-Video/Wan2.1).

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.AutoencoderKLWan.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.AutoencoderKLWan.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1127</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.AutoencoderKLWan.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1113</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.AutoencoderKLWan.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1120</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.AutoencoderKLWan.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1083</source><parameters>[{"name": "tile_sample_min_height", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_min_width", "val": ": typing.Optional[int] = None"}, {"name": "tile_sample_stride_height", "val": ": typing.Optional[float] = None"}, {"name": "tile_sample_stride_width", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **tile_sample_min_height** (`int`, *optional*) --
  The minimum height required for a sample to be separated into tiles across the height dimension.
- **tile_sample_min_width** (`int`, *optional*) --
  The minimum width required for a sample to be separated into tiles across the width dimension.
- **tile_sample_stride_height** (`int`, *optional*) --
  The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are
  no tiling artifacts produced across the height dimension.
- **tile_sample_stride_width** (`int`, *optional*) --
  The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling
  artifacts produced across the width dimension.</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.AutoencoderKLWan.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1399</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups></docstring>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_decode</name><anchor>diffusers.AutoencoderKLWan.tiled_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1336</source><parameters>[{"name": "z", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **z** (`torch.Tensor`) -- Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.vae.DecoderOutput` instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`~models.vae.DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `~models.vae.DecoderOutput` is returned, otherwise a plain `tuple` is
returned.</retdesc></docstring>

Decode a batch of images using a tiled decoder.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.AutoencoderKLWan.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_wan.py#L1270</source><parameters>[{"name": "x", "val": ": Tensor"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of videos.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The latent representation of the encoded videos.</retdesc></docstring>
Encode a batch of videos using a tiled encoder.








</div></div>

## DecoderOutput[[diffusers.models.autoencoders.vae.DecoderOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.autoencoders.vae.DecoderOutput</name><anchor>diffusers.models.autoencoders.vae.DecoderOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/vae.py#L47</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "commit_loss", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`) --
  The decoded output sample from the last layer of the model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output of decoding method.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/autoencoder_kl_wan.md" />

### UNetMotionModel
https://huggingface.co/docs/diffusers/main/api/models/unet-motion.md

# UNetMotionModel

The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model.

The abstract from the paper is:

*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*

## UNetMotionModel[[diffusers.UNetMotionModel]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UNetMotionModel</name><anchor>diffusers.UNetMotionModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1198</source><parameters>[{"name": "sample_size", "val": ": typing.Optional[int] = None"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": int = 4"}, {"name": "down_block_types", "val": ": typing.Tuple[str, ...] = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion')"}, {"name": "up_block_types", "val": ": typing.Tuple[str, ...] = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion')"}, {"name": "block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1280, 1280)"}, {"name": "layers_per_block", "val": ": typing.Union[int, typing.Tuple[int]] = 2"}, {"name": "downsample_padding", "val": ": int = 1"}, {"name": "mid_block_scale_factor", "val": ": float = 1"}, {"name": "act_fn", "val": ": str = 'silu'"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "cross_attention_dim", "val": ": int = 1280"}, {"name": "transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple]] = 1"}, {"name": "reverse_transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple], NoneType] = None"}, {"name": "temporal_transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple]] = 1"}, {"name": "reverse_temporal_transformer_layers_per_block", "val": ": typing.Union[int, typing.Tuple[int], typing.Tuple[typing.Tuple], NoneType] = None"}, {"name": "transformer_layers_per_mid_block", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = None"}, {"name": "temporal_transformer_layers_per_mid_block", "val": ": typing.Union[int, typing.Tuple[int], NoneType] = 1"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 8"}, {"name": "motion_max_seq_length", "val": ": int = 32"}, {"name": "motion_num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...]] = 8"}, {"name": "reverse_motion_num_attention_heads", "val": ": typing.Union[int, typing.Tuple[int, ...], typing.Tuple[typing.Tuple[int, ...], ...], NoneType] = None"}, {"name": "use_motion_mid_block", "val": ": bool = True"}, {"name": "mid_block_layers", "val": ": int = 1"}, {"name": "encoder_hid_dim", "val": ": typing.Optional[int] = None"}, {"name": "encoder_hid_dim_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_embed_type", "val": ": typing.Optional[str] = None"}, {"name": "addition_time_embed_dim", "val": ": typing.Optional[int] = None"}, {"name": "projection_class_embeddings_input_dim", "val": ": typing.Optional[int] = None"}, {"name": "time_cond_proj_dim", "val": ": typing.Optional[int] = None"}]</parameters></docstring>

A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a
sample shaped output.

This model inherits from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). Check the superclass documentation for its generic methods implemented
for all models (such as downloading or saving).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_freeu</name><anchor>diffusers.UNetMotionModel.disable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1899</source><parameters>[]</parameters></docstring>
Disables the FreeU mechanism.

</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_forward_chunking</name><anchor>diffusers.UNetMotionModel.enable_forward_chunking</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1817</source><parameters>[{"name": "chunk_size", "val": ": typing.Optional[int] = None"}, {"name": "dim", "val": ": int = 0"}]</parameters><paramsdesc>- **chunk_size** (`int`, *optional*) --
  The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually
  over each tensor of dim=`dim`.
- **dim** (`int`, *optional*, defaults to `0`) --
  The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch)
  or dim=1 (sequence length).</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use [feed forward
chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_freeu</name><anchor>diffusers.UNetMotionModel.enable_freeu</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1874</source><parameters>[{"name": "s1", "val": ": float"}, {"name": "s2", "val": ": float"}, {"name": "b1", "val": ": float"}, {"name": "b2", "val": ": float"}]</parameters><paramsdesc>- **s1** (`float`) --
  Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **s2** (`float`) --
  Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
  mitigate the "oversmoothing effect" in the enhanced denoising process.
- **b1** (`float`) -- Scaling factor for stage 1 to amplify the contributions of backbone features.
- **b2** (`float`) -- Scaling factor for stage 2 to amplify the contributions of backbone features.</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables the FreeU mechanism from https://huggingface.co/papers/2309.11497.

The suffixes after the scaling factors represent the stage blocks where they are being applied.

Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of values that
are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
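
As an illustrative sketch, FreeU can be toggled directly on a motion UNet. The scaling factors below are the ones commonly cited for Stable Diffusion v1.5-style backbones; treat them as a starting point, not a guarantee.

```python
from diffusers import UNetMotionModel

# Default-sized model purely for illustration; in practice you would typically use the
# UNet from an AnimateDiff pipeline (e.g. `pipe.unet`) rather than building one from scratch.
unet = UNetMotionModel()

# Values often reported for Stable Diffusion v1.5 backbones (an assumption, not a tuned recommendation).
unet.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# Feed-forward chunking (documented above) trades speed for lower peak memory.
unet.enable_forward_chunking(chunk_size=1, dim=1)

# FreeU can be switched off again at any time.
unet.disable_freeu()
```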




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.UNetMotionModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1939</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[torch.Tensor, float, int]"}, {"name": "encoder_hidden_states", "val": ": Tensor"}, {"name": "timestep_cond", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "added_cond_kwargs", "val": ": typing.Optional[typing.Dict[str, torch.Tensor]] = None"}, {"name": "down_block_additional_residuals", "val": ": typing.Optional[typing.Tuple[torch.Tensor]] = None"}, {"name": "mid_block_additional_residual", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The noisy input tensor with the following shape `(batch, num_frames, channel, height, width)`.
- **timestep** (`torch.Tensor` or `float` or `int`) -- The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.Tensor`) --
  The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
- **timestep_cond** (`torch.Tensor`, *optional*, defaults to `None`) --
  Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed
  through the `self.time_embedding` layer to obtain the timestep embeddings.
- **attention_mask** (`torch.Tensor`, *optional*, defaults to `None`) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **cross_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **down_block_additional_residuals** (`tuple` of `torch.Tensor`, *optional*) --
  A tuple of tensors that if specified are added to the residuals of down unet blocks.
- **mid_block_additional_residual** (`torch.Tensor`, *optional*) --
  A tensor that if specified is added to the residual of the middle unet block.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `UNetMotionOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`UNetMotionOutput` or `tuple`</rettype><retdesc>If `return_dict` is True, an `UNetMotionOutput` is returned,
otherwise a `tuple` is returned where the first element is the sample tensor.</retdesc></docstring>

The [UNetMotionModel](/docs/diffusers/main/en/api/models/unet-motion#diffusers.UNetMotionModel) forward method.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>freeze_unet2d_params</name><anchor>diffusers.UNetMotionModel.freeze_unet2d_params</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1688</source><parameters>[]</parameters></docstring>
Freeze the weights of just the UNet2DConditionModel and leave the motion modules unfrozen for fine-tuning.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.UNetMotionModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1908</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING] > This API is 🧪 experimental.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.UNetMotionModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1783</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.UNetMotionModel.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1858</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.UNetMotionModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_motion_model.py#L1930</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING] > This API is 🧪 experimental.



</div></div>

## UNet3DConditionOutput[[diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput</name><anchor>diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_3d_condition.py#L49</source><parameters>[{"name": "sample", "val": ": Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, num_frames, height, width)`) --
  The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [UNet3DConditionModel](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.UNet3DConditionModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/unet-motion.md" />

### AllegroTransformer3DModel
https://huggingface.co/docs/diffusers/main/api/models/allegro_transformer3d.md

# AllegroTransformer3DModel

A Diffusion Transformer model for 3D data from [Allegro](https://github.com/rhymes-ai/Allegro) was introduced in [Allegro: Open the Black Box of Commercial-Level Video Generation Model](https://huggingface.co/papers/2410.15458) by RhymesAI.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AllegroTransformer3DModel

transformer = AllegroTransformer3DModel.from_pretrained("rhymes-ai/Allegro", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
```
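
A minimal sketch of handing a separately loaded transformer to `AllegroPipeline`, using the standard Diffusers pattern of passing components to `from_pretrained`:

```python
import torch
from diffusers import AllegroPipeline, AllegroTransformer3DModel

transformer = AllegroTransformer3DModel.from_pretrained(
    "rhymes-ai/Allegro", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Pass the transformer explicitly so the pipeline uses it instead of loading its own copy.
pipe = AllegroPipeline.from_pretrained(
    "rhymes-ai/Allegro", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
```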

## AllegroTransformer3DModel[[diffusers.AllegroTransformer3DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AllegroTransformer3DModel</name><anchor>diffusers.AllegroTransformer3DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_allegro.py#L176</source><parameters>[{"name": "patch_size", "val": ": int = 2"}, {"name": "patch_size_t", "val": ": int = 1"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "attention_head_dim", "val": ": int = 96"}, {"name": "in_channels", "val": ": int = 4"}, {"name": "out_channels", "val": ": int = 4"}, {"name": "num_layers", "val": ": int = 32"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "cross_attention_dim", "val": ": int = 2304"}, {"name": "attention_bias", "val": ": bool = True"}, {"name": "sample_height", "val": ": int = 90"}, {"name": "sample_width", "val": ": int = 160"}, {"name": "sample_frames", "val": ": int = 22"}, {"name": "activation_fn", "val": ": str = 'gelu-approximate'"}, {"name": "norm_elementwise_affine", "val": ": bool = False"}, {"name": "norm_eps", "val": ": float = 1e-06"}, {"name": "caption_channels", "val": ": int = 4096"}, {"name": "interpolation_scale_h", "val": ": float = 2.0"}, {"name": "interpolation_scale_w", "val": ": float = 2.0"}, {"name": "interpolation_scale_t", "val": ": float = 2.2"}]</parameters></docstring>


</div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/allegro_transformer3d.md" />

### BriaTransformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/bria_transformer.md

# BriaTransformer2DModel

A modified Flux Transformer model from [Bria](https://huggingface.co/briaai/BRIA-3.2).
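
The model can be loaded with a snippet like the following; the `transformer` subfolder of the [briaai/BRIA-3.2](https://huggingface.co/briaai/BRIA-3.2) repository is assumed here, mirroring the layout used elsewhere in these docs.

```python
import torch
from diffusers import BriaTransformer2DModel

transformer = BriaTransformer2DModel.from_pretrained(
    "briaai/BRIA-3.2", subfolder="transformer", torch_dtype=torch.bfloat16
)
```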

## BriaTransformer2DModel[[diffusers.BriaTransformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.BriaTransformer2DModel</name><anchor>diffusers.BriaTransformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_bria.py#L506</source><parameters>[{"name": "patch_size", "val": ": int = 1"}, {"name": "in_channels", "val": ": int = 64"}, {"name": "num_layers", "val": ": int = 19"}, {"name": "num_single_layers", "val": ": int = 38"}, {"name": "attention_head_dim", "val": ": int = 128"}, {"name": "num_attention_heads", "val": ": int = 24"}, {"name": "joint_attention_dim", "val": ": int = 4096"}, {"name": "pooled_projection_dim", "val": ": int = None"}, {"name": "guidance_embeds", "val": ": bool = False"}, {"name": "axes_dims_rope", "val": ": typing.List[int] = [16, 56, 56]"}, {"name": "rope_theta", "val": " = 10000"}, {"name": "time_theta", "val": " = 10000"}]</parameters><paramsdesc>- **patch_size** (`int`) -- Patch size to turn the input data into small patches.
- **in_channels** (`int`, *optional*, defaults to 64) -- The number of channels in the input.
- **num_layers** (`int`, *optional*, defaults to 19) -- The number of layers of MMDiT blocks to use.
- **num_single_layers** (`int`, *optional*, defaults to 38) -- The number of layers of single DiT blocks to use.
- **attention_head_dim** (`int`, *optional*, defaults to 128) -- The number of channels in each head.
- **num_attention_heads** (`int`, *optional*, defaults to 24) -- The number of heads to use for multi-head attention.
- **joint_attention_dim** (`int`, *optional*) -- The number of `encoder_hidden_states` dimensions to use.
- **pooled_projection_dim** (`int`) -- Number of dimensions to use when projecting the `pooled_projections`.
- **guidance_embeds** (`bool`, defaults to `False`) -- Whether to use guidance embeddings.</paramsdesc><paramgroups>0</paramgroups></docstring>

The Transformer model introduced in Flux. Based on FluxPipeline with several changes:

- no pooled embeddings
- zero padding is used for prompts
- no guidance embedding, since this is not a distilled version

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.BriaTransformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_bria.py#L584</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "pooled_projections", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "img_ids", "val": ": Tensor = None"}, {"name": "txt_ids", "val": ": Tensor = None"}, {"name": "guidance", "val": ": Tensor = None"}, {"name": "attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "controlnet_block_samples", "val": " = None"}, {"name": "controlnet_single_block_samples", "val": " = None"}]</parameters><paramsdesc>- **hidden_states** (`torch.FloatTensor` of shape `(batch size, channel, height, width)`) --
  Input `hidden_states`.
- **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch size, sequence_len, embed_dims)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **pooled_projections** (`torch.FloatTensor` of shape `(batch_size, projection_dim)`) -- Embeddings projected
  from the embeddings of input conditions.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **block_controlnet_hidden_states** (`list` of `torch.Tensor`) --
  A list of tensors that, if specified, are added to the residuals of transformer blocks.
- **attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, a `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [BriaTransformer2DModel](/docs/diffusers/main/en/api/models/bria_transformer#diffusers.BriaTransformer2DModel) forward method.






</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/bria_transformer.md" />

### Transformer2DModel
https://huggingface.co/docs/diffusers/main/api/models/transformer2d.md

# Transformer2DModel

A Transformer model for image-like data from [CompVis](https://huggingface.co/CompVis) that is based on the [Vision Transformer](https://huggingface.co/papers/2010.11929) introduced by Dosovitskiy et al. The [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.

When the input is **continuous**:

1. Project the input and reshape it to `(batch_size, sequence_length, feature_dimension)`.
2. Apply the Transformer blocks in the standard way.
3. Reshape to image.

When the input is **discrete**:

> [!TIP]
> It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.

1. Convert input (classes of latent pixels) to embeddings and apply positional embeddings.
2. Apply the Transformer blocks in the standard way.
3. Predict classes of unnoised image.
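
For the continuous case, a small, untrained configuration (illustrative values, not a released checkpoint) shows the expected input and output shapes:

```py
import torch
from diffusers import Transformer2DModel

# Illustrative configuration for the continuous-input path; values are kept small for a quick check.
model = Transformer2DModel(
    num_attention_heads=2,
    attention_head_dim=32,
    in_channels=4,
    num_layers=1,
    norm_num_groups=2,
    cross_attention_dim=64,
)

hidden_states = torch.randn(1, 4, 32, 32)  # (batch, channels, height, width)
encoder_hidden_states = torch.randn(1, 8, 64)  # (batch, sequence_len, cross_attention_dim)

sample = model(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
print(sample.shape)  # torch.Size([1, 4, 32, 32]), the same spatial shape as the input
```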

## Transformer2DModel[[diffusers.Transformer2DModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.Transformer2DModel</name><anchor>diffusers.Transformer2DModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_2d.py#L39</source><parameters>[{"name": "num_attention_heads", "val": ": int = 16"}, {"name": "attention_head_dim", "val": ": int = 88"}, {"name": "in_channels", "val": ": typing.Optional[int] = None"}, {"name": "out_channels", "val": ": typing.Optional[int] = None"}, {"name": "num_layers", "val": ": int = 1"}, {"name": "dropout", "val": ": float = 0.0"}, {"name": "norm_num_groups", "val": ": int = 32"}, {"name": "cross_attention_dim", "val": ": typing.Optional[int] = None"}, {"name": "attention_bias", "val": ": bool = False"}, {"name": "sample_size", "val": ": typing.Optional[int] = None"}, {"name": "num_vector_embeds", "val": ": typing.Optional[int] = None"}, {"name": "patch_size", "val": ": typing.Optional[int] = None"}, {"name": "activation_fn", "val": ": str = 'geglu'"}, {"name": "num_embeds_ada_norm", "val": ": typing.Optional[int] = None"}, {"name": "use_linear_projection", "val": ": bool = False"}, {"name": "only_cross_attention", "val": ": bool = False"}, {"name": "double_self_attention", "val": ": bool = False"}, {"name": "upcast_attention", "val": ": bool = False"}, {"name": "norm_type", "val": ": str = 'layer_norm'"}, {"name": "norm_elementwise_affine", "val": ": bool = True"}, {"name": "norm_eps", "val": ": float = 1e-05"}, {"name": "attention_type", "val": ": str = 'default'"}, {"name": "caption_channels", "val": ": int = None"}, {"name": "interpolation_scale", "val": ": float = None"}, {"name": "use_additional_conditions", "val": ": typing.Optional[bool] = None"}]</parameters><paramsdesc>- **num_attention_heads** (`int`, *optional*, defaults to 16) -- The number of heads to use for multi-head attention.
- **attention_head_dim** (`int`, *optional*, defaults to 88) -- The number of channels in each head.
- **in_channels** (`int`, *optional*) --
  The number of channels in the input and output (specify if the input is **continuous**).
- **num_layers** (`int`, *optional*, defaults to 1) -- The number of layers of Transformer blocks to use.
- **dropout** (`float`, *optional*, defaults to 0.0) -- The dropout probability to use.
- **cross_attention_dim** (`int`, *optional*) -- The number of `encoder_hidden_states` dimensions to use.
- **sample_size** (`int`, *optional*) -- The width of the latent images (specify if the input is **discrete**).
  This is fixed during training since it is used to learn a number of position embeddings.
- **num_vector_embeds** (`int`, *optional*) --
  The number of classes of the vector embeddings of the latent pixels (specify if the input is **discrete**).
  Includes the class for the masked latent pixel.
- **activation_fn** (`str`, *optional*, defaults to `"geglu"`) -- Activation function to use in feed-forward.
- **num_embeds_ada_norm** ( `int`, *optional*) --
  The number of diffusion steps used during training. Pass if at least one of the norm_layers is
  `AdaLayerNorm`. This is fixed during training since it is used to learn a number of embeddings that are
  added to the hidden states.

  During inference, you can denoise for up to but not more than `num_embeds_ada_norm` steps.
- **attention_bias** (`bool`, *optional*) --
  Configure if the `TransformerBlocks` attention should contain a bias parameter.</paramsdesc><paramgroups>0</paramgroups></docstring>

A 2D Transformer model for image-like data.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.Transformer2DModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_2d.py#L324</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "encoder_hidden_states", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "timestep", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "added_cond_kwargs", "val": ": typing.Dict[str, torch.Tensor] = None"}, {"name": "class_labels", "val": ": typing.Optional[torch.LongTensor] = None"}, {"name": "cross_attention_kwargs", "val": ": typing.Dict[str, typing.Any] = None"}, {"name": "attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "encoder_attention_mask", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.Tensor` of shape `(batch size, channel, height, width)` if continuous) --
  Input `hidden_states`.
- **encoder_hidden_states** ( `torch.Tensor` of shape `(batch size, sequence len, embed dims)`, *optional*) --
  Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
  self-attention.
- **timestep** ( `torch.LongTensor`, *optional*) --
  Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
- **class_labels** ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*) --
  Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in
  `AdaLayerZeroNorm`.
- **cross_attention_kwargs** ( `Dict[str, Any]`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **attention_mask** ( `torch.Tensor`, *optional*) --
  An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
  is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
  negative values to the attention scores corresponding to "discard" tokens.
- **encoder_attention_mask** ( `torch.Tensor`, *optional*) --
  Cross-attention mask applied to `encoder_hidden_states`. Two formats supported:

  * Mask `(batch, sequence_length)` True = keep, False = discard.
  * Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard.

  If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format
  above. This bias will be added to the cross-attention scores.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [Transformer2DModelOutput](/docs/diffusers/main/en/api/models/transformer2d#diffusers.models.modeling_outputs.Transformer2DModelOutput) instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, a `Transformer2DModelOutput` is returned,
otherwise a `tuple` where the first element is the sample tensor.</retdesc></docstring>

The [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) forward method.






</div></div>

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.modeling_outputs.Transformer2DModelOutput</name><anchor>diffusers.models.modeling_outputs.Transformer2DModelOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21</source><parameters>[{"name": "sample", "val": ": torch.Tensor"}]</parameters><paramsdesc>- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) --
  The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
  distributions for the unnoised latent pixels.</paramsdesc><paramgroups>0</paramgroups></docstring>

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/transformer2d.md" />

### SD3ControlNetModel
https://huggingface.co/docs/diffusers/main/api/models/controlnet_sd3.md

# SD3ControlNetModel

SD3ControlNetModel is an implementation of ControlNet for Stable Diffusion 3.

The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

## Loading from the original format

By default the [SD3ControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sd3#diffusers.SD3ControlNetModel) should be loaded with [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained).

```py
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet)
```
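
Multiple ControlNets can also be combined with `SD3MultiControlNetModel`; a hedged sketch (the pose checkpoint id is illustrative):

```py
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

# Any compatible SD3 ControlNet checkpoints can be combined; the second id is only an example.
controlnet_canny = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
controlnet_pose = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose")
controlnet = SD3MultiControlNetModel([controlnet_canny, controlnet_pose])
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet
)
```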

## SD3ControlNetModel[[diffusers.SD3ControlNetModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SD3ControlNetModel</name><anchor>diffusers.SD3ControlNetModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L42</source><parameters>[{"name": "sample_size", "val": ": int = 128"}, {"name": "patch_size", "val": ": int = 2"}, {"name": "in_channels", "val": ": int = 16"}, {"name": "num_layers", "val": ": int = 18"}, {"name": "attention_head_dim", "val": ": int = 64"}, {"name": "num_attention_heads", "val": ": int = 18"}, {"name": "joint_attention_dim", "val": ": int = 4096"}, {"name": "caption_projection_dim", "val": ": int = 1152"}, {"name": "pooled_projection_dim", "val": ": int = 2048"}, {"name": "out_channels", "val": ": int = 16"}, {"name": "pos_embed_max_size", "val": ": int = 96"}, {"name": "extra_conditioning_channels", "val": ": int = 0"}, {"name": "dual_attention_layers", "val": ": typing.Tuple[int, ...] = ()"}, {"name": "qk_norm", "val": ": typing.Optional[str] = None"}, {"name": "pos_embed_type", "val": ": typing.Optional[str] = 'sincos'"}, {"name": "use_pos_embed", "val": ": bool = True"}, {"name": "force_zeros_for_pooled_projection", "val": ": bool = True"}]</parameters><paramsdesc>- **sample_size** (`int`, defaults to `128`) --
  The width/height of the latents. This is fixed during training since it is used to learn a number of
  position embeddings.
- **patch_size** (`int`, defaults to `2`) --
  Patch size to turn the input data into small patches.
- **in_channels** (`int`, defaults to `16`) --
  The number of latent channels in the input.
- **num_layers** (`int`, defaults to `18`) --
  The number of layers of transformer blocks to use.
- **attention_head_dim** (`int`, defaults to `64`) --
  The number of channels in each head.
- **num_attention_heads** (`int`, defaults to `18`) --
  The number of heads to use for multi-head attention.
- **joint_attention_dim** (`int`, defaults to `4096`) --
  The embedding dimension to use for joint text-image attention.
- **caption_projection_dim** (`int`, defaults to `1152`) --
  The embedding dimension of caption embeddings.
- **pooled_projection_dim** (`int`, defaults to `2048`) --
  The embedding dimension of pooled text projections.
- **out_channels** (`int`, defaults to `16`) --
  The number of latent channels in the output.
- **pos_embed_max_size** (`int`, defaults to `96`) --
  The maximum latent height/width of positional embeddings.
- **extra_conditioning_channels** (`int`, defaults to `0`) --
  The number of extra channels to use for conditioning for patch embedding.
- **dual_attention_layers** (`Tuple[int, ...]`, defaults to `()`) --
  The number of dual-stream transformer blocks to use.
- **qk_norm** (`str`, *optional*, defaults to `None`) --
  The normalization to use for query and key in the attention layer. If `None`, no normalization is used.
- **pos_embed_type** (`str`, defaults to `"sincos"`) --
  The type of positional embedding to use. Choose between `"sincos"` and `None`.
- **use_pos_embed** (`bool`, defaults to `True`) --
  Whether to use positional embeddings.
- **force_zeros_for_pooled_projection** (`bool`, defaults to `True`) --
  Whether to force zeros for pooled projection embeddings. This is handled in the pipelines by reading the
  config value of the ControlNet model.</paramsdesc><paramgroups>0</paramgroups></docstring>

ControlNet model for [Stable Diffusion 3](https://huggingface.co/papers/2403.03206).





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_forward_chunking</name><anchor>diffusers.SD3ControlNetModel.enable_forward_chunking</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L178</source><parameters>[{"name": "chunk_size", "val": ": typing.Optional[int] = None"}, {"name": "dim", "val": ": int = 0"}]</parameters><paramsdesc>- **chunk_size** (`int`, *optional*) --
  The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually
  over each tensor of dim=`dim`.
- **dim** (`int`, *optional*, defaults to `0`) --
  The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch)
  or dim=1 (sequence length).</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the feed-forward layers to use [feed forward
chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers).
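
A short usage sketch (assuming `controlnet` is an SD3ControlNetModel instance, for example loaded as shown above):

```py
# Chunk the feed-forward computation over the sequence dimension (dim=1) to trade speed for lower peak memory.
controlnet.enable_forward_chunking(chunk_size=1, dim=1)
```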




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.SD3ControlNetModel.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L332</source><parameters>[{"name": "hidden_states", "val": ": Tensor"}, {"name": "controlnet_cond", "val": ": Tensor"}, {"name": "conditioning_scale", "val": ": float = 1.0"}, {"name": "encoder_hidden_states", "val": ": Tensor = None"}, {"name": "pooled_projections", "val": ": Tensor = None"}, {"name": "timestep", "val": ": LongTensor = None"}, {"name": "joint_attention_kwargs", "val": ": typing.Optional[typing.Dict[str, typing.Any]] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **hidden_states** (`torch.Tensor` of shape `(batch size, channel, height, width)`) --
  Input `hidden_states`.
- **controlnet_cond** (`torch.Tensor`) --
  The conditional input tensor of shape `(batch_size, sequence_length, hidden_size)`.
- **conditioning_scale** (`float`, defaults to `1.0`) --
  The scale factor for ControlNet outputs.
- **encoder_hidden_states** (`torch.Tensor` of shape `(batch size, sequence_len, embed_dims)`) --
  Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- **pooled_projections** (`torch.Tensor` of shape `(batch_size, projection_dim)`) -- Embeddings projected
  from the embeddings of input conditions.
- **timestep** ( `torch.LongTensor`) --
  Used to indicate denoising step.
- **joint_attention_kwargs** (`dict`, *optional*) --
  A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
  `self.processor` in
  [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain
  tuple.</paramsdesc><paramgroups>0</paramgroups><retdesc>If `return_dict` is True, a `~models.transformer_2d.Transformer2DModelOutput` is returned, otherwise a
`tuple` where the first element is the sample tensor.</retdesc></docstring>

The [SD3ControlNetModel](/docs/diffusers/main/en/api/models/controlnet_sd3#diffusers.SD3ControlNetModel) forward method.






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_qkv_projections</name><anchor>diffusers.SD3ControlNetModel.fuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L268</source><parameters>[]</parameters></docstring>

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
are fused. For cross-attention modules, key and value projection matrices are fused.

> [!WARNING]
> This API is 🧪 experimental.
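
A short usage sketch (assuming `controlnet` is an SD3ControlNetModel instance):

```py
# Fuse the query/key/value projections before inference; revert with controlnet.unfuse_qkv_projections().
controlnet.fuse_qkv_projections()
```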


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.SD3ControlNetModel.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L233</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_qkv_projections</name><anchor>diffusers.SD3ControlNetModel.unfuse_qkv_projections</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L290</source><parameters>[]</parameters></docstring>
Disables the fused QKV projection if enabled.

> [!WARNING]
> This API is 🧪 experimental.



</div></div>

## SD3ControlNetOutput[[diffusers.models.controlnets.SD3ControlNetOutput]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.models.controlnets.SD3ControlNetOutput</name><anchor>diffusers.models.controlnets.SD3ControlNetOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/controlnets/controlnet_sd3.py#L38</source><parameters>[{"name": "controlnet_block_samples", "val": ": typing.Tuple[torch.Tensor]"}]</parameters></docstring>


</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/controlnet_sd3.md" />

### Consistency Decoder
https://huggingface.co/docs/diffusers/main/api/models/consistency_decoder_vae.md

# Consistency Decoder

The consistency decoder can be used to decode the latents from the denoising UNet in the [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline). This decoder was introduced in the [DALL-E 3 technical report](https://openai.com/dall-e-3).

The original codebase can be found at [openai/consistencydecoder](https://github.com/openai/consistencydecoder).

> [!WARNING]
> Inference is only supported for 2 iterations as of now.

The pipeline could not have been contributed without the help of [madebyollin](https://github.com/madebyollin) and [mrsteyk](https://github.com/mrsteyk) from [this issue](https://github.com/openai/consistencydecoder/issues/1).

## ConsistencyDecoderVAE[[diffusers.ConsistencyDecoderVAE]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ConsistencyDecoderVAE</name><anchor>diffusers.ConsistencyDecoderVAE</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L52</source><parameters>[{"name": "scaling_factor", "val": ": float = 0.18215"}, {"name": "latent_channels", "val": ": int = 4"}, {"name": "sample_size", "val": ": int = 32"}, {"name": "encoder_act_fn", "val": ": str = 'silu'"}, {"name": "encoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (128, 256, 512, 512)"}, {"name": "encoder_double_z", "val": ": bool = True"}, {"name": "encoder_down_block_types", "val": ": typing.Tuple[str, ...] = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D')"}, {"name": "encoder_in_channels", "val": ": int = 3"}, {"name": "encoder_layers_per_block", "val": ": int = 2"}, {"name": "encoder_norm_num_groups", "val": ": int = 32"}, {"name": "encoder_out_channels", "val": ": int = 4"}, {"name": "decoder_add_attention", "val": ": bool = False"}, {"name": "decoder_block_out_channels", "val": ": typing.Tuple[int, ...] = (320, 640, 1024, 1024)"}, {"name": "decoder_down_block_types", "val": ": typing.Tuple[str, ...] = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D')"}, {"name": "decoder_downsample_padding", "val": ": int = 1"}, {"name": "decoder_in_channels", "val": ": int = 7"}, {"name": "decoder_layers_per_block", "val": ": int = 3"}, {"name": "decoder_norm_eps", "val": ": float = 1e-05"}, {"name": "decoder_norm_num_groups", "val": ": int = 32"}, {"name": "decoder_num_train_timesteps", "val": ": int = 1024"}, {"name": "decoder_out_channels", "val": ": int = 6"}, {"name": "decoder_resnet_time_scale_shift", "val": ": str = 'scale_shift'"}, {"name": "decoder_time_embedding_type", "val": ": str = 'learned'"}, {"name": "decoder_up_block_types", "val": ": typing.Tuple[str, ...] = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D')"}]</parameters></docstring>

The consistency decoder used with DALL-E 3.

<ExampleCodeBlock anchor="diffusers.ConsistencyDecoderVAE.example">

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE

>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
... ).to("cuda")

>>> image = pipe("horse", generator=torch.manual_seed(0)).images[0]
>>> image
```

</ExampleCodeBlock>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>wrapper</name><anchor>diffusers.ConsistencyDecoderVAE.decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/accelerate_utils.py#L43</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_slicing</name><anchor>diffusers.ConsistencyDecoderVAE.disable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L196</source><parameters>[]</parameters></docstring>

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_tiling</name><anchor>diffusers.ConsistencyDecoderVAE.disable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L180</source><parameters>[]</parameters></docstring>

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
decoding in one step.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_slicing</name><anchor>diffusers.ConsistencyDecoderVAE.enable_slicing</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L188</source><parameters>[]</parameters></docstring>

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_tiling</name><anchor>diffusers.ConsistencyDecoderVAE.enable_tiling</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L171</source><parameters>[{"name": "use_tiling", "val": ": bool = True"}]</parameters></docstring>

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
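
A short usage sketch (assuming `vae` is the ConsistencyDecoderVAE loaded in the example above):

```py
# Process large inputs as overlapping tiles to keep memory use bounded; disable_tiling() reverts this.
vae.enable_tiling()
```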


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>forward</name><anchor>diffusers.ConsistencyDecoderVAE.forward</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L430</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sample_posterior", "val": ": bool = False"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) -- Input sample.
- **sample_posterior** (`bool`, *optional*, defaults to `False`) --
  Whether to sample from the posterior.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `DecoderOutput` instead of a plain tuple.
- **generator** (`torch.Generator`, *optional*, defaults to `None`) --
  Generator to use for sampling.</paramsdesc><paramgroups>0</paramgroups><rettype>`DecoderOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `DecoderOutput` is returned, otherwise a plain `tuple` is returned.</retdesc></docstring>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_attn_processor</name><anchor>diffusers.ConsistencyDecoderVAE.set_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L229</source><parameters>[{"name": "processor", "val": ": typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, 
diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.JointAttnProcessor2_0, diffusers.models.attention_processor.PAGJointAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGJointAttnProcessor2_0, diffusers.models.attention_processor.FusedJointAttnProcessor2_0, diffusers.models.attention_processor.AllegroAttnProcessor2_0, diffusers.models.attention_processor.AuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FusedAuraFlowAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0, diffusers.models.attention_processor.FluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0, diffusers.models.attention_processor.FusedFluxAttnProcessor2_0_NPU, diffusers.models.attention_processor.CogVideoXAttnProcessor2_0, diffusers.models.attention_processor.FusedCogVideoXAttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.XLAFlashAttnProcessor2_0, diffusers.models.attention_processor.AttnProcessorNPU, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.MochiVaeAttnProcessor2_0, diffusers.models.attention_processor.MochiAttnProcessor2_0, diffusers.models.attention_processor.StableAudioAttnProcessor2_0, diffusers.models.attention_processor.HunyuanAttnProcessor2_0, diffusers.models.attention_processor.FusedHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGHunyuanAttnProcessor2_0, diffusers.models.attention_processor.LuminaAttnProcessor2_0, diffusers.models.attention_processor.FusedAttnProcessor2_0, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.SanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleLinearAttention, diffusers.models.attention_processor.SanaMultiscaleAttnProcessor2_0, diffusers.models.attention_processor.SanaMultiscaleAttentionProjection, diffusers.models.attention_processor.IPAdapterAttnProcessor, diffusers.models.attention_processor.IPAdapterAttnProcessor2_0, diffusers.models.attention_processor.IPAdapterXFormersAttnProcessor, diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0, diffusers.models.attention_processor.PAGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor]]]"}]</parameters><paramsdesc>- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) --
  The instantiated processor class or a dictionary of processor classes that will be set as the processor
  for **all** `Attention` layers.

  If `processor` is a dict, the key needs to define the path to the corresponding cross attention
  processor. This is strongly recommended when setting trainable attention processors.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the attention processor to use to compute attention.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_default_attn_processor</name><anchor>diffusers.ConsistencyDecoderVAE.set_default_attn_processor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L264</source><parameters>[]</parameters></docstring>

Disables custom attention processors and sets the default attention implementation.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>tiled_encode</name><anchor>diffusers.ConsistencyDecoderVAE.tiled_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/consistency_decoder_vae.py#L375</source><parameters>[{"name": "x", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **x** (`torch.Tensor`) -- Input batch of images.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `ConsistencyDecoderVAEOutput`
  instead of a plain tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`ConsistencyDecoderVAEOutput` or `tuple`</rettype><retdesc>If return_dict is True, a `ConsistencyDecoderVAEOutput`
is returned, otherwise a plain `tuple` is returned.</retdesc></docstring>
Encode a batch of images using a tiled encoder.

When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several
steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is
different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the
tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the
output, but they should be much less noticeable.








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/consistency_decoder_vae.md" />

### AutoModel
https://huggingface.co/docs/diffusers/main/api/models/auto_model.md

# AutoModel

The `AutoModel` is designed to make it easy to load a checkpoint without needing to know the specific model class. `AutoModel` automatically retrieves the correct model class from the checkpoint `config.json` file.

```python
from diffusers import AutoModel, AutoPipelineForText2Image

unet = AutoModel.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet")
pipe = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", unet=unet)
```
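
The same mechanism works for any subfolder of a Diffusers-format repository. A hedged variant that loads the VAE in half precision (`torch_dtype` is optional and shown only for illustration):

```python
import torch
from diffusers import AutoModel

# AutoModel reads the subfolder's config.json and resolves the correct class (here, the checkpoint's VAE class).
vae = AutoModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="vae", torch_dtype=torch.float16
)
```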


## AutoModel[[diffusers.AutoModel]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.AutoModel</name><anchor>diffusers.AutoModel</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/auto_model.py#L28</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.AutoModel.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/auto_model.py#L38</source><parameters>[{"name": "pretrained_model_or_path", "val": ": typing.Union[str, os.PathLike, NoneType] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **torch_dtype** (`torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **from_flax** (`bool`, *optional*, defaults to `False`) --
  Load the model weights from a Flax checkpoint save file.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.
- **device_map** (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*) --
  A map that specifies where each submodule should go. It doesn't need to be defined for each
  parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the
  same device. Defaults to `None`, meaning that the model will be loaded on CPU.

  Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For
  more information about each option see [designing a device
  map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
- **max_memory** (`Dict`, *optional*) --
  A dictionary device identifier for the maximum memory. Will default to the maximum memory available for
  each GPU and the available CPU RAM if unset.
- **offload_folder** (`str` or `os.PathLike`, *optional*) --
  The path to offload weights if `device_map` contains the value `"disk"`.
- **offload_state_dict** (`bool`, *optional*) --
  If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
  the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
  when there is some disk offload.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.
- **variant** (`str`, *optional*) --
  Load weights from a specified `variant` filename such as `"fp16"` or `"ema"`. This is ignored when
  loading `from_flax`.
- **use_safetensors** (`bool`, *optional*, defaults to `None`) --
  If set to `None`, the `safetensors` weights are downloaded if they're available **and** if the
  `safetensors` library is installed. If set to `True`, the model is forcibly loaded from `safetensors`
  weights. If set to `False`, `safetensors` weights are not loaded.
- **disable_mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to disable mmap when loading a Safetensors model. This option can perform better when the model
  is on a network mount or hard drive, which may not handle the seeky-ness of mmap very well.
- **trust_remote_code** (`bool`, *optional*, defaults to `False`) --
  Whether to trust remote code.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a pretrained PyTorch model from a pretrained model configuration.

The model is set in evaluation mode - `model.eval()` - by default, and dropout modules are deactivated. To
train the model, set it back in training mode with `model.train()`.



> [!TIP]
> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log in
> with `hf auth login`. You can also activate the special
> ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
> firewalled environment.

<ExampleCodeBlock anchor="diffusers.AutoModel.from_pretrained.example">

Example:

```py
from diffusers import AutoModel

unet = AutoModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
```

</ExampleCodeBlock>

<ExampleCodeBlock anchor="diffusers.AutoModel.from_pretrained.example-2">

If you get the error message below, you need to finetune the weights for your downstream task:

```bash
Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/models/auto_model.md" />

### PNDMScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/pndm.md

# PNDMScheduler

`PNDMScheduler`, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques such as the Runge-Kutta method and the linear multistep method. The original implementation can be found at [crowsonkb/k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181).
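
A minimal sketch of using it with a pipeline (the checkpoint name is illustrative; any pipeline with a compatible scheduler config works the same way):

```py
from diffusers import DiffusionPipeline, PNDMScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Reuse the existing scheduler config so the beta schedule, timestep spacing, etc. carry over
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)
```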

## PNDMScheduler[[diffusers.PNDMScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.PNDMScheduler</name><anchor>diffusers.PNDMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py#L72</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "skip_prk_steps", "val": ": bool = False"}, {"name": "set_alpha_to_one", "val": ": bool = False"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **skip_prk_steps** (`bool`, defaults to `False`) --
  Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before
  PLMS steps.
- **set_alpha_to_one** (`bool`, defaults to `False`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
  otherwise it uses the alpha value at step 0.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process)
  or `v_prediction` (see section 2.4 of [Imagen Video](https://imagen.research.google/video/paper.pdf)
  paper).
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

`PNDMScheduler` uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step
method.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.PNDMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py#L390</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.PNDMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py#L166</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.PNDMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py#L226</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise), and calls [step_prk()](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler.step_prk)
or [step_plms()](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler.step_plms) depending on the internal variable `counter`.
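
A minimal sketch of the `set_timesteps()`/`step()` call pattern; the random tensor below stands in for a real model output, so the result is meaningless but shows how `step()` is driven:

```py
import torch
from diffusers import PNDMScheduler

scheduler = PNDMScheduler(skip_prk_steps=True)  # skip the PRK warm-up steps, as Stable Diffusion does
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, 4, 64, 64)
for t in scheduler.timesteps:
    noise_pred = torch.randn_like(sample)  # placeholder for a real call like `unet(sample, t).sample`
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```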








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step_plms</name><anchor>diffusers.PNDMScheduler.step_plms</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py#L319</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the linear multistep method. It performs one forward pass multiple times to approximate the solution.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step_prk</name><anchor>diffusers.PNDMScheduler.step_prk</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py#L259</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential
equation.








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/pndm.md" />

### RePaintScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/repaint.md

# RePaintScheduler

`RePaintScheduler` is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the `RePaintPipeline`, and it is based on the paper [RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2201.09865) by Andreas Lugmayr et al.

The abstract from the paper is:

*Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: [this http URL](http://git.io/RePaint).*

The original implementation can be found at [andreas128/RePaint](https://github.com/andreas128/).
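
A rough sketch of pairing the scheduler with `RePaintPipeline`, following the usual pipeline-loading pattern (the checkpoint name is illustrative; `original_image` and `mask_image` are placeholders you provide):

```py
from diffusers import RePaintPipeline, RePaintScheduler

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)

# `original_image` is the image to inpaint and `mask_image` marks the region to keep (value 0.0 = inpaint)
# output = pipe(image=original_image, mask_image=mask_image, num_inference_steps=250).images[0]
```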

## RePaintScheduler[[diffusers.RePaintScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.RePaintScheduler</name><anchor>diffusers.RePaintScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_repaint.py#L91</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "trained_betas", "val": ": typing.Optional[numpy.ndarray] = None"}, {"name": "clip_sample", "val": ": bool = True"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, `squaredcos_cap_v2`, or `sigmoid`.
- **eta** (`float`) --
  The weight of the noise added in a diffusion step. A value of 0.0 corresponds to the DDIM scheduler and a
  value of 1.0 corresponds to the DDPM scheduler.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **clip_sample** (`bool`, defaults to `True`) --
  Clip the predicted sample between -1 and 1 for numerical stability.</paramsdesc><paramgroups>0</paramgroups></docstring>

`RePaintScheduler` is a scheduler for DDPM inpainting inside a given mask.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.RePaintScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_repaint.py#L163</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.RePaintScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_repaint.py#L180</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "jump_length", "val": ": int = 10"}, {"name": "jump_n_sample", "val": ": int = 10"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model. If used,
  `timesteps` must be `None`.
- **jump_length** (`int`, defaults to 10) --
  The number of steps taken forward in time before going backward in time for a single jump (“j” in the
  RePaint paper). Take a look at Figures 9 and 10 in the paper.
- **jump_n_sample** (`int`, defaults to 10) --
  The number of times to make a forward time jump for a given chosen time sample. Take a look at Figures 9
  and 10 in the paper.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).
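
For example, a standalone sketch using the defaults for the resampling parameters described above:

```py
from diffusers import RePaintScheduler

scheduler = RePaintScheduler()
# 250 denoising steps with jump_length and jump_n_sample left at the paper's defaults
scheduler.set_timesteps(num_inference_steps=250, jump_length=10, jump_n_sample=10)
```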




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.RePaintScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_repaint.py#L246</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "original_image", "val": ": Tensor"}, {"name": "mask", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **original_image** (`torch.Tensor`) --
  The original image to inpaint on.
- **mask** (`torch.Tensor`) --
  The mask where a value of 0.0 indicates which part of the original image to inpaint.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [RePaintSchedulerOutput](/docs/diffusers/main/en/api/schedulers/repaint#diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[RePaintSchedulerOutput](/docs/diffusers/main/en/api/schedulers/repaint#diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [RePaintSchedulerOutput](/docs/diffusers/main/en/api/schedulers/repaint#diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput) is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## RePaintSchedulerOutput[[diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_repaint.py#L29</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample (x_{0}) based on the model output from
  the current timestep. `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's step function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/repaint.md" />

### Schedulers
https://huggingface.co/docs/diffusers/main/api/schedulers/overview.md

# Schedulers

🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model's output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward *n* timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be *discrete* in which case the timestep is an `int` or *continuous* in which case the timestep is a `float`.

Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output:

- during *training*, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model
- during *inference*, a scheduler defines how to update a sample based on a pretrained model's output

Many schedulers are implemented from the [k-diffusion](https://github.com/crowsonkb/k-diffusion) library by [Katherine Crowson](https://github.com/crowsonkb/), and they're also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below:

| A1111/k-diffusion    | 🤗 Diffusers                         | Usage                                                                                                         |
|---------------------|-------------------------------------|---------------------------------------------------------------------------------------------------------------|
| DPM++ 2M            | [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler)     |                                                                                                               |
| DPM++ 2M Karras     | [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler)     | init with `use_karras_sigmas=True`                                                                            |
| DPM++ 2M SDE        | [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler)     | init with `algorithm_type="sde-dpmsolver++"`                                                                  |
| DPM++ 2M SDE Karras | [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler)     | init with `use_karras_sigmas=True` and `algorithm_type="sde-dpmsolver++"`                                     |
| DPM++ 2S a          | N/A                                 | very similar to  `DPMSolverSinglestepScheduler`                         |
| DPM++ 2S a Karras   | N/A                                 | very similar to  `DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...)` |
| DPM++ SDE           | [DPMSolverSinglestepScheduler](/docs/diffusers/main/en/api/schedulers/singlestep_dpm_solver#diffusers.DPMSolverSinglestepScheduler)    |                                                                                                               |
| DPM++ SDE Karras    | [DPMSolverSinglestepScheduler](/docs/diffusers/main/en/api/schedulers/singlestep_dpm_solver#diffusers.DPMSolverSinglestepScheduler)    | init with `use_karras_sigmas=True`                                                                            |
| DPM2                | [KDPM2DiscreteScheduler](/docs/diffusers/main/en/api/schedulers/dpm_discrete#diffusers.KDPM2DiscreteScheduler)          |                                                                                                               |
| DPM2 Karras         | [KDPM2DiscreteScheduler](/docs/diffusers/main/en/api/schedulers/dpm_discrete#diffusers.KDPM2DiscreteScheduler)          | init with `use_karras_sigmas=True`                                                                            |
| DPM2 a              | [KDPM2AncestralDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/dpm_discrete_ancestral#diffusers.KDPM2AncestralDiscreteScheduler) |                                                                                                               |
| DPM2 a Karras       | [KDPM2AncestralDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/dpm_discrete_ancestral#diffusers.KDPM2AncestralDiscreteScheduler) | init with `use_karras_sigmas=True`                                                                            |
| DPM adaptive        | N/A                                 |                                                                                                               |
| DPM fast            | N/A                                 |                                                                                                               |
| Euler               | [EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler)          |                                                                                                               |
| Euler a             | [EulerAncestralDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler_ancestral#diffusers.EulerAncestralDiscreteScheduler) |                                                                                                               |
| Heun                | [HeunDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/heun#diffusers.HeunDiscreteScheduler)           |                                                                                                               |
| LMS                 | [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler)            |                                                                                                               |
| LMS Karras          | [LMSDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler)            | init with `use_karras_sigmas=True`                                                                            |
| N/A                 | [DEISMultistepScheduler](/docs/diffusers/main/en/api/schedulers/deis#diffusers.DEISMultistepScheduler)          |                                                                                                               |
| N/A                 | [UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler)         |                                                                                                               |
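
For example, to get the equivalent of A1111's "DPM++ 2M Karras" (a minimal sketch; the checkpoint name is illustrative):

```py
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# "DPM++ 2M Karras" = DPMSolverMultistepScheduler initialized with Karras sigmas
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```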

## Noise schedules and schedule types
| A1111/k-diffusion        | 🤗 Diffusers                                                               |
|--------------------------|----------------------------------------------------------------------------|
| Karras                   | init with `use_karras_sigmas=True`                                         |
| sgm_uniform              | init with `timestep_spacing="trailing"`                                    |
| simple                   | init with `timestep_spacing="trailing"`                                    |
| exponential              | init with `timestep_spacing="linspace"`, `use_exponential_sigmas=True`     |
| beta                     | init with `timestep_spacing="linspace"`, `use_beta_sigmas=True`            |

All schedulers are built from the base [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) class which implements low level utilities shared by all schedulers.

## SchedulerMixin[[diffusers.SchedulerMixin]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.SchedulerMixin</name><anchor>diffusers.SchedulerMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L75</source><parameters>[]</parameters></docstring>

Base class for all schedulers.

[SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) contains common functions shared by all schedulers such as general loading and saving
functionalities.

[ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin) takes care of storing the configuration attributes (like `num_train_timesteps`) that are passed to
the scheduler's `__init__` function, and the attributes can be accessed by `scheduler.config.num_train_timesteps`.

Class attributes:
- **_compatibles** (`List[str]`) -- A list of scheduler classes that are compatible with the parent scheduler
  class. Use [from_config()](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) to load a different compatible scheduler class (should be overridden
  by parent class).
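
A small illustration of the config access described above (the value shown is the scheduler's default):

```py
from diffusers import DDIMScheduler

scheduler = DDIMScheduler()
# Every argument passed to __init__ is recorded on `scheduler.config`
print(scheduler.config.num_train_timesteps)  # 1000
```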



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_pretrained</name><anchor>diffusers.SchedulerMixin.from_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L95</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, os.PathLike, NoneType] = None"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}, {"name": "return_unused_kwargs", "val": " = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the scheduler
    configuration saved with [save_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.save_pretrained).
- **subfolder** (`str`, *optional*) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **return_unused_kwargs** (`bool`, *optional*, defaults to `False`) --
  Whether kwargs that are not consumed by the Python class should be returned or not.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **output_loading_info** (`bool`, *optional*, defaults to `False`) --
  Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository.



> [!TIP]
> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log in with
> `hf auth login`. You can also activate the special
> ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
> firewalled environment.
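
For example (the repository name is illustrative; any repo or local directory with a scheduler config works):

```py
from diffusers import DDIMScheduler

# Load the scheduler configuration stored in a pipeline repository's "scheduler" subfolder
scheduler = DDIMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
```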



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_pretrained</name><anchor>diffusers.SchedulerMixin.save_pretrained</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L156</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "push_to_hub", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory where the configuration JSON file will be saved (will be created if it does not exist).
- **push_to_hub** (`bool`, *optional*, defaults to `False`) --
  Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
  repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
  namespace).
- **kwargs** (`Dict[str, Any]`, *optional*) --
  Additional keyword arguments passed along to the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save a scheduler configuration object to a directory so that it can be reloaded using the
[from_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.from_pretrained) class method.
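
For example, a round trip through a local directory (the path is illustrative):

```py
from diffusers import DDIMScheduler

scheduler = DDIMScheduler()
scheduler.save_pretrained("./my-scheduler")  # writes the scheduler's JSON config
reloaded = DDIMScheduler.from_pretrained("./my-scheduler")
```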




</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

## KarrasDiffusionSchedulers

`KarrasDiffusionSchedulers` are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed.

The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given [here](https://github.com/huggingface/diffusers/blob/a69754bb879ed55b9b6dc9dd0b3cf4fa4124c765/src/diffusers/schedulers/scheduling_utils.py#L32).

## PushToHubMixin[[diffusers.utils.PushToHubMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.utils.PushToHubMixin</name><anchor>diffusers.utils.PushToHubMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/hub_utils.py#L464</source><parameters>[]</parameters></docstring>

A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>push_to_hub</name><anchor>diffusers.utils.PushToHubMixin.push_to_hub</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/hub_utils.py#L499</source><parameters>[{"name": "repo_id", "val": ": str"}, {"name": "commit_message", "val": ": typing.Optional[str] = None"}, {"name": "private", "val": ": typing.Optional[bool] = None"}, {"name": "token", "val": ": typing.Optional[str] = None"}, {"name": "create_pr", "val": ": bool = False"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "variant", "val": ": typing.Optional[str] = None"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **repo_id** (`str`) --
  The name of the repository you want to push your model, scheduler, or pipeline files to. It should
  contain your organization name when pushing to an organization. `repo_id` can also be a path to a local
  directory.
- **commit_message** (`str`, *optional*) --
  Message to commit while pushing. Defaults to `"Upload {object}"`.
- **private** (`bool`, *optional*) --
  Whether to make the repo private. If `None` (default), the repo will be public unless the
  organization's default is private. This value is ignored if the repo already exists.
- **token** (`str`, *optional*) --
  The token to use as HTTP bearer authorization for remote files. The token generated when running `hf
  auth login` (stored in `~/.huggingface`).
- **create_pr** (`bool`, *optional*, defaults to `False`) --
  Whether or not to create a PR with the uploaded files or directly commit.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether or not to convert the model weights to the `safetensors` format.
- **variant** (`str`, *optional*) --
  If specified, weights are saved in the format `pytorch_model.<variant>.bin`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub.



<ExampleCodeBlock anchor="diffusers.utils.PushToHubMixin.push_to_hub.example">

Examples:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")

# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")

# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/overview.md" />

### DDIMScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/ddim.md

# DDIMScheduler

[Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.

The abstract from the paper is:

*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample.
To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models
with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process.
We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from.
We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*

The original codebase of this paper can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim), and you can contact the author on [tsong.me](https://tsong.me/).

## Tips

The paper [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose:

> [!WARNING]
> 🧪 This is an experimental feature!

1. rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR)

```py
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True)
```

2. train a model with `v_prediction` (add the following argument to the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts)

```bash
--prediction_type="v_prediction"
```

3. change the sampler to always start from the last timestep

```py
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
```

4. rescale classifier-free guidance to prevent over-exposure

```py
image = pipe(prompt, guidance_rescale=0.7).images[0]
```

For example:

```py
from diffusers import DiffusionPipeline, DDIMScheduler
import torch

pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipe.to("cuda")

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipe(prompt, guidance_rescale=0.7).images[0]
image
```

## DDIMScheduler[[diffusers.DDIMScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DDIMScheduler</name><anchor>diffusers.DDIMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py#L131</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "clip_sample", "val": ": bool = True"}, {"name": "set_alpha_to_one", "val": ": bool = True"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **clip_sample** (`bool`, defaults to `True`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **set_alpha_to_one** (`bool`, defaults to `True`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
  otherwise it uses the alpha value at step 0.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`DDIMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with
non-Markovian guidance.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DDIMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py#L236</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DDIMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py#L297</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DDIMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py#L342</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "use_clipped_model_output", "val": ": bool = False"}, {"name": "generator", "val": " = None"}, {"name": "variance_noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **eta** (`float`) --
  The weight of noise for added noise in diffusion step.
- **use_clipped_model_output** (`bool`, defaults to `False`) --
  If `True`, computes "corrected" `model_output` from the clipped predicted original sample. Necessary
  because predicted original sample is clipped to [-1, 1] when `self.config.clip_sample` is `True`. If no
  clipping has happened, "corrected" `model_output` would coincide with the one provided as input and
  `use_clipped_model_output` has no effect.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **variance_noise** (`torch.Tensor`) --
  Alternative to generating noise with `generator` by directly providing the noise for the variance
  itself. Useful for methods such as `CycleDiffusion`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## DDIMSchedulerOutput[[diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py#L33</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ddim.md" />

### UniPCMultistepScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/unipc.md

# UniPCMultistepScheduler

`UniPCMultistepScheduler` is a training-free framework designed for fast sampling of diffusion models. It was introduced in [UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models](https://huggingface.co/papers/2302.04867) by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu.

It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders.
UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy.

The abstract from the paper is:

*Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at [this https URL](https://github.com/wl-zhao/UniPC).*

## Tips

It is recommended to set `solver_order` to 2 for guided sampling, and `solver_order=3` for unconditional sampling.

Dynamic thresholding from [Imagen](https://huggingface.co/papers/2205.11487) is supported, and for pixel-space
diffusion models, you can set both `predict_x0=True` and `thresholding=True` to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion.
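
A minimal sketch applying these tips when swapping in the scheduler (the checkpoint name is illustrative):

```py
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# solver_order=2 for guided (classifier-free guidance) sampling, per the tip above
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)
```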

## UniPCMultistepScheduler[[diffusers.UniPCMultistepScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.UniPCMultistepScheduler</name><anchor>diffusers.UniPCMultistepScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L115</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "predict_x0", "val": ": bool = True"}, {"name": "solver_type", "val": ": str = 'bh2'"}, {"name": "lower_order_final", "val": ": bool = True"}, {"name": "disable_corrector", "val": ": typing.List[int] = []"}, {"name": "solver_p", "val": ": SchedulerMixin = None"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_flow_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "flow_shift", "val": ": typing.Optional[float] = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "final_sigmas_type", "val": ": typing.Optional[str] = 'zero'"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "use_dynamic_shifting", "val": ": bool = False"}, {"name": "time_shift_type", "val": ": str = 'exponential'"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **solver_order** (`int`, default `2`) --
  The UniPC order which can be any positive integer. The effective order of accuracy is `solver_order + 1`
  due to the UniC. It is recommended to use `solver_order=2` for guided sampling, and `solver_order=3` for
  unconditional sampling.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True` and `predict_x0=True`.
- **predict_x0** (`bool`, defaults to `True`) --
  Whether to use the updating algorithm on the predicted x0.
- **solver_type** (`str`, default `bh2`) --
  Solver type for UniPC. It is recommended to use `bh1` for unconditional sampling when steps < 10, and `bh2`
  otherwise.
- **lower_order_final** (`bool`, default `True`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
  stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.
- **disable_corrector** (`list`, default `[]`) --
  Decides at which steps to disable the corrector to mitigate the misalignment between `epsilon_theta(x_t, c)`
  and `epsilon_theta(x_t^c, c)`, which can influence convergence for a large guidance scale. The corrector is
  usually disabled during the first few steps.
- **solver_p** (`SchedulerMixin`, default `None`) --
  Any other scheduler. If specified, the algorithm becomes `solver_p + UniC`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **use_flow_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use flow sigmas for step sizes in the noise schedule during the sampling process.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `"zero"`, the final sigma is set to 0.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`UniPCMultistepScheduler` is a training-free framework designed for the fast sampling of diffusion models.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
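
As a quick, hedged illustration of the generic loading and saving methods mentioned above (the directory path is a placeholder):

```python
from diffusers import UniPCMultistepScheduler

# Instantiate with a non-default config value, save it, and reload it.
scheduler = UniPCMultistepScheduler(solver_order=3)
scheduler.save_pretrained("./unipc-scheduler")            # writes scheduler_config.json
reloaded = UniPCMultistepScheduler.from_pretrained("./unipc-scheduler")
print(reloaded.config.solver_order)                       # 3
```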





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.UniPCMultistepScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L581</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the UniPC algorithm needs.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_uni_c_bh_update</name><anchor>diffusers.UniPCMultistepScheduler.multistep_uni_c_bh_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L783</source><parameters>[{"name": "this_model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "last_sample", "val": ": Tensor = None"}, {"name": "this_sample", "val": ": Tensor = None"}, {"name": "order", "val": ": int = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **this_model_output** (`torch.Tensor`) --
  The model outputs at `x_t`.
- **this_timestep** (`int`) --
  The current timestep `t`.
- **last_sample** (`torch.Tensor`) --
  The generated sample before the last predictor `x_{t-1}`.
- **this_sample** (`torch.Tensor`) --
  The generated sample after the last predictor `x_{t}`.
- **order** (`int`) --
  The `p` of UniC-p at this step. The effective order of accuracy should be `order + 1`.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The corrected sample tensor at the current timestep.</retdesc></docstring>

One step for the UniC (B(h) version).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_uni_p_bh_update</name><anchor>diffusers.UniPCMultistepScheduler.multistep_uni_p_bh_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L654</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "order", "val": ": int = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model at the current timestep.
- **prev_timestep** (`int`) --
  The previous discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **order** (`int`) --
  The order of UniP at this timestep (corresponds to the *p* in UniPC-p).</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the UniP (B(h) version). Alternatively, `self.solver_p` is used if it is specified.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.UniPCMultistepScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L1034</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.UniPCMultistepScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L295</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.UniPCMultistepScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L305</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "mu", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.UniPCMultistepScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_unipc_multistep.py#L953</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the multistep UniPC.








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/unipc.md" />

### IPNDMScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/ipndm.md

# IPNDMScheduler

`IPNDMScheduler` is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at [crowsonkb/v-diffusion-pytorch](https://github.com/crowsonkb/v-diffusion-pytorch/blob/987f8985e38208345c1959b0ea767a625831cc9b/diffusion/sampling.py#L296).
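
Below is a minimal, illustrative sketch of driving the scheduler directly; in practice a pipeline calls `set_timesteps` and `step` for you.

```python
from diffusers import IPNDMScheduler

scheduler = IPNDMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50)

# The discrete timesteps the denoising loop will iterate over.
print(scheduler.timesteps[:5])
```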

## IPNDMScheduler[[diffusers.IPNDMScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.IPNDMScheduler</name><anchor>diffusers.IPNDMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ipndm.py#L25</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.</paramsdesc><paramgroups>0</paramgroups></docstring>

A fourth-order Improved Pseudo Linear Multistep scheduler.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.IPNDMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ipndm.py#L196</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.IPNDMScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ipndm.py#L76</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.IPNDMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ipndm.py#L86</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.IPNDMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ipndm.py#L138</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the linear multistep method. It performs one forward pass multiple times to approximate the solution.








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ipndm.md" />

### Latent Consistency Model Multistep Scheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/lcm.md

# Latent Consistency Model Multistep Scheduler

## Overview

A multistep and one-step scheduler (Algorithm 3) introduced alongside latent consistency models in the paper [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://huggingface.co/papers/2310.04378) by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.
This scheduler should be able to generate good samples from [LatentConsistencyModelPipeline](/docs/diffusers/main/en/api/pipelines/latent_consistency_models#diffusers.LatentConsistencyModelPipeline) in 1-8 steps.
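
The following sketch shows a typical few-step setup; the checkpoint name is only an assumed example of an LCM-distilled model.

```python
from diffusers import DiffusionPipeline, LCMScheduler

# Example LCM-distilled checkpoint (assumption).
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# LCM checkpoints typically need only a handful of inference steps.
image = pipe("a photo of a cat", num_inference_steps=4).images[0]
```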

## LCMScheduler[[diffusers.LCMScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LCMScheduler</name><anchor>diffusers.LCMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py#L134</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'scaled_linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "original_inference_steps", "val": ": int = 50"}, {"name": "clip_sample", "val": ": bool = False"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "set_alpha_to_one", "val": ": bool = True"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "timestep_scaling", "val": ": float = 10.0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"scaled_linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **original_inference_steps** (`int`, *optional*, defaults to 50) --
  The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we
  will ultimately take `num_inference_steps` evenly spaced timesteps to form the final timestep schedule.
- **clip_sample** (`bool`, defaults to `False`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **set_alpha_to_one** (`bool`, defaults to `True`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
  otherwise it uses the alpha value at step 0.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **timestep_scaling** (`float`, defaults to 10.0) --
  The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions
  `c_skip` and `c_out`. Increasing this will decrease the approximation error (although the approximation
  error at the default of `10.0` is already pretty small).
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`LCMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with
non-Markovian guidance.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). [~ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin) takes care of storing all config
attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be
accessed via `scheduler.config.num_train_timesteps`. [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) provides general loading and saving
functionality via the [SchedulerMixin.save_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.save_pretrained) and [from_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.from_pretrained) functions.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.LCMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py#L299</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.LCMScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py#L289</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.LCMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py#L349</source><parameters>[{"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "original_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "strength", "val": ": int = 1.0"}]</parameters><paramsdesc>- **num_inference_steps** (`int`, *optional*) --
  The number of diffusion steps used when generating samples with a pre-trained model. If used,
  `timesteps` must be `None`.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **original_inference_steps** (`int`, *optional*) --
  The original number of inference steps, which will be used to generate a linearly-spaced timestep
  schedule (which is different from the standard `diffusers` implementation). We will then take
  `num_inference_steps` timesteps from this schedule, evenly spaced in terms of indices, and use that as
  our final timestep schedule. If not set, this will default to the `original_inference_steps` attribute.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
  timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep
  schedule is used. If `timesteps` is passed, `num_inference_steps` must be `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.LCMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py#L497</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `LCMSchedulerOutput` or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~schedulers.scheduling_utils.LCMSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `LCMSchedulerOutput` is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/lcm.md" />

### DPMSolverMultistepInverse
https://huggingface.co/docs/diffusers/main/api/schedulers/multistep_dpm_solver_inverse.md

# DPMSolverMultistepInverse

`DPMSolverMultistepInverse` is the inverted scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.

The implementation is mostly based on the DDIM inversion definition of [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://huggingface.co/papers/2211.09794) and the notebook implementation of the `DiffEdit` latent inversion from [Xiang-cd/DiffEdit-stable-diffusion](https://github.com/Xiang-cd/DiffEdit-stable-diffusion/blob/main/diffedit.ipynb).

## Tips

Dynamic thresholding from [Imagen](https://huggingface.co/papers/2205.11487) is supported, and for pixel-space
diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic
thresholding. This thresholding method is unsuitable for latent-space diffusion models such as
Stable Diffusion.
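
As a rough sketch (not the full DiffEdit inversion recipe), the inverse scheduler is usually built from the same config as the forward scheduler so both share one noise schedule:

```python
from diffusers import DPMSolverMultistepInverseScheduler, DPMSolverMultistepScheduler

# Forward scheduler as it would be configured for guided sampling.
forward = DPMSolverMultistepScheduler(algorithm_type="dpmsolver++", solver_order=2)

# Inverse scheduler sharing the same configuration, used to run the diffusion chain backwards
# (e.g. to invert a real image into latents before editing).
inverse = DPMSolverMultistepInverseScheduler.from_config(forward.config)
inverse.set_timesteps(num_inference_steps=50)
```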

## DPMSolverMultistepInverseScheduler[[diffusers.DPMSolverMultistepInverseScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DPMSolverMultistepInverseScheduler</name><anchor>diffusers.DPMSolverMultistepInverseScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L78</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "algorithm_type", "val": ": str = 'dpmsolver++'"}, {"name": "solver_type", "val": ": str = 'midpoint'"}, {"name": "lower_order_final", "val": ": bool = True"}, {"name": "euler_at_final", "val": ": bool = False"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_flow_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "flow_shift", "val": ": typing.Optional[float] = 1.0"}, {"name": "lambda_min_clipped", "val": ": float = -inf"}, {"name": "variance_type", "val": ": typing.Optional[str] = None"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **solver_order** (`int`, defaults to 2) --
  The DPMSolver order which can be `1` or `2` or `3`. It is recommended to use `solver_order=2` for guided
  sampling, and `solver_order=3` for unconditional sampling.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True` and
  `algorithm_type="dpmsolver++"`.
- **algorithm_type** (`str`, defaults to `dpmsolver++`) --
  Algorithm type for the solver; can be `dpmsolver`, `dpmsolver++`, `sde-dpmsolver` or `sde-dpmsolver++`. The
  `dpmsolver` type implements the algorithms in the [DPMSolver](https://huggingface.co/papers/2206.00927)
  paper, and the `dpmsolver++` type implements the algorithms in the
  [DPMSolver++](https://huggingface.co/papers/2211.01095) paper. It is recommended to use `dpmsolver++` or
  `sde-dpmsolver++` with `solver_order=2` for guided sampling like in Stable Diffusion.
- **solver_type** (`str`, defaults to `midpoint`) --
  Solver type for the second-order solver; can be `midpoint` or `heun`. The solver type slightly affects the
  sample quality, especially for a small number of steps. It is recommended to use `midpoint` solvers.
- **lower_order_final** (`bool`, defaults to `True`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
  stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.
- **euler_at_final** (`bool`, defaults to `False`) --
  Whether to use Euler's method in the final step. It is a trade-off between numerical stability and detail
  richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference
  steps, but sometimes may result in blurring.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **lambda_min_clipped** (`float`, defaults to `-inf`) --
  Clipping threshold for the minimum value of `lambda(t)` for numerical stability. This is critical for the
  cosine (`squaredcos_cap_v2`) noise schedule.
- **variance_type** (`str`, *optional*) --
  Set to "learned" or "learned_range" for diffusion models that predict variance. If set, the model's output
  contains the predicted Gaussian variance.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

`DPMSolverMultistepInverseScheduler` is the reverse scheduler of [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler).

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L482</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is
designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
integral of the data prediction model.

> [!TIP]
> The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
> prediction and data prediction models.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dpm_solver_first_order_update</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.dpm_solver_first_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L581</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the first-order DPMSolver (equivalent to DDIM).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_second_order_update</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.multistep_dpm_solver_second_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L651</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from the learned diffusion model at the current and later timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the second-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_third_order_update</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.multistep_dpm_solver_third_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L775</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from the learned diffusion model at the current and later timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the third-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L972</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L245</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DPMSolverMultistepInverseScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py#L889</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": " = None"}, {"name": "variance_noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **variance_noise** (`torch.Tensor`) --
  Alternative to generating noise with `generator` by directly providing the noise for the variance
  itself. Useful for methods such as `CycleDiffusion`.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the multistep DPMSolver.








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/multistep_dpm_solver_inverse.md" />

### VQDiffusionScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/vq_diffusion.md

# VQDiffusionScheduler

`VQDiffusionScheduler` converts the transformer model's output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://huggingface.co/papers/2111.14822) by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo.

The abstract from the paper is:

*We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality.*
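
As a small, hedged sketch of constructing the scheduler itself (the codebook size below is only an assumed example and must match the VQ-VAE used by the pipeline):

```python
from diffusers import VQDiffusionScheduler

# num_vec_classes = codebook size + 1 masked class; 4097 is an assumed example value.
scheduler = VQDiffusionScheduler(num_vec_classes=4097, num_train_timesteps=100)
scheduler.set_timesteps(num_inference_steps=100)
```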

## VQDiffusionScheduler[[diffusers.VQDiffusionScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.VQDiffusionScheduler</name><anchor>diffusers.VQDiffusionScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_vq_diffusion.py#L106</source><parameters>[{"name": "num_vec_classes", "val": ": int"}, {"name": "num_train_timesteps", "val": ": int = 100"}, {"name": "alpha_cum_start", "val": ": float = 0.99999"}, {"name": "alpha_cum_end", "val": ": float = 9e-06"}, {"name": "gamma_cum_start", "val": ": float = 9e-06"}, {"name": "gamma_cum_end", "val": ": float = 0.99999"}]</parameters><paramsdesc>- **num_vec_classes** (`int`) --
  The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked
  latent pixel.
- **num_train_timesteps** (`int`, defaults to 100) --
  The number of diffusion steps to train the model.
- **alpha_cum_start** (`float`, defaults to 0.99999) --
  The starting cumulative alpha value.
- **alpha_cum_end** (`float`, defaults to 0.000009) --
  The ending cumulative alpha value.
- **gamma_cum_start** (`float`, defaults to 0.000009) --
  The starting cumulative gamma value.
- **gamma_cum_end** (`float`, defaults to 0.99999) --
  The ending cumulative gamma value.</paramsdesc><paramgroups>0</paramgroups></docstring>

A scheduler for vector quantized diffusion.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>log_Q_t_transitioning_to_known_class</name><anchor>diffusers.VQDiffusionScheduler.log_Q_t_transitioning_to_known_class</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_vq_diffusion.py#L356</source><parameters>[{"name": "t", "val": ": torch.int32"}, {"name": "x_t", "val": ": LongTensor"}, {"name": "log_onehot_x_t", "val": ": Tensor"}, {"name": "cumulative", "val": ": bool"}]</parameters><paramsdesc>- **t** (`torch.Long`) --
  The timestep that determines which transition matrix is used.
- **x_t** (`torch.LongTensor` of shape `(batch size, num latent pixels)`) --
  The classes of each latent pixel at time `t`.
- **log_onehot_x_t** (`torch.Tensor` of shape `(batch size, num classes, num latent pixels)`) --
  The log one-hot vectors of `x_t`.
- **cumulative** (`bool`) --
  If cumulative is `False`, the single step transition matrix `t-1`->`t` is used. If cumulative is
  `True`, the cumulative transition matrix `0`->`t` is used.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor` of shape `(batch size, num classes - 1, num latent pixels)`</rettype><retdesc>Each _column_ of the returned matrix is a _row_ of log probabilities of the complete probability
transition matrix.

When non cumulative, returns `self.num_classes - 1` rows because the initial latent pixel cannot be
masked.

Where:
- `q_n` is the probability distribution for the forward process of the `n`th latent pixel.
- `C_0` is a class of a latent pixel embedding.
- `C_k` is the class of the masked latent pixel.

non-cumulative result (omitting logarithms):
```
q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0)
          .      .                     .
          .               .            .
          .                      .     .
q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k)
```

cumulative result (omitting logarithms):
```
q_0_cumulative(x_t | x_0 = C_0)    ...  q_n_cumulative(x_t | x_0 = C_0)
          .               .                          .
          .                        .                 .
          .                               .          .
q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1})
```</retdesc></docstring>

Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each
latent pixel in `x_t`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>q_posterior</name><anchor>diffusers.VQDiffusionScheduler.q_posterior</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_vq_diffusion.py#L245</source><parameters>[{"name": "log_p_x_0", "val": ""}, {"name": "x_t", "val": ""}, {"name": "t", "val": ""}]</parameters><paramsdesc>- **log_p_x_0** (`torch.Tensor` of shape `(batch size, num classes - 1, num latent pixels)`) --
  The log probabilities for the predicted classes of the initial latent pixels. Does not include a
  prediction for the masked class as the initial unnoised image cannot be masked.
- **x_t** (`torch.LongTensor` of shape `(batch size, num latent pixels)`) --
  The classes of each latent pixel at time `t`.
- **t** (`torch.Long`) --
  The timestep that determines which transition matrix is used.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor` of shape `(batch size, num classes, num latent pixels)`</rettype><retdesc>The log probabilities for the predicted classes of the image at timestep `t-1`.</retdesc></docstring>

<ExampleCodeBlock anchor="diffusers.VQDiffusionScheduler.q_posterior.example">

Calculates the log probabilities for the predicted classes of the image at timestep `t-1`:

```
p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) )
```

</ExampleCodeBlock>








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.VQDiffusionScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_vq_diffusion.py#L178</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be
  moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.VQDiffusionScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_vq_diffusion.py#L200</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": torch.int64"}, {"name": "sample", "val": ": LongTensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **log_p_x_0** -- (`torch.Tensor` of shape `(batch size, num classes - 1, num latent pixels)`):
  The log probabilities for the predicted classes of the initial latent pixels. Does not include a
  prediction for the masked class as the initial unnoised image cannot be masked.
- **t** (`torch.long`) --
  The timestep that determines which transition matrices are used.
- **x_t** (`torch.LongTensor` of shape `(batch size, num latent pixels)`) --
  The classes of each latent pixel at time `t`.
- **generator** (`torch.Generator`, or `None`) --
  A random number generator for the noise applied to `p(x_{t-1} | x_t)` before it is sampled from.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [VQDiffusionSchedulerOutput](/docs/diffusers/main/en/api/schedulers/vq_diffusion#diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput) or
  `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[VQDiffusionSchedulerOutput](/docs/diffusers/main/en/api/schedulers/vq_diffusion#diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [VQDiffusionSchedulerOutput](/docs/diffusers/main/en/api/schedulers/vq_diffusion#diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput) is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by the reverse transition distribution. See
[q_posterior()](/docs/diffusers/main/en/api/schedulers/vq_diffusion#diffusers.VQDiffusionScheduler.q_posterior) for more details about how the distribution is computed.








</div></div>

## VQDiffusionSchedulerOutput[[diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_vq_diffusion.py#L28</source><parameters>[{"name": "prev_sample", "val": ": LongTensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.LongTensor` of shape `(batch size, num latent pixels)`) --
  Computed sample x_{t-1} of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's step function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/vq_diffusion.md" />

### CMStochasticIterativeScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/cm_stochastic_iterative.md

# CMStochasticIterativeScheduler

[Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps.

The abstract from the paper is:

*Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.*

The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models).

## CMStochasticIterativeScheduler[[diffusers.CMStochasticIterativeScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CMStochasticIterativeScheduler</name><anchor>diffusers.CMStochasticIterativeScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L44</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 40"}, {"name": "sigma_min", "val": ": float = 0.002"}, {"name": "sigma_max", "val": ": float = 80.0"}, {"name": "sigma_data", "val": ": float = 0.5"}, {"name": "s_noise", "val": ": float = 1.0"}, {"name": "rho", "val": ": float = 7.0"}, {"name": "clip_denoised", "val": ": bool = True"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 40) --
  The number of diffusion steps to train the model.
- **sigma_min** (`float`, defaults to 0.002) --
  Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation.
- **sigma_max** (`float`, defaults to 80.0) --
  Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation.
- **sigma_data** (`float`, defaults to 0.5) --
  The standard deviation of the data distribution from the EDM
  [paper](https://huggingface.co/papers/2206.00364). Defaults to 0.5 from the original implementation.
- **s_noise** (`float`, defaults to 1.0) --
  The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000,
  1.011]. Defaults to 1.0 from the original implementation.
- **rho** (`float`, defaults to 7.0) --
  The parameter for calculating the Karras sigma schedule from the EDM
  [paper](https://huggingface.co/papers/2206.00364). Defaults to 7.0 from the original implementation.
- **clip_denoised** (`bool`, defaults to `True`) --
  Whether to clip the denoised outputs to `(-1, 1)`.
- **timesteps** (`List` or `np.ndarray` or `torch.Tensor`, *optional*) --
  An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in
  increasing order.</paramsdesc><paramgroups>0</paramgroups></docstring>

Multistep and onestep sampling for consistency models.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_scalings_for_boundary_condition</name><anchor>diffusers.CMStochasticIterativeScheduler.get_scalings_for_boundary_condition</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L266</source><parameters>[{"name": "sigma", "val": ""}]</parameters><paramsdesc>- **sigma** (`torch.Tensor`) --
  The current sigma in the Karras sigma schedule.</paramsdesc><paramgroups>0</paramgroups><rettype>`tuple`</rettype><retdesc>A two-element tuple where `c_skip` (which weights the current sample) is the first element and `c_out`
(which weights the consistency model output) is the second element.</retdesc></docstring>

Gets the scalings used in the consistency model parameterization (from Appendix C of the
[paper](https://huggingface.co/papers/2303.01469)) to enforce the boundary condition.

> [!TIP]
> `epsilon` in the equations for `c_skip` and `c_out` is set to `sigma_min`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.CMStochasticIterativeScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L129</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`float` or `torch.Tensor`) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Scales the consistency model input by `(sigma**2 + sigma_data**2) ** 0.5`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.CMStochasticIterativeScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L119</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.CMStochasticIterativeScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L173</source><parameters>[{"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
  timestep spacing strategy of equal spacing between timesteps is used. If `timesteps` is passed,
  `num_inference_steps` must be `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>sigma_to_t</name><anchor>diffusers.CMStochasticIterativeScheduler.sigma_to_t</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L154</source><parameters>[{"name": "sigmas", "val": ": typing.Union[float, numpy.ndarray]"}]</parameters><paramsdesc>- **sigmas** (`float` or `np.ndarray`) --
  A single Karras sigma or an array of Karras sigmas.</paramsdesc><paramgroups>0</paramgroups><rettype>`float` or `np.ndarray`</rettype><retdesc>A scaled input timestep or scaled input timestep array.</retdesc></docstring>

Gets scaled timesteps from the Karras sigmas for input to the consistency model.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.CMStochasticIterativeScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L313</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a
  [CMStochasticIterativeSchedulerOutput](/docs/diffusers/main/en/api/schedulers/cm_stochastic_iterative#diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[CMStochasticIterativeSchedulerOutput](/docs/diffusers/main/en/api/schedulers/cm_stochastic_iterative#diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`,
[CMStochasticIterativeSchedulerOutput](/docs/diffusers/main/en/api/schedulers/cm_stochastic_iterative#diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput) is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## CMStochasticIterativeSchedulerOutput[[diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_models.py#L31</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/cm_stochastic_iterative.md" />

### ScoreSdeVpScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/score_sde_vp.md

# ScoreSdeVpScheduler

`ScoreSdeVpScheduler` is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the [Score-Based Generative Modeling through Stochastic Differential Equations](https://huggingface.co/papers/2011.13456) paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole.

The abstract from the paper is:

*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*

> [!WARNING]
> 🚧 This scheduler is under construction!

## ScoreSdeVpScheduler[[diffusers.schedulers.ScoreSdeVpScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.ScoreSdeVpScheduler</name><anchor>diffusers.schedulers.ScoreSdeVpScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_sde_vp.py#L27</source><parameters>[{"name": "num_train_timesteps", "val": " = 2000"}, {"name": "beta_min", "val": " = 0.1"}, {"name": "beta_max", "val": " = 20"}, {"name": "sampling_eps", "val": " = 0.001"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 2000) --
  The number of diffusion steps to train the model.
- **beta_min** (`int`, defaults to 0.1) --
- **beta_max** (`int`, defaults to 20) --
- **sampling_eps** (`int`, defaults to 1e-3) --
  The end value of sampling where timesteps decrease progressively from 1 to epsilon.</paramsdesc><paramgroups>0</paramgroups></docstring>

`ScoreSdeVpScheduler` is a variance preserving stochastic differential equation (SDE) scheduler.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.schedulers.ScoreSdeVpScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_sde_vp.py#L51</source><parameters>[{"name": "num_inference_steps", "val": ""}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the continuous timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step_pred</name><anchor>diffusers.schedulers.ScoreSdeVpScheduler.step_pred</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_sde_vp.py#L63</source><parameters>[{"name": "score", "val": ""}, {"name": "x", "val": ""}, {"name": "t", "val": ""}, {"name": "generator", "val": " = None"}]</parameters><paramsdesc>- **score** () --
- **x** () --
- **t** () --
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/score_sde_vp.md" />

### FlowMatchEulerDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/flow_match_euler_discrete.md

# FlowMatchEulerDiscreteScheduler

`FlowMatchEulerDiscreteScheduler` is based on the flow-matching sampling introduced in [Stable Diffusion 3](https://huggingface.co/papers/2403.03206).

## FlowMatchEulerDiscreteScheduler[[diffusers.FlowMatchEulerDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FlowMatchEulerDiscreteScheduler</name><anchor>diffusers.FlowMatchEulerDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L47</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "shift", "val": ": float = 1.0"}, {"name": "use_dynamic_shifting", "val": ": bool = False"}, {"name": "base_shift", "val": ": typing.Optional[float] = 0.5"}, {"name": "max_shift", "val": ": typing.Optional[float] = 1.15"}, {"name": "base_image_seq_len", "val": ": typing.Optional[int] = 256"}, {"name": "max_image_seq_len", "val": ": typing.Optional[int] = 4096"}, {"name": "invert_sigmas", "val": ": bool = False"}, {"name": "shift_terminal", "val": ": typing.Optional[float] = None"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "time_shift_type", "val": ": str = 'exponential'"}, {"name": "stochastic_sampling", "val": ": bool = False"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **shift** (`float`, defaults to 1.0) --
  The shift value for the timestep schedule.
- **use_dynamic_shifting** (`bool`, defaults to False) --
  Whether to apply timestep shifting on-the-fly based on the image resolution.
- **base_shift** (`float`, defaults to 0.5) --
  Value to stabilize image generation. Increasing `base_shift` reduces variation, and the image is more
  consistent with the desired output.
- **max_shift** (`float`, defaults to 1.15) --
  The amount of change allowed to the latent vectors. Increasing `max_shift` encourages more variation, and
  the image may be more exaggerated or stylized.
- **base_image_seq_len** (`int`, defaults to 256) --
  The base image sequence length.
- **max_image_seq_len** (`int`, defaults to 4096) --
  The maximum image sequence length.
- **invert_sigmas** (`bool`, defaults to False) --
  Whether to invert the sigmas.
- **shift_terminal** (`float`, defaults to None) --
  The end value of the shifted timestep schedule.
- **use_karras_sigmas** (`bool`, defaults to False) --
  Whether to use Karras sigmas for step sizes in the noise schedule during sampling.
- **use_exponential_sigmas** (`bool`, defaults to False) --
  Whether to use exponential sigmas for step sizes in the noise schedule during sampling.
- **use_beta_sigmas** (`bool`, defaults to False) --
  Whether to use beta sigmas for step sizes in the noise schedule during sampling.
- **time_shift_type** (`str`, defaults to "exponential") --
  The type of dynamic resolution-dependent timestep shifting to apply. Either "exponential" or "linear".
- **stochastic_sampling** (`bool`, defaults to False) --
  Whether to use stochastic sampling.</paramsdesc><paramgroups>0</paramgroups></docstring>

Euler scheduler.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_noise</name><anchor>diffusers.FlowMatchEulerDiscreteScheduler.scale_noise</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L171</source><parameters>[{"name": "sample", "val": ": FloatTensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.FloatTensor]"}, {"name": "noise", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.FloatTensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.FloatTensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Forward process in flow matching.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.FlowMatchEulerDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L158</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.FlowMatchEulerDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L249</source><parameters>[{"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}, {"name": "mu", "val": ": typing.Optional[float] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[float]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`, *optional*) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **sigmas** (`List[float]`, *optional*) --
  Custom values for sigmas to be used for each diffusion step. If `None`, the sigmas are computed
  automatically.
- **mu** (`float`, *optional*) --
  Determines the amount of shifting applied to sigmas when performing resolution-dependent timestep
  shifting.
- **timesteps** (`List[float]`, *optional*) --
  Custom values for timesteps to be used for each diffusion step. If `None`, the timesteps are computed
  automatically.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.FlowMatchEulerDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L373</source><parameters>[{"name": "model_output", "val": ": FloatTensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.FloatTensor]"}, {"name": "sample", "val": ": FloatTensor"}, {"name": "s_churn", "val": ": float = 0.0"}, {"name": "s_tmin", "val": ": float = 0.0"}, {"name": "s_tmax", "val": ": float = inf"}, {"name": "s_noise", "val": ": float = 1.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "per_token_timesteps", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.FloatTensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.FloatTensor`) --
  A current instance of a sample created by the diffusion process.
- **s_churn** (`float`) --
- **s_tmin**  (`float`) --
- **s_tmax**  (`float`) --
- **s_noise** (`float`, defaults to 1.0) --
  Scaling factor for noise added to the sample.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **per_token_timesteps** (`torch.Tensor`, *optional*) --
  The timesteps for each token in the sample.
- **return_dict** (`bool`) --
  Whether or not to return a
  `FlowMatchEulerDiscreteSchedulerOutput` or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`FlowMatchEulerDiscreteSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`,
`FlowMatchEulerDiscreteSchedulerOutput` is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>stretch_shift_to_terminal</name><anchor>diffusers.FlowMatchEulerDiscreteScheduler.stretch_shift_to_terminal</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L228</source><parameters>[{"name": "t", "val": ": Tensor"}]</parameters><paramsdesc>- **t** (`torch.Tensor`) --
  A tensor of timesteps to be stretched and shifted.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A tensor of adjusted timesteps such that the final value equals `self.config.shift_terminal`.</retdesc></docstring>

Stretches and shifts the timestep schedule to ensure it terminates at the configured `shift_terminal` value.

Reference:
https://github.com/Lightricks/LTX-Video/blob/a01a171f8fe3d99dce2728d60a73fecf4d4238ae/ltx_video/schedulers/rf.py#L51








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/flow_match_euler_discrete.md" />

### EulerDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/euler.md

# EulerDiscreteScheduler

The Euler scheduler (Algorithm 2) is from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51) implementation by [Katherine Crowson](https://github.com/crowsonkb/).


## EulerDiscreteScheduler[[diffusers.EulerDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.EulerDiscreteScheduler</name><anchor>diffusers.EulerDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_discrete.py#L135</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "interpolation_type", "val": ": str = 'linear'"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "sigma_min", "val": ": typing.Optional[float] = None"}, {"name": "sigma_max", "val": ": typing.Optional[float] = None"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "timestep_type", "val": ": str = 'discrete'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "final_sigmas_type", "val": ": str = 'zero'"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **interpolation_type** (`str`, defaults to `"linear"`, *optional*) --
  The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be one of
  `"linear"` or `"log_linear"`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `zero`, the final sigma is set to 0.</paramsdesc><paramgroups>0</paramgroups></docstring>

Euler scheduler.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.EulerDiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_discrete.py#L295</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.EulerDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_discrete.py#L285</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.EulerDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_discrete.py#L319</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary timestep schedules. If `None`, timesteps will be generated
  based on the `timestep_spacing` attribute. If `timesteps` is passed, `num_inference_steps` and `sigmas`
  must be `None`, and the `timestep_spacing` attribute will be ignored.
- **sigmas** (`List[float]`, *optional*) --
  Custom sigmas used to support arbitrary timestep schedules. If `None`, timesteps and sigmas
  will be generated based on the relevant scheduler attributes. If `sigmas` is passed,
  `num_inference_steps` and `timesteps` must be `None`, and the timesteps will be generated based on the
  custom sigmas schedule.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.EulerDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_discrete.py#L576</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "s_churn", "val": ": float = 0.0"}, {"name": "s_tmin", "val": ": float = 0.0"}, {"name": "s_tmax", "val": ": float = inf"}, {"name": "s_noise", "val": ": float = 1.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **s_churn** (`float`) --
- **s_tmin**  (`float`) --
- **s_tmax**  (`float`) --
- **s_noise** (`float`, defaults to 1.0) --
  Scaling factor for noise added to the sample.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return a [EulerDiscreteSchedulerOutput](/docs/diffusers/main/en/api/schedulers/euler#diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput) or
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[EulerDiscreteSchedulerOutput](/docs/diffusers/main/en/api/schedulers/euler#diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [EulerDiscreteSchedulerOutput](/docs/diffusers/main/en/api/schedulers/euler#diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput) is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## EulerDiscreteSchedulerOutput[[diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_discrete.py#L36</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/euler.md" />

### DPMSolverMultistepScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/multistep_dpm_solver.md

# DPMSolverMultistepScheduler

`DPMSolverMultistepScheduler` is a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.

DPMSolver (and the improved version DPMSolver++) is a fast, dedicated high-order solver for diffusion ODEs with a convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality
samples, and it can generate quite good samples even in 10 steps.

## Tips

It is recommended to set `solver_order=2` for guided sampling, and `solver_order=3` for unconditional sampling.

Dynamic thresholding from [Imagen](https://huggingface.co/papers/2205.11487) is supported, and for pixel-space
diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic
thresholding. This thresholding method is unsuitable for latent-space diffusion models such as
Stable Diffusion.

The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order `sde-dpmsolver++`.

## DPMSolverMultistepScheduler[[diffusers.DPMSolverMultistepScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DPMSolverMultistepScheduler</name><anchor>diffusers.DPMSolverMultistepScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L115</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "algorithm_type", "val": ": str = 'dpmsolver++'"}, {"name": "solver_type", "val": ": str = 'midpoint'"}, {"name": "lower_order_final", "val": ": bool = True"}, {"name": "euler_at_final", "val": ": bool = False"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_lu_lambdas", "val": ": typing.Optional[bool] = False"}, {"name": "use_flow_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "flow_shift", "val": ": typing.Optional[float] = 1.0"}, {"name": "final_sigmas_type", "val": ": typing.Optional[str] = 'zero'"}, {"name": "lambda_min_clipped", "val": ": float = -inf"}, {"name": "variance_type", "val": ": typing.Optional[str] = None"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "use_dynamic_shifting", "val": ": bool = False"}, {"name": "time_shift_type", "val": ": str = 'exponential'"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **solver_order** (`int`, defaults to 2) --
  The DPMSolver order which can be `1` or `2` or `3`. It is recommended to use `solver_order=2` for guided
  sampling, and `solver_order=3` for unconditional sampling.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample), `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper), or `flow_prediction`.
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True` and
  `algorithm_type="dpmsolver++"`.
- **algorithm_type** (`str`, defaults to `dpmsolver++`) --
  Algorithm type for the solver; can be `dpmsolver`, `dpmsolver++`, `sde-dpmsolver` or `sde-dpmsolver++`. The
  `dpmsolver` type implements the algorithms in the [DPMSolver](https://huggingface.co/papers/2206.00927)
  paper, and the `dpmsolver++` type implements the algorithms in the
  [DPMSolver++](https://huggingface.co/papers/2211.01095) paper. It is recommended to use `dpmsolver++` or
  `sde-dpmsolver++` with `solver_order=2` for guided sampling like in Stable Diffusion.
- **solver_type** (`str`, defaults to `midpoint`) --
  Solver type for the second-order solver; can be `midpoint` or `heun`. The solver type slightly affects the
  sample quality, especially for a small number of steps. It is recommended to use `midpoint` solvers.
- **lower_order_final** (`bool`, defaults to `True`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
  stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.
- **euler_at_final** (`bool`, defaults to `False`) --
  Whether to use Euler's method in the final step. It is a trade-off between numerical stability and detail
  richness. This can stabilize the sampling of the SDE variant of DPMSolver for a small number of inference
  steps, but sometimes may result in blurring.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **use_lu_lambdas** (`bool`, *optional*, defaults to `False`) --
  Whether to use the uniform-logSNR for step sizes proposed by Lu's DPM-Solver in the noise schedule during
  the sampling process. If `True`, the sigmas and time steps are determined according to a sequence of
  `lambda(t)`.
- **use_flow_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use flow sigmas for step sizes in the noise schedule during the sampling process.
- **flow_shift** (`float`, *optional*, defaults to 1.0) --
  The shift value for the timestep schedule for flow matching.
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `zero`, the final sigma is set to 0.
- **lambda_min_clipped** (`float`, defaults to `-inf`) --
  Clipping threshold for the minimum value of `lambda(t)` for numerical stability. This is critical for the
  cosine (`squaredcos_cap_v2`) noise schedule.
- **variance_type** (`str`, *optional*) --
  Set to "learned" or "learned_range" for diffusion models that predict variance. If set, the model's output
  contains the predicted Gaussian variance.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`DPMSolverMultistepScheduler` is a fast dedicated high-order solver for diffusion ODEs.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.DPMSolverMultistepScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L621</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is
designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
integral of the data prediction model.

> [!TIP]
> The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
> prediction and data prediction models.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dpm_solver_first_order_update</name><anchor>diffusers.DPMSolverMultistepScheduler.dpm_solver_first_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L719</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the first-order DPMSolver (equivalent to DDIM).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_second_order_update</name><anchor>diffusers.DPMSolverMultistepScheduler.multistep_dpm_solver_second_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L788</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from the learned diffusion model at the current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the second-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_third_order_update</name><anchor>diffusers.DPMSolverMultistepScheduler.multistep_dpm_solver_third_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L911</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from the learned diffusion model at the current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the third-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DPMSolverMultistepScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L1126</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.DPMSolverMultistepScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L321</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DPMSolverMultistepScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L331</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "mu", "val": ": typing.Optional[float] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support an arbitrary timestep schedule. If `None`, the timesteps are generated
  based on the `timestep_spacing` attribute. If `timesteps` is passed, `num_inference_steps` and `sigmas`
  must be `None`, and the `timestep_spacing` attribute is ignored.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DPMSolverMultistepScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L1037</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": " = None"}, {"name": "variance_noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **variance_noise** (`torch.Tensor`) --
  Alternative to generating noise with `generator` by directly providing the noise for the variance
  itself. Useful for methods such as `LEdits++`.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the multistep DPMSolver.
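
For readers driving the scheduler manually rather than through a pipeline, the schematic loop below shows where `step` fits. It is only a sketch: `scheduler`, `unet`, `latents`, and `prompt_embeds` are assumed to come from an existing setup and are not defined here.

```python
# Prepare the schedule once, then advance the sample one timestep per iteration.
scheduler.set_timesteps(num_inference_steps=20, device="cuda")

for t in scheduler.timesteps:
    # Predict the noise (or data, depending on `prediction_type`) at the current timestep.
    noise_pred = unet(latents, t, encoder_hidden_states=prompt_embeds).sample
    # `step` converts the model output as needed and returns the sample at the previous timestep.
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```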








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/multistep_dpm_solver.md" />

### HeunDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/heun.md

# HeunDiscreteScheduler

The Heun scheduler (Algorithm 1) is from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper by Karras et al. The scheduler is ported from the [k-diffusion](https://github.com/crowsonkb/k-diffusion) library and created by [Katherine Crowson](https://github.com/crowsonkb/).
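
As a quick orientation, the sketch below swaps the Heun scheduler into an existing pipeline via its config; the checkpoint id and prompt are placeholders.

```python
import torch
from diffusers import DiffusionPipeline, HeunDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.scheduler = HeunDiscreteScheduler.from_config(pipeline.scheduler.config)

# Heun is a second-order method, so each step evaluates the model twice and costs
# roughly double a first-order (Euler) step.
image = pipeline("a watercolor painting of a lighthouse", num_inference_steps=25).images[0]
```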

## HeunDiscreteScheduler[[diffusers.HeunDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.HeunDiscreteScheduler</name><anchor>diffusers.HeunDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_heun_discrete.py#L95</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "clip_sample", "val": ": typing.Optional[bool] = False"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **clip_sample** (`bool`, defaults to `False`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

Scheduler with Heun steps for discrete beta schedules.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.HeunDiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_heun_discrete.py#L237</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`float` or `torch.Tensor`) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.HeunDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_heun_discrete.py#L227</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.HeunDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_heun_discrete.py#L263</source><parameters>[{"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "num_train_timesteps", "val": ": typing.Optional[int] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **num_train_timesteps** (`int`, *optional*) --
  The number of diffusion steps used when training the model. If `None`, the default
  `num_train_timesteps` attribute is used.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary spacing between timesteps. If `None`, the timesteps are
  generated based on the `timestep_spacing` attribute. If `timesteps` is passed, `num_inference_steps`
  must be `None`, and the `timestep_spacing` attribute is ignored.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.HeunDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_heun_discrete.py#L472</source><parameters>[{"name": "model_output", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a `HeunDiscreteSchedulerOutput` or
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`HeunDiscreteSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `HeunDiscreteSchedulerOutput` is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/heun.md" />

### EulerAncestralDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/euler_ancestral.md

# EulerAncestralDiscreteScheduler

A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by [Katherine Crowson](https://github.com/crowsonkb/).
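
The sketch below swaps the scheduler into an existing pipeline; the checkpoint id and prompt are placeholders. Because ancestral sampling injects fresh noise at every step, results depend on the seed even for a fixed prompt, so a generator is passed for reproducibility.

```python
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)

# Fix the random source so the injected ancestral noise is reproducible.
generator = torch.Generator("cuda").manual_seed(0)
image = pipeline(
    "a cozy cabin in a snowy forest", num_inference_steps=25, generator=generator
).images[0]
```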

## EulerAncestralDiscreteScheduler[[diffusers.EulerAncestralDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.EulerAncestralDiscreteScheduler</name><anchor>diffusers.EulerAncestralDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L132</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

Ancestral sampling with Euler method steps.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.EulerAncestralDiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L253</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`float` or `torch.Tensor`) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.EulerAncestralDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L243</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.EulerAncestralDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L277</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.EulerAncestralDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L345</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return an
  [EulerAncestralDiscreteSchedulerOutput](/docs/diffusers/main/en/api/schedulers/euler_ancestral#diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput) or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[EulerAncestralDiscreteSchedulerOutput](/docs/diffusers/main/en/api/schedulers/euler_ancestral#diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`,
[EulerAncestralDiscreteSchedulerOutput](/docs/diffusers/main/en/api/schedulers/euler_ancestral#diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput) is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## EulerAncestralDiscreteSchedulerOutput[[diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py#L33</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/euler_ancestral.md" />

### CogVideoXDPMScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/multistep_dpm_solver_cogvideox.md

# CogVideoXDPMScheduler

`CogVideoXDPMScheduler` is based on [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095), specifically for CogVideoX models.
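
A minimal sketch of pairing the scheduler with `CogVideoXPipeline` is shown below; the checkpoint id, prompt, and step count are placeholders chosen only for illustration.

```python
import torch
from diffusers import CogVideoXPipeline, CogVideoXDPMScheduler

# Placeholder checkpoint id for a CogVideoX model.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16).to("cuda")

# Swap in the DPM scheduler while keeping the pipeline's existing scheduler config.
pipe.scheduler = CogVideoXDPMScheduler.from_config(pipe.scheduler.config)

video_frames = pipe("a panda playing a guitar by a river", num_inference_steps=50).frames[0]
```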

## CogVideoXDPMScheduler[[diffusers.CogVideoXDPMScheduler]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXDPMScheduler</name><anchor>diffusers.CogVideoXDPMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpm_cogvideox.py#L127</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'scaled_linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "clip_sample", "val": ": bool = True"}, {"name": "set_alpha_to_one", "val": ": bool = True"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "snr_shift_scale", "val": ": float = 3.0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"scaled_linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **clip_sample** (`bool`, defaults to `True`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **set_alpha_to_one** (`bool`, defaults to `True`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
  otherwise it uses the alpha value at step 0.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper for more information.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`CogVideoXDPMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models
(DDPMs) with non-Markovian guidance, adapted for CogVideoX models.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.CogVideoXDPMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpm_cogvideox.py#L244</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.CogVideoXDPMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpm_cogvideox.py#L261</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.CogVideoXDPMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpm_cogvideox.py#L330</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "old_pred_original_sample", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "timestep_back", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "use_clipped_model_output", "val": ": bool = False"}, {"name": "generator", "val": " = None"}, {"name": "variance_noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = False"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **eta** (`float`) --
  The weight of noise for added noise in diffusion step.
- **use_clipped_model_output** (`bool`, defaults to `False`) --
  If `True`, computes "corrected" `model_output` from the clipped predicted original sample. Necessary
  because predicted original sample is clipped to [-1, 1] when `self.config.clip_sample` is `True`. If no
  clipping has happened, "corrected" `model_output` would coincide with the one provided as input and
  `use_clipped_model_output` has no effect.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **variance_noise** (`torch.Tensor`) --
  Alternative to generating noise with `generator` by directly providing the noise for the variance
  itself. Useful for methods such as `CycleDiffusion`.
- **return_dict** (`bool`, *optional*, defaults to `False`) --
  Whether or not to return a [DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/multistep_dpm_solver_cogvideox.md" />

### EDMEulerScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/edm_euler.md

# EDMEulerScheduler

The Karras formulation of the Euler scheduler (Algorithm 2) from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51) implementation by [Katherine Crowson](https://github.com/crowsonkb/).
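
To see how the configuration maps to a concrete noise schedule, the sketch below builds the scheduler with its default EDM values (restated explicitly) and prints the sigmas produced for a 30-step run.

```python
from diffusers import EDMEulerScheduler

# These arguments simply restate the documented defaults.
scheduler = EDMEulerScheduler(sigma_min=0.002, sigma_max=80.0, sigma_data=0.5, rho=7.0)
scheduler.set_timesteps(num_inference_steps=30)

# Sigmas decay from sigma_max toward the final sigma (0 when final_sigmas_type="zero").
print(scheduler.sigmas)
print(scheduler.timesteps)
```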


## EDMEulerScheduler[[diffusers.EDMEulerScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.EDMEulerScheduler</name><anchor>diffusers.EDMEulerScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_euler.py#L49</source><parameters>[{"name": "sigma_min", "val": ": float = 0.002"}, {"name": "sigma_max", "val": ": float = 80.0"}, {"name": "sigma_data", "val": ": float = 0.5"}, {"name": "sigma_schedule", "val": ": str = 'karras'"}, {"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "rho", "val": ": float = 7.0"}, {"name": "final_sigmas_type", "val": ": str = 'zero'"}]</parameters><paramsdesc>- **sigma_min** (`float`, *optional*, defaults to 0.002) --
  Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the EDM paper [1]; a reasonable
  range is [0, 10].
- **sigma_max** (`float`, *optional*, defaults to 80.0) --
  Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the EDM paper [1]; a reasonable
  range is [0.2, 80.0].
- **sigma_data** (`float`, *optional*, defaults to 0.5) --
  The standard deviation of the data distribution. This is set to 0.5 in the EDM paper [1].
- **sigma_schedule** (`str`, *optional*, defaults to `karras`) --
  Sigma schedule to compute the `sigmas`. By default, we use the schedule introduced in the EDM paper
  (https://huggingface.co/papers/2206.00364). The other accepted value is `"exponential"`. The exponential
  schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl.
- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **rho** (`float`, *optional*, defaults to 7.0) --
  The rho parameter used for calculating the Karras sigma schedule, which is set to 7.0 in the EDM paper [1].
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `"zero"`, the final sigma is set to 0.</paramsdesc><paramgroups>0</paramgroups></docstring>

Implements the Euler scheduler in EDM formulation as presented in Karras et al. 2022 [1].

[1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models."
https://huggingface.co/papers/2206.00364

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.EDMEulerScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_euler.py#L191</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`float` or `torch.Tensor`) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.EDMEulerScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_euler.py#L153</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.EDMEulerScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_euler.py#L215</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "sigmas", "val": ": typing.Union[torch.Tensor, typing.List[float], NoneType] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **sigmas** (`Union[torch.Tensor, List[float]]`, *optional*) --
  Custom sigmas to use for the denoising process. If not defined, the default behavior when
  `num_inference_steps` is passed will be used.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.EDMEulerScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_euler.py#L310</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "s_churn", "val": ": float = 0.0"}, {"name": "s_tmin", "val": ": float = 0.0"}, {"name": "s_tmax", "val": ": float = inf"}, {"name": "s_noise", "val": ": float = 1.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **s_churn** (`float`) --
- **s_tmin**  (`float`) --
- **s_tmax**  (`float`) --
- **s_noise** (`float`, defaults to 1.0) --
  Scaling factor for noise added to the sample.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return a `~schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput` or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `~schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput` is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## EDMEulerSchedulerOutput[[diffusers.schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_edm_euler.EDMEulerSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_euler.py#L32</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/edm_euler.md" />

### EDMDPMSolverMultistepScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/edm_multistep_dpm_solver.md

# EDMDPMSolverMultistepScheduler

`EDMDPMSolverMultistepScheduler` is a [Karras formulation](https://huggingface.co/papers/2206.00364) of `DPMSolverMultistepScheduler`, a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.

DPMSolver (and its improved version, DPMSolver++) is a fast, dedicated high-order solver for diffusion ODEs with a
convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality samples,
and it can produce reasonably good samples in as few as 10 steps.
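
The sketch below constructs the scheduler with the second-order DPM-Solver++ settings recommended for guided sampling and inspects the sigma schedule for a 10-step run; the values are illustrative only.

```python
from diffusers import EDMDPMSolverMultistepScheduler

scheduler = EDMDPMSolverMultistepScheduler(
    sigma_min=0.002,
    sigma_max=80.0,
    solver_order=2,               # second order is recommended for guided sampling
    algorithm_type="dpmsolver++",
)
scheduler.set_timesteps(num_inference_steps=10)
print(scheduler.sigmas)
```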

## EDMDPMSolverMultistepScheduler[[diffusers.EDMDPMSolverMultistepScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.EDMDPMSolverMultistepScheduler</name><anchor>diffusers.EDMDPMSolverMultistepScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L28</source><parameters>[{"name": "sigma_min", "val": ": float = 0.002"}, {"name": "sigma_max", "val": ": float = 80.0"}, {"name": "sigma_data", "val": ": float = 0.5"}, {"name": "sigma_schedule", "val": ": str = 'karras'"}, {"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "rho", "val": ": float = 7.0"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "algorithm_type", "val": ": str = 'dpmsolver++'"}, {"name": "solver_type", "val": ": str = 'midpoint'"}, {"name": "lower_order_final", "val": ": bool = True"}, {"name": "euler_at_final", "val": ": bool = False"}, {"name": "final_sigmas_type", "val": ": typing.Optional[str] = 'zero'"}]</parameters><paramsdesc>- **sigma_min** (`float`, *optional*, defaults to 0.002) --
  Minimum noise magnitude in the sigma schedule. This was set to 0.002 in the EDM paper [1]; a reasonable
  range is [0, 10].
- **sigma_max** (`float`, *optional*, defaults to 80.0) --
  Maximum noise magnitude in the sigma schedule. This was set to 80.0 in the EDM paper [1]; a reasonable
  range is [0.2, 80.0].
- **sigma_data** (`float`, *optional*, defaults to 0.5) --
  The standard deviation of the data distribution. This is set to 0.5 in the EDM paper [1].
- **sigma_schedule** (`str`, *optional*, defaults to `karras`) --
  Sigma schedule to compute the `sigmas`. By default, we use the schedule introduced in the EDM paper
  (https://huggingface.co/papers/2206.00364). The other accepted value is `"exponential"`. The exponential
  schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl.
- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **solver_order** (`int`, defaults to 2) --
  The DPMSolver order which can be `1` or `2` or `3`. It is recommended to use `solver_order=2` for guided
  sampling, and `solver_order=3` for unconditional sampling.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True` and
  `algorithm_type="dpmsolver++"`.
- **algorithm_type** (`str`, defaults to `dpmsolver++`) --
  Algorithm type for the solver; can be `dpmsolver++` or `sde-dpmsolver++`. The `dpmsolver++` type implements
  the algorithms in the [DPMSolver++](https://huggingface.co/papers/2211.01095) paper. It is recommended to
  use `dpmsolver++` or `sde-dpmsolver++` with `solver_order=2` for guided sampling like in Stable Diffusion.
- **solver_type** (`str`, defaults to `midpoint`) --
  Solver type for the second-order solver; can be `midpoint` or `heun`. The solver type slightly affects the
  sample quality, especially for a small number of steps. It is recommended to use `midpoint` solvers.
- **lower_order_final** (`bool`, defaults to `True`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
  stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.
- **euler_at_final** (`bool`, defaults to `False`) --
  Whether to use Euler's method in the final step. It is a trade-off between numerical stability and detail
  richness. This can stabilize the sampling of the SDE variant of DPMSolver for a small number of inference
  steps, but sometimes may result in blurring.
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `"zero"`, the final sigma is set to 0.</paramsdesc><paramgroups>0</paramgroups></docstring>

Implements DPMSolverMultistepScheduler in EDM formulation as presented in Karras et al. 2022 [1].
`EDMDPMSolverMultistepScheduler` is a fast dedicated high-order solver for diffusion ODEs.

[1] Karras, Tero, et al. "Elucidating the Design Space of Diffusion-Based Generative Models."
https://huggingface.co/papers/2206.00364

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L363</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sample", "val": ": Tensor = None"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is
designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
integral of the data prediction model.

> [!TIP]
> The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
> prediction and data prediction models.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dpm_solver_first_order_update</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.dpm_solver_first_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L394</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the first-order DPMSolver (equivalent to DDIM).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_second_order_update</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.multistep_dpm_solver_second_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L432</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the second-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_third_order_update</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.multistep_dpm_solver_third_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L503</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "sample", "val": ": Tensor = None"}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the third-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L209</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L167</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L233</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.EDMDPMSolverMultistepScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_edm_dpmsolver_multistep.py#L590</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample at the previous timestep by reversing the SDE. This function propagates the sample with
the multistep DPMSolver.
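
A rough sketch of how `step` fits into a hand-rolled denoising loop is shown below. The model is a stand-in that returns zeros, the latent shape is arbitrary, and `init_noise_sigma` is assumed to be available here as it is on the library's other schedulers.

```python
import torch
from diffusers import EDMDPMSolverMultistepScheduler

def fake_model(x, t):
    # Stand-in for a real denoising network; returns a dummy prediction.
    return torch.zeros_like(x)

scheduler = EDMDPMSolverMultistepScheduler()
scheduler.set_timesteps(num_inference_steps=25)

# Start from noise scaled to the scheduler's initial noise level.
sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    model_output = fake_model(model_input, t)
    # `step` returns a SchedulerOutput; `prev_sample` is the input for the next iteration.
    sample = scheduler.step(model_output, t, sample).prev_sample
```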








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/edm_multistep_dpm_solver.md" />

### CosineDPMSolverMultistepScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/cosine_dpm.md

# CosineDPMSolverMultistepScheduler

The [CosineDPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/cosine_dpm#diffusers.CosineDPMSolverMultistepScheduler) is a variant of [DPMSolverMultistepScheduler](/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) with the cosine schedule proposed by Nichol and Dhariwal (2021).
It is used in the [Stable Audio Open](https://huggingface.co/papers/2407.14358) paper and the [Stability-AI/stable-audio-tools](https://github.com/Stability-AI/stable-audio-tools) codebase.

This scheduler was contributed by [Yoach Lacombe](https://huggingface.co/ylacombe).

## CosineDPMSolverMultistepScheduler[[diffusers.CosineDPMSolverMultistepScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CosineDPMSolverMultistepScheduler</name><anchor>diffusers.CosineDPMSolverMultistepScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L28</source><parameters>[{"name": "sigma_min", "val": ": float = 0.3"}, {"name": "sigma_max", "val": ": float = 500"}, {"name": "sigma_data", "val": ": float = 1.0"}, {"name": "sigma_schedule", "val": ": str = 'exponential'"}, {"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "prediction_type", "val": ": str = 'v_prediction'"}, {"name": "rho", "val": ": float = 7.0"}, {"name": "solver_type", "val": ": str = 'midpoint'"}, {"name": "lower_order_final", "val": ": bool = True"}, {"name": "euler_at_final", "val": ": bool = False"}, {"name": "final_sigmas_type", "val": ": typing.Optional[str] = 'zero'"}]</parameters><paramsdesc>- **sigma_min** (`float`, *optional*, defaults to 0.3) --
  Minimum noise magnitude in the sigma schedule. This was set to 0.3 in Stable Audio Open [1].
- **sigma_max** (`float`, *optional*, defaults to 500) --
  Maximum noise magnitude in the sigma schedule. This was set to 500 in Stable Audio Open [1].
- **sigma_data** (`float`, *optional*, defaults to 1.0) --
  The standard deviation of the data distribution. This is set to 1.0 in Stable Audio Open [1].
- **sigma_schedule** (`str`, *optional*, defaults to `exponential`) --
  Sigma schedule used to compute the `sigmas`. The `"karras"` schedule was introduced in the EDM paper
  (https://huggingface.co/papers/2206.00364); the other accepted value is `"exponential"`. The exponential
  schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl.
- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **solver_order** (`int`, defaults to 2) --
  The DPMSolver order which can be `1` or `2`. It is recommended to use `solver_order=2`.
- **prediction_type** (`str`, defaults to `v_prediction`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **solver_type** (`str`, defaults to `midpoint`) --
  Solver type for the second-order solver; can be `midpoint` or `heun`. The solver type slightly affects the
  sample quality, especially for a small number of steps. It is recommended to use the `midpoint` solver.
- **lower_order_final** (`bool`, defaults to `True`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
  stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.
- **euler_at_final** (`bool`, defaults to `False`) --
  Whether to use Euler's method in the final step. It is a trade-off between numerical stability and detail
  richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference
  steps, but sometimes may result in blurring.
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `"zero"`, the final sigma is set to 0.</paramsdesc><paramgroups>0</paramgroups></docstring>

Implements a variant of `DPMSolverMultistepScheduler` with the cosine schedule proposed by Nichol and Dhariwal (2021).
This scheduler was used in Stable Audio Open [1].

[1] Evans, Parker, et al. "Stable Audio Open" https://huggingface.co/papers/2407.14358

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
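
A minimal sketch of instantiating the scheduler with the Stable Audio Open values quoted above; the step count is illustrative.

```python
from diffusers import CosineDPMSolverMultistepScheduler

# Values matching the Stable Audio Open configuration described in the parameter list.
scheduler = CosineDPMSolverMultistepScheduler(sigma_min=0.3, sigma_max=500.0, sigma_data=1.0)

scheduler.set_timesteps(num_inference_steps=100)
print(scheduler.timesteps[:5])
```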





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L297</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sample", "val": ": Tensor = None"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is
designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
integral of the data prediction model.

> [!TIP]
> The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
> prediction and data prediction models.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dpm_solver_first_order_update</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.dpm_solver_first_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L325</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the first-order DPMSolver (equivalent to DDIM).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_dpm_solver_second_order_update</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.multistep_dpm_solver_second_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L360</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the second-order multistep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L174</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L135</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L198</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.CosineDPMSolverMultistepScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_cosine_dpmsolver_multistep.py#L451</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample at the previous timestep by reversing the SDE. This function propagates the sample with
the multistep DPMSolver.








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/cosine_dpm.md" />

### KDPM2AncestralDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/dpm_discrete_ancestral.md

# KDPM2AncestralDiscreteScheduler

The `KDPM2DiscreteScheduler` with ancestral sampling is inspired by the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper. The scheduler was created by [Katherine Crowson](https://github.com/crowsonkb/) and is ported from her original implementation.

The original codebase can be found at [crowsonkb/k-diffusion](https://github.com/crowsonkb/k-diffusion).
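
A common way to try this scheduler is to swap it into an existing pipeline with `from_config`; the checkpoint below is only an example, and any pipeline with a compatible scheduler configuration should work.

```python
from diffusers import DiffusionPipeline, KDPM2AncestralDiscreteScheduler

# Example checkpoint; substitute any pipeline whose scheduler config is compatible.
pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse on the moon").images[0]
```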

## KDPM2AncestralDiscreteScheduler[[diffusers.KDPM2AncestralDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KDPM2AncestralDiscreteScheduler</name><anchor>diffusers.KDPM2AncestralDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py#L96</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the [Elucidating
the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.KDPM2AncestralDiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py#L214</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.KDPM2AncestralDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py#L204</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.KDPM2AncestralDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py#L244</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "num_train_timesteps", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.KDPM2AncestralDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py#L475</source><parameters>[{"name": "model_output", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return a
  `KDPM2AncestralDiscreteSchedulerOutput` or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`KDPM2AncestralDiscreteSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`,
`KDPM2AncestralDiscreteSchedulerOutput` is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample at the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/dpm_discrete_ancestral.md" />

### DEISMultistepScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/deis.md

# DEISMultistepScheduler

Diffusion Exponential Integrator Sampler (DEIS) is proposed in [Fast Sampling of Diffusion Models with Exponential Integrator](https://huggingface.co/papers/2204.13902) by Qinsheng Zhang and Yongxin Chen. `DEISMultistepScheduler` is a fast high-order solver for diffusion ordinary differential equations (ODEs).

This implementation fits the polynomial in log-rho space instead of the original linear `t` space used in the DEIS paper. The modification enjoys closed-form coefficients for the exponential multistep update instead of relying on a numerical solver.

The abstract from the paper is:

*The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at [this https URL](https://github.com/qsh-zh/deis).*

## Tips

It is recommended to set `solver_order` to 2 or 3, while `solver_order=1` is equivalent to [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler).

Dynamic thresholding from [Imagen](https://huggingface.co/papers/2205.11487) is supported. For pixel-space
diffusion models, set `thresholding=True` to enable it.
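
The sketch below reflects these tips with illustrative values: a second-order solver and, assuming a pixel-space model, dynamic thresholding.

```python
from diffusers import DEISMultistepScheduler

# Illustrative configuration: second-order solver; thresholding is enabled here only
# because the (hypothetical) model operates in pixel space.
scheduler = DEISMultistepScheduler(solver_order=2, thresholding=True)
scheduler.set_timesteps(num_inference_steps=25)
```

When swapping into an existing pipeline, the same options can typically be passed as overrides, e.g. `DEISMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)`.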

## DEISMultistepScheduler[[diffusers.DEISMultistepScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DEISMultistepScheduler</name><anchor>diffusers.DEISMultistepScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L78</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Optional[numpy.ndarray] = None"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "algorithm_type", "val": ": str = 'deis'"}, {"name": "solver_type", "val": ": str = 'logrho'"}, {"name": "lower_order_final", "val": ": bool = True"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_flow_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "flow_shift", "val": ": typing.Optional[float] = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "use_dynamic_shifting", "val": ": bool = False"}, {"name": "time_shift_type", "val": ": str = 'exponential'"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **solver_order** (`int`, defaults to 2) --
  The DEIS order which can be `1` or `2` or `3`. It is recommended to use `solver_order=2` for guided
  sampling, and `solver_order=3` for unconditional sampling.
- **prediction_type** (`str`, defaults to `epsilon`) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **algorithm_type** (`str`, defaults to `deis`) --
  The algorithm type for the solver.
- **lower_order_final** (`bool`, defaults to `True`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

`DEISMultistepScheduler` is a fast high-order solver for diffusion ordinary differential equations (ODEs).

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.DEISMultistepScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L469</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the DEIS algorithm needs.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>deis_first_order_update</name><anchor>diffusers.DEISMultistepScheduler.deis_first_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L529</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **prev_timestep** (`int`) --
  The previous discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the first-order DEIS (equivalent to DDIM).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_deis_second_order_update</name><anchor>diffusers.DEISMultistepScheduler.multistep_deis_second_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L587</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the second-order multistep DEIS.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>multistep_deis_third_order_update</name><anchor>diffusers.DEISMultistepScheduler.multistep_deis_third_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L656</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the third-order multistep DEIS.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DEISMultistepScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L842</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.DEISMultistepScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L227</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DEISMultistepScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L237</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "mu", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DEISMultistepScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_deis_multistep.py#L777</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample at the previous timestep by reversing the SDE. This function propagates the sample with
the multistep DEIS.








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/deis.md" />

### KDPM2DiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/dpm_discrete.md

# KDPM2DiscreteScheduler

The `KDPM2DiscreteScheduler` is inspired by the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper. The scheduler was created by [Katherine Crowson](https://github.com/crowsonkb/) and is ported from her original implementation.

The original codebase can be found at [crowsonkb/k-diffusion](https://github.com/crowsonkb/k-diffusion).
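
A minimal sketch of instantiating the scheduler; enabling `use_karras_sigmas` is an illustrative choice that switches to the noise-level sequence described in the parameter list below.

```python
from diffusers import KDPM2DiscreteScheduler

# Illustrative: use the Karras et al. sigma schedule for the sampling noise levels.
scheduler = KDPM2DiscreteScheduler(use_karras_sigmas=True)
scheduler.set_timesteps(num_inference_steps=30)
```

Like the ancestral variant, it can also be swapped into an existing pipeline with `KDPM2DiscreteScheduler.from_config(pipe.scheduler.config)`.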

## KDPM2DiscreteScheduler[[diffusers.KDPM2DiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KDPM2DiscreteScheduler</name><anchor>diffusers.KDPM2DiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py#L95</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the [Elucidating the Design Space of
Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.KDPM2DiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py#L214</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.KDPM2DiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py#L204</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.KDPM2DiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py#L244</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "num_train_timesteps", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.KDPM2DiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_k_dpm_2_discrete.py#L460</source><parameters>[{"name": "model_output", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a `KDPM2DiscreteSchedulerOutput` or
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`KDPM2DiscreteSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `KDPM2DiscreteSchedulerOutput` is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample at the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.
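
For illustration, the two return forms look like this inside a denoising loop; `scheduler`, `noise_pred`, `t`, and `latents` are placeholder names assumed to come from an existing setup.

```python
# Placeholder names from an existing denoising loop; this only illustrates the
# two return forms of a scheduler's `step` call.
out = scheduler.step(noise_pred, t, latents)       # output dataclass
latents = out.prev_sample                          # next model input

latents = scheduler.step(noise_pred, t, latents, return_dict=False)[0]  # plain tuple
```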




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/dpm_discrete.md" />

### FlowMatchHeunDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/flow_match_heun_discrete.md

# FlowMatchHeunDiscreteScheduler

`FlowMatchHeunDiscreteScheduler` is based on the flow-matching sampling introduced in [Stable Diffusion 3](https://huggingface.co/papers/2403.03206).
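
A minimal sketch of constructing the scheduler and preparing an inference schedule; the step count of 28 is an arbitrary example value.

```python
from diffusers import FlowMatchHeunDiscreteScheduler

# Instantiate with the documented defaults and build a 28-step inference
# schedule (28 is an arbitrary example value).
scheduler = FlowMatchHeunDiscreteScheduler(num_train_timesteps=1000, shift=1.0)
scheduler.set_timesteps(num_inference_steps=28)
print(scheduler.timesteps)
```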

## FlowMatchHeunDiscreteScheduler[[diffusers.FlowMatchHeunDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.FlowMatchHeunDiscreteScheduler</name><anchor>diffusers.FlowMatchHeunDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_heun_discrete.py#L44</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "shift", "val": ": float = 1.0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **shift** (`float`, defaults to 1.0) --
  The shift value for the timestep schedule.</paramsdesc><paramgroups>0</paramgroups></docstring>

Heun scheduler.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_noise</name><anchor>diffusers.FlowMatchHeunDiscreteScheduler.scale_noise</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_heun_discrete.py#L110</source><parameters>[{"name": "sample", "val": ": FloatTensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.FloatTensor]"}, {"name": "noise", "val": ": typing.Optional[torch.FloatTensor] = None"}]</parameters><paramsdesc>- **sample** (`torch.FloatTensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.FloatTensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Forward process in flow-matching.
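
For intuition, the forward process interpolates linearly between the clean sample and noise at the current noise level; a rough sketch (not the library implementation), assuming `sigma` lies in `[0, 1]`:

```python
import torch

# Rough sketch of the flow-matching forward process: the noisy sample is a
# linear interpolation between the data and pure noise at level `sigma`.
def scale_noise_sketch(sample: torch.Tensor, noise: torch.Tensor, sigma: float) -> torch.Tensor:
    return sigma * noise + (1.0 - sigma) * sample
```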








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.FlowMatchHeunDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_heun_discrete.py#L100</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.FlowMatchHeunDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_heun_discrete.py#L140</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.FlowMatchHeunDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_heun_discrete.py#L200</source><parameters>[{"name": "model_output", "val": ": FloatTensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.FloatTensor]"}, {"name": "sample", "val": ": FloatTensor"}, {"name": "s_churn", "val": ": float = 0.0"}, {"name": "s_tmin", "val": ": float = 0.0"}, {"name": "s_tmax", "val": ": float = inf"}, {"name": "s_noise", "val": ": float = 1.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.FloatTensor`) --
  The direct output from learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.FloatTensor`) --
  A current instance of a sample created by the diffusion process.
- **s_churn** (`float`) --
  The parameter controlling the overall amount of stochasticity.
- **s_tmin** (`float`) --
  The start value of the sigma range where noise is added (enables stochasticity).
- **s_tmax** (`float`) --
  The end value of the sigma range where noise is added.
- **s_noise** (`float`, defaults to 1.0) --
  Scaling factor for noise added to the sample.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`) --
  Whether or not to return a `FlowMatchHeunDiscreteSchedulerOutput` or
  tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>`FlowMatchHeunDiscreteSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`,
`FlowMatchHeunDiscreteSchedulerOutput` is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/flow_match_heun_discrete.md" />

### LMSDiscreteScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/lms_discrete.md

# LMSDiscreteScheduler

`LMSDiscreteScheduler` is a linear multistep scheduler for discrete beta schedules. The scheduler was created by [Katherine Crowson](https://github.com/crowsonkb/), and the original implementation can be found at [crowsonkb/k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181).
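
A common way to use this scheduler is to swap it into an existing pipeline via `from_config`; the checkpoint name below is only an example.

```python
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion checkpoint with a compatible
# scheduler config could be used instead.
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse").images[0]
```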

## LMSDiscreteScheduler[[diffusers.LMSDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.LMSDiscreteScheduler</name><anchor>diffusers.LMSDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L93</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

A linear multistep scheduler for discrete beta schedules.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_lms_coefficient</name><anchor>diffusers.LMSDiscreteScheduler.get_lms_coefficient</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L241</source><parameters>[{"name": "order", "val": ""}, {"name": "t", "val": ""}, {"name": "current_order", "val": ""}]</parameters><paramsdesc>- **order** () --
  The order of the linear multistep method.
- **t** () --
  The index of the current timestep.
- **current_order** () --
  The index of the coefficient being computed.</paramsdesc><paramgroups>0</paramgroups></docstring>

Compute the linear multistep coefficient.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.LMSDiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L217</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`float` or `torch.Tensor`) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.LMSDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L207</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from the pipeline before inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.LMSDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L263</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.LMSDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L437</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "order", "val": ": int = 4"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`float` or `torch.Tensor`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **order** (`int`, defaults to 4) --
  The order of the linear multistep method.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).
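
A bare-bones denoising loop showing how `scale_model_input` and `step` are typically combined; `scheduler`, `unet`, `latents`, and `prompt_embeds` are assumed to come from an existing Stable Diffusion setup.

```python
# Sketch of a manual denoising loop with LMSDiscreteScheduler; `unet`, `latents`,
# and `prompt_embeds` are assumed to already exist.
scheduler.set_timesteps(50)
latents = latents * scheduler.init_noise_sigma  # scale the initial noise

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latents, t)
    noise_pred = unet(model_input, t, encoder_hidden_states=prompt_embeds).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample
```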








</div></div>

## LMSDiscreteSchedulerOutput[[diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lms_discrete.py#L31</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/lms_discrete.md" />

### KarrasVeScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/stochastic_karras_ve.md

# KarrasVeScheduler

`KarrasVeScheduler` is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) and [Score-based generative modeling through stochastic differential equations](https://huggingface.co/papers/2011.13456) papers.

## KarrasVeScheduler[[diffusers.KarrasVeScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.KarrasVeScheduler</name><anchor>diffusers.KarrasVeScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L49</source><parameters>[{"name": "sigma_min", "val": ": float = 0.02"}, {"name": "sigma_max", "val": ": float = 100"}, {"name": "s_noise", "val": ": float = 1.007"}, {"name": "s_churn", "val": ": float = 80"}, {"name": "s_min", "val": ": float = 0.05"}, {"name": "s_max", "val": ": float = 50"}]</parameters><paramsdesc>- **sigma_min** (`float`, defaults to 0.02) --
  The minimum noise magnitude.
- **sigma_max** (`float`, defaults to 100) --
  The maximum noise magnitude.
- **s_noise** (`float`, defaults to 1.007) --
  The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000,
  1.011].
- **s_churn** (`float`, defaults to 80) --
  The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100].
- **s_min** (`float`, defaults to 0.05) --
  The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10].
- **s_max** (`float`, defaults to 50) --
  The end value of the sigma range to add noise. A reasonable range is [0.2, 80].</paramsdesc><paramgroups>0</paramgroups></docstring>

A stochastic scheduler tailored to variance-expanding models.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.

> [!TIP]
> For more details on the parameters, see [Appendix E](https://huggingface.co/papers/2206.00364). The grid search values used to find the optimal `{s_noise, s_churn, s_min, s_max}` for a specific model are described in Table 5 of the paper.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_noise_to_input</name><anchor>diffusers.KarrasVeScheduler.add_noise_to_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L135</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "sigma", "val": ": float"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **sigma** (`float`) --
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.</paramsdesc><paramgroups>0</paramgroups></docstring>

Explicit Langevin-like "churn" step of adding noise to the sample according to a `gamma_i ≥ 0` to reach a
higher noise level `sigma_hat = sigma_i + gamma_i*sigma_i`.
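
For illustration only, the churn step can be written out directly; this is a sketch of the formula above, not the library implementation.

```python
import torch

# Raise the noise level from `sigma` to `sigma_hat = sigma + gamma * sigma` and
# add matching Gaussian noise so the sample sits at the higher noise level.
def churn_sketch(sample: torch.Tensor, sigma: float, gamma: float, s_noise: float = 1.007):
    sigma_hat = sigma + gamma * sigma
    eps = s_noise * torch.randn_like(sample)
    sample_hat = sample + ((sigma_hat**2 - sigma**2) ** 0.5) * eps
    return sample_hat, sigma_hat
```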




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.KarrasVeScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L96</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.KarrasVeScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L113</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.KarrasVeScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L161</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sigma_hat", "val": ": float"}, {"name": "sigma_prev", "val": ": float"}, {"name": "sample_hat", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **sigma_hat** (`float`) --
- **sigma_prev** (`float`) --
- **sample_hat** (`torch.Tensor`) --
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput` or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput` is returned,
otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step_correct</name><anchor>diffusers.KarrasVeScheduler.step_correct</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L200</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sigma_hat", "val": ": float"}, {"name": "sigma_prev", "val": ": float"}, {"name": "sample_hat", "val": ": Tensor"}, {"name": "sample_prev", "val": ": Tensor"}, {"name": "derivative", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **sigma_hat** (`float`) -- TODO
- **sigma_prev** (`float`) -- TODO
- **sample_hat** (`torch.Tensor`) -- TODO
- **sample_prev** (`torch.Tensor`) -- TODO
- **derivative** (`torch.Tensor`) -- TODO
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [DDPMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>prev_sample (TODO)</rettype><retdesc>updated sample in the diffusion chain. derivative (TODO): TODO</retdesc></docstring>

Corrects the predicted sample based on the `model_output` of the network.








</div></div>

## KarrasVeOutput[[diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput</name><anchor>diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/deprecated/scheduling_karras_ve.py#L29</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "derivative", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **derivative** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Derivative of predicted original image sample (x_0).
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample (x_{0}) based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's step function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/stochastic_karras_ve.md" />

### ScoreSdeVeScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/score_sde_ve.md

# ScoreSdeVeScheduler

`ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the [Score-Based Generative Modeling through Stochastic Differential Equations](https://huggingface.co/papers/2011.13456) paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole.

The abstract from the paper is:

*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*

## ScoreSdeVeScheduler[[diffusers.ScoreSdeVeScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.ScoreSdeVeScheduler</name><anchor>diffusers.ScoreSdeVeScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L46</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 2000"}, {"name": "snr", "val": ": float = 0.15"}, {"name": "sigma_min", "val": ": float = 0.01"}, {"name": "sigma_max", "val": ": float = 1348.0"}, {"name": "sampling_eps", "val": ": float = 1e-05"}, {"name": "correct_steps", "val": ": int = 1"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 2000) --
  The number of diffusion steps to train the model.
- **snr** (`float`, defaults to 0.15) --
  A coefficient weighting the step from the `model_output` sample (from the network) to the random noise.
- **sigma_min** (`float`, defaults to 0.01) --
  The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror
  the distribution of the data.
- **sigma_max** (`float`, defaults to 1348.0) --
  The maximum value used for the range of continuous timesteps passed into the model.
- **sampling_eps** (`float`, defaults to 1e-5) --
  The end value of sampling where timesteps decrease progressively from 1 to epsilon.
- **correct_steps** (`int`, defaults to 1) --
  The number of correction steps performed on a produced sample.</paramsdesc><paramgroups>0</paramgroups></docstring>

`ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.ScoreSdeVeScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L89</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_sigmas</name><anchor>diffusers.ScoreSdeVeScheduler.set_sigmas</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L125</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "sigma_min", "val": ": float = None"}, {"name": "sigma_max", "val": ": float = None"}, {"name": "sampling_eps", "val": ": float = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **sigma_min** (`float`, optional) --
  The initial noise scale value (overrides value given during scheduler instantiation).
- **sigma_max** (`float`, optional) --
  The final noise scale value (overrides value given during scheduler instantiation).
- **sampling_eps** (`float`, optional) --
  The final timestep value (overrides value given during scheduler instantiation).</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight
of the `drift` and `diffusion` components of the sample update.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.ScoreSdeVeScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L106</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "sampling_eps", "val": ": float = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **sampling_eps** (`float`, *optional*) --
  The final timestep value (overrides value given during scheduler instantiation).
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the continuous timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step_correct</name><anchor>diffusers.ScoreSdeVeScheduler.step_correct</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L228</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [SdeVeOutput](/docs/diffusers/main/en/api/schedulers/score_sde_ve#diffusers.schedulers.scheduling_sde_ve.SdeVeOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SdeVeOutput](/docs/diffusers/main/en/api/schedulers/score_sde_ve#diffusers.schedulers.scheduling_sde_ve.SdeVeOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SdeVeOutput](/docs/diffusers/main/en/api/schedulers/score_sde_ve#diffusers.schedulers.scheduling_sde_ve.SdeVeOutput) is returned, otherwise a tuple
is returned where the first element is the sample tensor.</retdesc></docstring>

Correct the predicted sample based on the `model_output` of the network. This is often run repeatedly after
making the prediction for the previous timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step_pred</name><anchor>diffusers.ScoreSdeVeScheduler.step_pred</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L160</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [SdeVeOutput](/docs/diffusers/main/en/api/schedulers/score_sde_ve#diffusers.schedulers.scheduling_sde_ve.SdeVeOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SdeVeOutput](/docs/diffusers/main/en/api/schedulers/score_sde_ve#diffusers.schedulers.scheduling_sde_ve.SdeVeOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SdeVeOutput](/docs/diffusers/main/en/api/schedulers/score_sde_ve#diffusers.schedulers.scheduling_sde_ve.SdeVeOutput) is returned, otherwise a tuple
is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).
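
A sketch of the predictor-corrector loop that interleaves `step_correct` and `step_pred`; `unet` and the sample shape are placeholders standing in for a trained score model and its expected input size.

```python
import torch
from diffusers import ScoreSdeVeScheduler

scheduler = ScoreSdeVeScheduler()
scheduler.set_timesteps(num_inference_steps=100)
scheduler.set_sigmas(num_inference_steps=100)

# `unet` is a placeholder for a trained score model; the shape is an example.
sample = torch.randn(1, 3, 256, 256) * scheduler.config.sigma_max

for i, t in enumerate(scheduler.timesteps):
    sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0])

    # corrector: refine the current sample `correct_steps` times
    for _ in range(scheduler.config.correct_steps):
        model_output = unet(sample, sigma_t).sample
        sample = scheduler.step_correct(model_output, sample).prev_sample

    # predictor: move the sample to the previous (less noisy) timestep
    model_output = unet(sample, sigma_t).sample
    output = scheduler.step_pred(model_output, t, sample)
    sample, sample_mean = output.prev_sample, output.prev_sample_mean
```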








</div></div>

## SdeVeOutput[[diffusers.schedulers.scheduling_sde_ve.SdeVeOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput</name><anchor>diffusers.schedulers.scheduling_sde_ve.SdeVeOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py#L30</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "prev_sample_mean", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **prev_sample_mean** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Mean averaged `prev_sample` over previous timesteps.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/score_sde_ve.md" />

### CogVideoXDDIMScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/ddim_cogvideox.md

# CogVideoXDDIMScheduler

`CogVideoXDDIMScheduler` is based on [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502), specifically for CogVideoX models.
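
The scheduler is usually rebuilt from an existing CogVideoX pipeline's scheduler config; the checkpoint name below is only an example.

```python
import torch
from diffusers import CogVideoXDDIMScheduler, CogVideoXPipeline

# Example checkpoint; the scheduler inherits its beta schedule and other
# settings from the pipeline's existing scheduler config.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe.scheduler = CogVideoXDDIMScheduler.from_config(pipe.scheduler.config)
```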

## CogVideoXDDIMScheduler[[diffusers.CogVideoXDDIMScheduler]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.CogVideoXDDIMScheduler</name><anchor>diffusers.CogVideoXDDIMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L126</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'scaled_linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "clip_sample", "val": ": bool = True"}, {"name": "set_alpha_to_one", "val": ": bool = True"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "snr_shift_scale", "val": ": float = 3.0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"scaled_linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **clip_sample** (`bool`, defaults to `True`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **set_alpha_to_one** (`bool`, defaults to `True`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
  otherwise it uses the alpha value at step 0.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`CogVideoXDDIMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models
(DDPMs) with non-Markovian guidance.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.CogVideoXDDIMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L243</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.CogVideoXDDIMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L260</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.CogVideoXDDIMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L305</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "eta", "val": ": float = 0.0"}, {"name": "use_clipped_model_output", "val": ": bool = False"}, {"name": "generator", "val": " = None"}, {"name": "variance_noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **eta** (`float`) --
  The weight of noise for added noise in diffusion step.
- **use_clipped_model_output** (`bool`, defaults to `False`) --
  If `True`, computes "corrected" `model_output` from the clipped predicted original sample. Necessary
  because predicted original sample is clipped to [-1, 1] when `self.config.clip_sample` is `True`. If no
  clipping has happened, "corrected" `model_output` would coincide with the one provided as input and
  `use_clipped_model_output` has no effect.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **variance_noise** (`torch.Tensor`) --
  Alternative to generating noise with `generator` by directly providing the noise for the variance
  itself. Useful for methods such as `CycleDiffusion`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [DDIMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ddim_cogvideox.md" />

### DDIMInverseScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/ddim_inverse.md

# DDIMInverseScheduler

`DDIMInverseScheduler` is the inverted scheduler from [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The implementation is mostly based on the DDIM inversion definition from [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://huggingface.co/papers/2211.09794).
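
A common pattern is to build the inverse scheduler from the forward DDIM scheduler's config so both directions share the same noise schedule; the checkpoint name below is only an example.

```python
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionPipeline

# Example checkpoint; the inverse scheduler reuses the forward scheduler's config
# so inversion and sampling use the same beta schedule.
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
```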

## DDIMInverseScheduler[[diffusers.DDIMInverseScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DDIMInverseScheduler</name><anchor>diffusers.DDIMInverseScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_inverse.py#L130</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "clip_sample", "val": ": bool = True"}, {"name": "set_alpha_to_one", "val": ": bool = True"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **clip_sample** (`bool`, defaults to `True`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **set_alpha_to_one** (`bool`, defaults to `True`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to 0, otherwise
  it uses the alpha value at step `num_train_timesteps - 1`.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`DDIMInverseScheduler` is the reverse scheduler of [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler).

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DDIMInverseScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_inverse.py#L234</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DDIMInverseScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_inverse.py#L251</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DDIMInverseScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_inverse.py#L289</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`float`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **eta** (`float`) --
  The weight of noise for added noise in diffusion step.
- **use_clipped_model_output** (`bool`, defaults to `False`) --
  If `True`, computes "corrected" `model_output` from the clipped predicted original sample. Necessary
  because predicted original sample is clipped to [-1, 1] when `self.config.clip_sample` is `True`. If no
  clipping has happened, "corrected" `model_output` would coincide with the one provided as input and
  `use_clipped_model_output` has no effect.
- **variance_noise** (`torch.Tensor`) --
  Alternative to generating noise with `generator` by directly providing the noise for the variance
  itself. Useful for methods such as `CycleDiffusion`.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a `~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput` or
  `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput` is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ddim_inverse.md" />

### DDPMScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/ddpm.md

# DDPMScheduler

[Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2006.11239) (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline.

The abstract from the paper is:

*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at [this https URL](https://github.com/hojonathanho/diffusion).*

## DDPMScheduler[[diffusers.DDPMScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DDPMScheduler</name><anchor>diffusers.DDPMScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py#L129</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "variance_type", "val": ": str = 'fixed_small'"}, {"name": "clip_sample", "val": ": bool = True"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, `squaredcos_cap_v2`, or `sigmoid`.
- **trained_betas** (`np.ndarray`, *optional*) --
  An array of betas to pass directly to the constructor without using `beta_start` and `beta_end`.
- **variance_type** (`str`, defaults to `"fixed_small"`) --
  Clip the variance when adding noise to the denoised sample. Choose from `fixed_small`, `fixed_small_log`,
  `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
- **clip_sample** (`bool`, defaults to `True`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`DDPMScheduler` explores the connections between denoising score matching and Langevin dynamics sampling.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
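
In practice the scheduler is used through the methods documented below: `set_timesteps` fixes the discrete schedule, the model is queried at each timestep, and `step` produces the previous (less noisy) sample. A minimal unconditional denoising loop might look like the following sketch; the checkpoint name is only an example of a DDPM-trained `UNet2DModel`.

```python
import torch

from diffusers import DDPMScheduler, UNet2DModel

# Example checkpoint; any unconditional UNet2DModel trained with DDPM works the same way.
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")

scheduler.set_timesteps(num_inference_steps=50)
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample                     # predicted noise epsilon
    sample = scheduler.step(noise_pred, t, sample).prev_sample   # x_t -> x_{t-1}
```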





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DDPMScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py#L234</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DDPMScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py#L251</source><parameters>[{"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model. If used,
  `timesteps` must be `None`.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
  timestep spacing strategy of equal spacing between timesteps is used. If `timesteps` is passed,
  `num_inference_steps` must be `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DDPMScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py#L398</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [DDPMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[DDPMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [DDPMSchedulerOutput](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## DDPMSchedulerOutput[[diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py#L31</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
  `pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/ddpm.md" />

### TCDScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/tcd.md

# TCDScheduler

[Trajectory Consistency Distillation](https://huggingface.co/papers/2402.19159) by Jianbin Zheng, Minghui Hu, Zhongyi Fan, Chaoyue Wang, Changxing Ding, Dacheng Tao, and Tat-Jen Cham introduced Strategic Stochastic Sampling (Algorithm 4), which can generate good samples in a small number of steps. Strategic Stochastic Sampling is an advanced iteration of the multistep scheduler (Algorithm 1) in [Consistency Models](https://huggingface.co/papers/2303.01469), specifically tailored to the trajectory consistency function.

The abstract from the paper is:

*Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy. To address this limitation, we initially delve into and elucidate the underlying causes. Our investigation identifies that the primary issue stems from errors in three distinct areas. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses trajectory consistency function and strategic stochastic sampling. The trajectory consistency function diminishes the distillation errors by broadening the scope of the self-consistency boundary condition and endowing the TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE. Additionally, strategic stochastic sampling is specifically designed to circumvent the accumulated errors inherent in multi-step consistency sampling, which is meticulously tailored to complement the TCD model. Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs.*

The original codebase can be found at [jabir-zheng/TCD](https://github.com/jabir-zheng/TCD).

## TCDScheduler[[diffusers.TCDScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.TCDScheduler</name><anchor>diffusers.TCDScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_tcd.py#L133</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'scaled_linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "original_inference_steps", "val": ": int = 50"}, {"name": "clip_sample", "val": ": bool = False"}, {"name": "clip_sample_range", "val": ": float = 1.0"}, {"name": "set_alpha_to_one", "val": ": bool = True"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "timestep_spacing", "val": ": str = 'leading'"}, {"name": "timestep_scaling", "val": ": float = 10.0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"scaled_linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **original_inference_steps** (`int`, *optional*, defaults to 50) --
  The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we
  will ultimately take `num_inference_steps` evenly spaced timesteps to form the final timestep schedule.
- **clip_sample** (`bool`, defaults to `False`) --
  Clip the predicted sample for numerical stability.
- **clip_sample_range** (`float`, defaults to 1.0) --
  The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
- **set_alpha_to_one** (`bool`, defaults to `True`) --
  Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
  there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
  otherwise it uses the alpha value at step 0.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
- **timestep_spacing** (`str`, defaults to `"leading"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **timestep_scaling** (`float`, defaults to 10.0) --
  The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions
  `c_skip` and `c_out`. Increasing this will decrease the approximation error (although the approximation
  error at the default of `10.0` is already pretty small).
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
  Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
  dark samples instead of limiting it to samples with medium brightness. Loosely related to
  [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).</paramsdesc><paramgroups>0</paramgroups></docstring>

`TCDScheduler` incorporates the Strategic Stochastic Sampling introduced in the Trajectory Consistency Distillation paper, extending the original Multistep Consistency Sampling to enable unrestricted trajectory traversal.

This code is based on the official TCD repository, [jabir-zheng/TCD](https://github.com/jabir-zheng/TCD).

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin) takes care of storing all config
attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be
accessed via `scheduler.config.num_train_timesteps`. [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) provides general loading and saving
functionality via the [SchedulerMixin.save_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.save_pretrained) and [from_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.from_pretrained) functions.
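
Because `TCDScheduler` targets few-step sampling, the usual pattern is to swap it into an existing pipeline with `from_config`, load a TCD-distilled LoRA, and sample with a small `num_inference_steps` together with the `eta` (gamma) parameter documented in `step` below. The snippet below is a sketch of that pattern; the SDXL base model and TCD-LoRA checkpoint names are common examples rather than requirements.

```python
import torch

from diffusers import StableDiffusionXLPipeline, TCDScheduler

# Example checkpoints; any TCD-distilled LoRA matching the base model can be used.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    num_inference_steps=4,   # TCD is designed for few-step sampling
    guidance_scale=0.0,      # distilled models are usually run without CFG
    eta=0.3,                 # stochasticity of Strategic Stochastic Sampling (gamma)
).images[0]
```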





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.TCDScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_tcd.py#L300</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.TCDScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_tcd.py#L290</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from pipeline before the inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.TCDScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_tcd.py#L362</source><parameters>[{"name": "num_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "original_inference_steps", "val": ": typing.Optional[int] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "strength", "val": ": float = 1.0"}]</parameters><paramsdesc>- **num_inference_steps** (`int`, *optional*) --
  The number of diffusion steps used when generating samples with a pre-trained model. If used,
  `timesteps` must be `None`.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **original_inference_steps** (`int`, *optional*) --
  The original number of inference steps, which will be used to generate a linearly-spaced timestep
  schedule (which is different from the standard `diffusers` implementation). We will then take
  `num_inference_steps` timesteps from this schedule, evenly spaced in terms of indices, and use that as
  our final timestep schedule. If not set, this will default to the `original_inference_steps` attribute.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
  timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep
  schedule is used. If `timesteps` is passed, `num_inference_steps` must be `None`.
- **strength** (`float`, *optional*, defaults to 1.0) --
  Used to determine the number of timesteps used for inference when using img2img, inpaint, etc.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.TCDScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_tcd.py#L524</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": int"}, {"name": "sample", "val": ": Tensor"}, {"name": "eta", "val": ": float = 0.3"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **eta** (`float`) --
  A stochastic parameter (referred to as `gamma` in the paper) used to control the stochasticity in every
  step. When eta = 0, it represents deterministic sampling, whereas eta = 1 indicates full stochastic
  sampling.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a [TCDSchedulerOutput](/docs/diffusers/main/en/api/schedulers/tcd#diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~schedulers.scheduling_utils.TCDSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, [TCDSchedulerOutput](/docs/diffusers/main/en/api/schedulers/tcd#diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## TCDSchedulerOutput[[diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_tcd.py#L35</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_noised_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.
- **pred_noised_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  The predicted noised sample `(x_{s})` based on the model output from the current timestep.</paramsdesc><paramgroups>0</paramgroups></docstring>

Output class for the scheduler's `step` function output.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/tcd.md" />

### DPMSolverSDEScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/dpm_sde.md

# DPMSolverSDEScheduler

The `DPMSolverSDEScheduler` is inspired by the stochastic sampler from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper, and the scheduler is ported from and created by [Katherine Crowson](https://github.com/crowsonkb/).

## DPMSolverSDEScheduler[[diffusers.DPMSolverSDEScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DPMSolverSDEScheduler</name><anchor>diffusers.DPMSolverSDEScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_sde.py#L161</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.00085"}, {"name": "beta_end", "val": ": float = 0.012"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "noise_sampler_seed", "val": ": typing.Optional[int] = None"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "steps_offset", "val": ": int = 0"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.00085) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.012) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **noise_sampler_seed** (`int`, *optional*, defaults to `None`) --
  The random seed to use for the noise sampler. If `None`, a random seed is generated.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
  The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
  Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
- **steps_offset** (`int`, defaults to 0) --
  An offset added to the inference steps, as required by some model families.</paramsdesc><paramgroups>0</paramgroups></docstring>

DPMSolverSDEScheduler implements the stochastic sampler from the [Elucidating the Design Space of Diffusion-Based
Generative Models](https://huggingface.co/papers/2206.00364) paper.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
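
As with other schedulers, `DPMSolverSDEScheduler` can be dropped into an existing pipeline by rebuilding it from the pipeline's scheduler config. A minimal sketch, with the checkpoint name shown only as an example:

```python
import torch

from diffusers import DPMSolverSDEScheduler, StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion pipeline can be configured the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the SDE scheduler, optionally enabling Karras sigmas.
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe("a cozy cabin in a snowy forest, watercolor", num_inference_steps=25).images[0]
```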





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DPMSolverSDEScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_sde.py#L309</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`float` or `torch.Tensor`) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.DPMSolverSDEScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_sde.py#L299</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from pipeline before the inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DPMSolverSDEScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_sde.py#L336</source><parameters>[{"name": "num_inference_steps", "val": ": int"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "num_train_timesteps", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DPMSolverSDEScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_sde.py#L526</source><parameters>[{"name": "model_output", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": typing.Union[torch.Tensor, numpy.ndarray]"}, {"name": "return_dict", "val": ": bool = True"}, {"name": "s_noise", "val": ": float = 1.0"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor` or `np.ndarray`) --
  The direct output from learned diffusion model.
- **timestep** (`float` or `torch.Tensor`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor` or `np.ndarray`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a `DPMSolverSDESchedulerOutput` or
  tuple.
- **s_noise** (`float`, *optional*, defaults to 1.0) --
  Scaling factor for noise added to the sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`DPMSolverSDESchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`, `DPMSolverSDESchedulerOutput` is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/dpm_sde.md" />

### ConsistencyDecoderScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/consistency_decoder.md

# ConsistencyDecoderScheduler

This scheduler is a part of the `ConsistencyDecoderPipeline` and was introduced in [DALL-E 3](https://openai.com/dall-e-3).

The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models).
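
The scheduler is rarely used on its own; it drives the decoder inside `ConsistencyDecoderVAE`, which can replace the default VAE of a Stable Diffusion pipeline. A brief sketch, with checkpoint names shown only as examples:

```python
import torch

from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

# Example checkpoints; the consistency decoder replaces the default VAE decoder.
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait of a fox in a library, oil painting").images[0]
```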


## ConsistencyDecoderScheduler[[diffusers.schedulers.ConsistencyDecoderScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.ConsistencyDecoderScheduler</name><anchor>diffusers.schedulers.ConsistencyDecoderScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_decoder.py#L72</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1024"}, {"name": "sigma_data", "val": ": float = 0.5"}]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.schedulers.ConsistencyDecoderScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_decoder.py#L116</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.
- **timestep** (`int`, *optional*) --
  The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.schedulers.ConsistencyDecoderScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_consistency_decoder.py#L133</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`float`) --
  The current timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **generator** (`torch.Generator`, *optional*) --
  A random number generator.
- **return_dict** (`bool`, *optional*, defaults to `True`) --
  Whether or not to return a
  `~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput` or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>`~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput` or `tuple`</rettype><retdesc>If return_dict is `True`,
`~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput` is returned, otherwise
a tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/consistency_decoder.md" />

### DPMSolverSinglestepScheduler
https://huggingface.co/docs/diffusers/main/api/schedulers/singlestep_dpm_solver.md

# DPMSolverSinglestepScheduler

`DPMSolverSinglestepScheduler` is a single step scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.

DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality
samples, and it can generate quite good samples even in 10 steps.

The original implementation can be found at [LuChengTHU/dpm-solver](https://github.com/LuChengTHU/dpm-solver).

## Tips

It is recommended to set `solver_order=2` for guided sampling, and `solver_order=3` for unconditional sampling.

Dynamic thresholding from [Imagen](https://huggingface.co/papers/2205.11487) is supported, and for pixel-space
diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use dynamic
thresholding. This thresholding method is unsuitable for latent-space diffusion models such as
Stable Diffusion.
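
For a pixel-space model, that configuration could be applied like the sketch below; `pipe` stands for an already-loaded pixel-space diffusion pipeline and is only a placeholder.

```python
from diffusers import DPMSolverSinglestepScheduler

# `pipe` is assumed to be a loaded pixel-space pipeline; dynamic thresholding should
# not be enabled for latent-space models such as Stable Diffusion.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="dpmsolver++",
    solver_order=2,          # recommended for guided sampling
    thresholding=True,       # dynamic thresholding from the Imagen paper
)
```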

## DPMSolverSinglestepScheduler[[diffusers.DPMSolverSinglestepScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.DPMSolverSinglestepScheduler</name><anchor>diffusers.DPMSolverSinglestepScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L80</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Optional[numpy.ndarray] = None"}, {"name": "solver_order", "val": ": int = 2"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "thresholding", "val": ": bool = False"}, {"name": "dynamic_thresholding_ratio", "val": ": float = 0.995"}, {"name": "sample_max_value", "val": ": float = 1.0"}, {"name": "algorithm_type", "val": ": str = 'dpmsolver++'"}, {"name": "solver_type", "val": ": str = 'midpoint'"}, {"name": "lower_order_final", "val": ": bool = False"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_flow_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "flow_shift", "val": ": typing.Optional[float] = 1.0"}, {"name": "final_sigmas_type", "val": ": typing.Optional[str] = 'zero'"}, {"name": "lambda_min_clipped", "val": ": float = -inf"}, {"name": "variance_type", "val": ": typing.Optional[str] = None"}, {"name": "use_dynamic_shifting", "val": ": bool = False"}, {"name": "time_shift_type", "val": ": str = 'exponential'"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
  The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
  The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
  The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
  The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
  `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- **trained_betas** (`np.ndarray`, *optional*) --
  Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **solver_order** (`int`, defaults to 2) --
  The DPMSolver order which can be `1` or `2` or `3`. It is recommended to use `solver_order=2` for guided
  sampling, and `solver_order=3` for unconditional sampling.
- **prediction_type** (`str`, defaults to `epsilon`, *optional*) --
  Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
  `sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
  Video](https://imagen.research.google/video/paper.pdf) paper).
- **thresholding** (`bool`, defaults to `False`) --
  Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
  as Stable Diffusion.
- **dynamic_thresholding_ratio** (`float`, defaults to 0.995) --
  The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
- **sample_max_value** (`float`, defaults to 1.0) --
  The threshold value for dynamic thresholding. Valid only when `thresholding=True` and
  `algorithm_type="dpmsolver++"`.
- **algorithm_type** (`str`, defaults to `dpmsolver++`) --
  Algorithm type for the solver; can be `dpmsolver` or `dpmsolver++` or `sde-dpmsolver++`. The `dpmsolver`
  type implements the algorithms in the [DPMSolver](https://huggingface.co/papers/2206.00927) paper, and the
  `dpmsolver++` type implements the algorithms in the [DPMSolver++](https://huggingface.co/papers/2211.01095)
  paper. It is recommended to use `dpmsolver++` or `sde-dpmsolver++` with `solver_order=2` for guided
  sampling like in Stable Diffusion.
- **solver_type** (`str`, defaults to `midpoint`) --
  Solver type for the second-order solver; can be `midpoint` or `heun`. The solver type slightly affects the
  sample quality, especially for a small number of steps. It is recommended to use `midpoint` solvers.
- **lower_order_final** (`bool`, defaults to `False`) --
  Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can
  stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
  the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
  Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
  Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **final_sigmas_type** (`str`, *optional*, defaults to `"zero"`) --
  The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
  sigma is the same as the last sigma in the training schedule. If `zero`, the final sigma is set to 0.
- **lambda_min_clipped** (`float`, defaults to `-inf`) --
  Clipping threshold for the minimum value of `lambda(t)` for numerical stability. This is critical for the
  cosine (`squaredcos_cap_v2`) noise schedule.
- **variance_type** (`str`, *optional*) --
  Set to "learned" or "learned_range" for diffusion models that predict variance. If set, the model's output
  contains the predicted Gaussian variance.</paramsdesc><paramgroups>0</paramgroups></docstring>

`DPMSolverSinglestepScheduler` is a fast dedicated high-order solver for diffusion ODEs.

This model inherits from [SchedulerMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>convert_model_output</name><anchor>diffusers.DPMSolverSinglestepScheduler.convert_model_output</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L559</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The converted model output.</retdesc></docstring>

Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is
designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an
integral of the data prediction model.

> [!TIP]
> The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise
> prediction and data prediction models.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>dpm_solver_first_order_update</name><anchor>diffusers.DPMSolverSinglestepScheduler.dpm_solver_first_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L655</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **prev_timestep** (`int`) --
  The previous discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the first-order DPMSolver (equivalent to DDIM).








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_order_list</name><anchor>diffusers.DPMSolverSinglestepScheduler.get_order_list</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L237</source><parameters>[{"name": "num_inference_steps", "val": ": int"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.</paramsdesc><paramgroups>0</paramgroups></docstring>

Computes the solver order at each time step.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>scale_model_input</name><anchor>diffusers.DPMSolverSinglestepScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L1119</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
  The input sample.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_begin_index</name><anchor>diffusers.DPMSolverSinglestepScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L292</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
  The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the begin index for the scheduler. This function should be run from pipeline before the inference.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_timesteps</name><anchor>diffusers.DPMSolverSinglestepScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L302</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "mu", "val": ": typing.Optional[float] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
  The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
  The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **timesteps** (`List[int]`, *optional*) --
  Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
  timestep spacing strategy of equal spacing between timesteps schedule is used. If `timesteps` is
  passed, `num_inference_steps` must be `None`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets the discrete timesteps used for the diffusion chain (to be run before inference).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>singlestep_dpm_solver_second_order_update</name><anchor>diffusers.DPMSolverSinglestepScheduler.singlestep_dpm_solver_second_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L719</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **timestep** (`int`) --
  The current and latter discrete timestep in the diffusion chain.
- **prev_timestep** (`int`) --
  The previous discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the second-order singlestep DPMSolver that computes the solution at time `prev_timestep` from the
time `timestep_list[-2]`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>singlestep_dpm_solver_third_order_update</name><anchor>diffusers.DPMSolverSinglestepScheduler.singlestep_dpm_solver_third_order_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L830</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **timestep** (`int`) --
  The current and latter discrete timestep in the diffusion chain.
- **prev_timestep** (`int`) --
  The previous discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by diffusion process.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the third-order singlestep DPMSolver that computes the solution at time `prev_timestep` from the
time `timestep_list[-3]`.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>singlestep_dpm_solver_update</name><anchor>diffusers.DPMSolverSinglestepScheduler.singlestep_dpm_solver_update</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L952</source><parameters>[{"name": "model_output_list", "val": ": typing.List[torch.Tensor]"}, {"name": "*args", "val": ""}, {"name": "sample", "val": ": Tensor = None"}, {"name": "order", "val": ": int = None"}, {"name": "noise", "val": ": typing.Optional[torch.Tensor] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **model_output_list** (`List[torch.Tensor]`) --
  The direct outputs from learned diffusion model at current and latter timesteps.
- **timestep** (`int`) --
  The current and latter discrete timestep in the diffusion chain.
- **prev_timestep** (`int`) --
  The previous discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **order** (`int`) --
  The solver order at this step.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The sample tensor at the previous timestep.</retdesc></docstring>

One step for the singlestep DPMSolver.








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>step</name><anchor>diffusers.DPMSolverSinglestepScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_singlestep.py#L1048</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[int, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "generator", "val": " = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
  The direct output from the learned diffusion model.
- **timestep** (`int`) --
  The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
  A current instance of a sample created by the diffusion process.
- **return_dict** (`bool`) --
  Whether or not to return a [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`.</paramsdesc><paramgroups>0</paramgroups><rettype>[SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [SchedulerOutput](/docs/diffusers/main/en/api/schedulers/edm_multistep_dpm_solver#diffusers.schedulers.scheduling_utils.SchedulerOutput) is returned, otherwise a
tuple is returned where the first element is the sample tensor.</retdesc></docstring>

Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with
the singlestep DPMSolver.
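
Typical usage is inside a denoising loop: call `set_timesteps()`, then repeatedly call `step()` with the model prediction for each timestep. A minimal sketch (the random tensor stands in for a real noise-prediction model such as a UNet):

```py
import torch

from diffusers import DPMSolverSinglestepScheduler

scheduler = DPMSolverSinglestepScheduler(num_train_timesteps=1000, solver_order=2)
scheduler.set_timesteps(num_inference_steps=25)

sample = torch.randn(1, 4, 64, 64)  # illustrative latent shape
for t in scheduler.timesteps:
    # in practice this comes from the denoising model, e.g. unet(sample, t).sample
    model_output = torch.randn_like(sample)
    sample = scheduler.step(model_output, t, sample).prev_sample
```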








</div></div>

## SchedulerOutput[[diffusers.schedulers.scheduling_utils.SchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.schedulers.scheduling_utils.SchedulerOutput</name><anchor>diffusers.schedulers.scheduling_utils.SchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_utils.py#L62</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
  Computed sample `(x_{t-1})` of the previous timestep. `prev_sample` should be used as the next model input in the
  denoising loop.</paramsdesc><paramgroups>0</paramgroups></docstring>

Base class for the output of a scheduler's `step` function.




</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/schedulers/singlestep_dpm_solver.md" />

### IP-Adapter
https://huggingface.co/docs/diffusers/main/api/loaders/ip_adapter.md

# IP-Adapter

[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder.

> [!TIP]
> Learn how to load and use an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/ip_adapter) guide.
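
A rough sketch of typical usage with an SDXL pipeline (the `h94/IP-Adapter` repository layout and the image path are assumptions; adapt them to your checkpoint):

```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipeline.set_ip_adapter_scale(0.6)

ip_image = load_image("reference.png")  # any reference image (local path or URL)
image = pipeline(
    prompt="a polar bear drinking a milkshake in a diner",
    ip_adapter_image=ip_image,
).images[0]
```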

## IPAdapterMixin[[diffusers.loaders.IPAdapterMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.IPAdapterMixin</name><anchor>diffusers.loaders.IPAdapterMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L54</source><parameters>[]</parameters></docstring>
Mixin for handling IP Adapters.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_ip_adapter</name><anchor>diffusers.loaders.IPAdapterMixin.load_ip_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L57</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor]]"}, {"name": "subfolder", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "weight_name", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "image_encoder_folder", "val": ": typing.Optional[str] = 'image_encoder'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `List[str]` or `os.PathLike` or `List[os.PathLike]` or `dict` or `List[dict]`) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [ModelMixin.save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).
- **subfolder** (`str` or `List[str]`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally. If a
  list is passed, it should have the same length as `weight_name`.
- **weight_name** (`str` or `List[str]`) --
  The name of the weight file to load. If a list is passed, it should have the same length as
  `subfolder`.
- **image_encoder_folder** (`str`, *optional*, defaults to `image_encoder`) --
  The subfolder location of the image encoder within a larger model repository on the Hub or locally.
  Pass `None` to not load the image encoder. If the image encoder is located in a folder inside
  `subfolder`, you only need to pass the name of the folder that contains image encoder weights, e.g.
  `image_encoder_folder="image_encoder"`. If the image encoder is located in a folder other than
  `subfolder`, you should pass the path to the folder that contains image encoder weights, for example,
  `image_encoder_folder="different_subfolder/image_encoder"`.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.</paramsdesc><paramgroups>0</paramgroups></docstring>
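
Because `subfolder` and `weight_name` accept lists, several IP-Adapters can be loaded in a single call and weighted individually afterwards. A minimal sketch, assuming an SDXL pipeline with a compatible image encoder already attached and the `h94/IP-Adapter` repository layout:

```py
pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder=["sdxl_models", "sdxl_models"],
    weight_name=[
        "ip-adapter-plus_sdxl_vit-h.safetensors",       # style adapter
        "ip-adapter-plus-face_sdxl_vit-h.safetensors",  # face adapter
    ],
)
pipeline.set_ip_adapter_scale([0.7, 0.3])  # one scale (or scale config) per loaded adapter
```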




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_ip_adapter_scale</name><anchor>diffusers.loaders.IPAdapterMixin.set_ip_adapter_scale</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L252</source><parameters>[{"name": "scale", "val": ""}]</parameters></docstring>

Set IP-Adapter scales per transformer block. The input `scale` can be a single config or a list of configs for
granular control over each IP-Adapter's behavior. A config can be a float or a dictionary.

<ExampleCodeBlock anchor="diffusers.loaders.IPAdapterMixin.set_ip_adapter_scale.example">

Example:

```py
# To use original IP-Adapter
scale = 1.0
pipeline.set_ip_adapter_scale(scale)

# To use style block only
scale = {
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)

# To use style+layout blocks
scale = {
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)

# To use style and layout from 2 reference images
scales = [{"down": {"block_2": [0.0, 1.0]}}, {"up": {"block_0": [0.0, 1.0, 0.0]}}]
pipeline.set_ip_adapter_scale(scales)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload_ip_adapter</name><anchor>diffusers.loaders.IPAdapterMixin.unload_ip_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L305</source><parameters>[]</parameters></docstring>

Unloads the IP Adapter weights.

<ExampleCodeBlock anchor="diffusers.loaders.IPAdapterMixin.unload_ip_adapter.example">

Examples:

```python
>>> # Assuming `pipeline` is already loaded with the IP Adapter weights.
>>> pipeline.unload_ip_adapter()
>>> ...
```

</ExampleCodeBlock>


</div></div>

## SD3IPAdapterMixin[[diffusers.loaders.SD3IPAdapterMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.SD3IPAdapterMixin</name><anchor>diffusers.loaders.SD3IPAdapterMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L897</source><parameters>[]</parameters></docstring>
Mixin for handling Stable Diffusion 3 IP Adapters.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>is_ip_adapter_active</name><anchor>diffusers.loaders.SD3IPAdapterMixin.is_ip_adapter_active</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L900</source><parameters>[]</parameters><rettype>`bool`</rettype><retdesc>True when IP-Adapter is loaded and any layer has scale > 0.</retdesc></docstring>
Checks if IP-Adapter is loaded and scale > 0.

IP-Adapter scale controls the influence of the image prompt versus text prompt. When this value is set to 0,
the image context is irrelevant.
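
For example (a small sketch, assuming a Stable Diffusion 3 pipeline with an IP-Adapter already loaded):

```python
>>> pipeline.set_ip_adapter_scale(0.0)
>>> pipeline.is_ip_adapter_active  # False, the image prompt is ignored
>>> pipeline.set_ip_adapter_scale(0.6)
>>> pipeline.is_ip_adapter_active  # True
```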






</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_ip_adapter</name><anchor>diffusers.loaders.SD3IPAdapterMixin.load_ip_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L918</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "weight_name", "val": ": str = 'ip-adapter.safetensors'"}, {"name": "subfolder", "val": ": typing.Optional[str] = None"}, {"name": "image_encoder_folder", "val": ": typing.Optional[str] = 'image_encoder'"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  Can be either:
  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [ModelMixin.save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).
- **weight_name** (`str`, defaults to "ip-adapter.safetensors") --
  The name of the weight file to load. If a list is passed, it should have the same length as
  `subfolder`.
- **subfolder** (`str`, *optional*) --
  The subfolder location of a model file within a larger model repository on the Hub or locally. If a
  list is passed, it should have the same length as `weight_name`.
- **image_encoder_folder** (`str`, *optional*, defaults to `image_encoder`) --
  The subfolder location of the image encoder within a larger model repository on the Hub or locally.
  Pass `None` to not load the image encoder. If the image encoder is located in a folder inside
  `subfolder`, you only need to pass the name of the folder that contains image encoder weights, e.g.
  `image_encoder_folder="image_encoder"`. If the image encoder is located in a folder other than
  `subfolder`, you should pass the path to the folder that contains image encoder weights, for example,
  `image_encoder_folder="different_subfolder/image_encoder"`.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.</paramsdesc><paramgroups>0</paramgroups></docstring>
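
A minimal sketch for a Stable Diffusion 3.5 pipeline; the checkpoint id below is illustrative, and the weight file defaults to `ip-adapter.safetensors`:

```python
>>> import torch
>>> from diffusers import StableDiffusion3Pipeline

>>> pipeline = StableDiffusion3Pipeline.from_pretrained(
...     "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
... ).to("cuda")
>>> pipeline.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")
>>> pipeline.set_ip_adapter_scale(0.6)
```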




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_ip_adapter_scale</name><anchor>diffusers.loaders.SD3IPAdapterMixin.set_ip_adapter_scale</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L1066</source><parameters>[{"name": "scale", "val": ": float"}]</parameters><paramsdesc>- **scale** (float) --
  IP-Adapter scale to be set.</paramsdesc><paramgroups>0</paramgroups></docstring>

Set IP-Adapter scale, which controls image prompt conditioning. A value of 1.0 means the model is only
conditioned on the image prompt, and 0.0 means it is only conditioned on the text prompt. Lowering this value encourages
the model to produce more diverse images, but they may not be as aligned with the image prompt.

<ExampleCodeBlock anchor="diffusers.loaders.SD3IPAdapterMixin.set_ip_adapter_scale.example">

Example:

```python
>>> # Assuming `pipeline` is already loaded with the IP Adapter weights.
>>> pipeline.set_ip_adapter_scale(0.6)
>>> ...
```

</ExampleCodeBlock>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload_ip_adapter</name><anchor>diffusers.loaders.SD3IPAdapterMixin.unload_ip_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/ip_adapter.py#L1089</source><parameters>[]</parameters></docstring>

Unloads the IP Adapter weights.

<ExampleCodeBlock anchor="diffusers.loaders.SD3IPAdapterMixin.unload_ip_adapter.example">

Example:

```python
>>> # Assuming `pipeline` is already loaded with the IP Adapter weights.
>>> pipeline.unload_ip_adapter()
>>> ...
```

</ExampleCodeBlock>


</div></div>

## IPAdapterMaskProcessor[[diffusers.image_processor.IPAdapterMaskProcessor]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.image_processor.IPAdapterMaskProcessor</name><anchor>diffusers.image_processor.IPAdapterMaskProcessor</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1253</source><parameters>[{"name": "do_resize", "val": ": bool = True"}, {"name": "vae_scale_factor", "val": ": int = 8"}, {"name": "resample", "val": ": str = 'lanczos'"}, {"name": "do_normalize", "val": ": bool = False"}, {"name": "do_binarize", "val": ": bool = True"}, {"name": "do_convert_grayscale", "val": ": bool = True"}]</parameters><paramsdesc>- **do_resize** (`bool`, *optional*, defaults to `True`) --
  Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- **vae_scale_factor** (`int`, *optional*, defaults to `8`) --
  VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- **resample** (`str`, *optional*, defaults to `lanczos`) --
  Resampling filter to use when resizing the image.
- **do_normalize** (`bool`, *optional*, defaults to `False`) --
  Whether to normalize the image to [-1,1].
- **do_binarize** (`bool`, *optional*, defaults to `True`) --
  Whether to binarize the image to 0/1.
- **do_convert_grayscale** (`bool`, *optional*, defaults to `True`) --
  Whether to convert the images to grayscale format.</paramsdesc><paramgroups>0</paramgroups></docstring>

Image processor for IP Adapter image masks.
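
A short sketch of how masks are typically prepared; the mask file names are placeholders, and the preprocessed tensor is then passed to the pipeline call (for example via `cross_attention_kwargs={"ip_adapter_masks": ...}` as described in the IP-Adapter guide):

```py
from diffusers.image_processor import IPAdapterMaskProcessor
from diffusers.utils import load_image

# binary masks marking where each image prompt should apply (paths are illustrative)
mask1 = load_image("mask_left.png")
mask2 = load_image("mask_right.png")

processor = IPAdapterMaskProcessor()
masks = processor.preprocess([mask1, mask2], height=1024, width=1024)
# `masks` is a tensor with one entry per input mask, resized to the target resolution
```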





<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>downsample</name><anchor>diffusers.image_processor.IPAdapterMaskProcessor.downsample</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/image_processor.py#L1294</source><parameters>[{"name": "mask", "val": ": Tensor"}, {"name": "batch_size", "val": ": int"}, {"name": "num_queries", "val": ": int"}, {"name": "value_embed_dim", "val": ": int"}]</parameters><paramsdesc>- **mask** (`torch.Tensor`) --
  The input mask tensor generated with `IPAdapterMaskProcessor.preprocess()`.
- **batch_size** (`int`) --
  The batch size.
- **num_queries** (`int`) --
  The number of queries.
- **value_embed_dim** (`int`) --
  The dimensionality of the value embeddings.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>The downsampled mask tensor.</retdesc></docstring>

Downsamples the provided mask tensor to match the expected dimensions for scaled dot-product attention. If the
aspect ratio of the mask does not match the aspect ratio of the output image, a warning is issued.








</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/ip_adapter.md" />

### UNet
https://huggingface.co/docs/diffusers/main/api/loaders/unet.md

# UNet

Some training methods - like LoRA and Custom Diffusion - typically target the UNet's attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model's parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you're *only* loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) function instead.

The `UNet2DConditionLoadersMixin` class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters.

> [!TIP]
> To learn more about how to load LoRA weights, see the [LoRA](../../tutorials/using_peft_for_inference) guide.

## UNet2DConditionLoadersMixin[[diffusers.loaders.UNet2DConditionLoadersMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.UNet2DConditionLoadersMixin</name><anchor>diffusers.loaders.UNet2DConditionLoadersMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/unet.py#L60</source><parameters>[]</parameters></docstring>

Load LoRA layers into a `UNet2DConditionModel`.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_attn_procs</name><anchor>diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/unet.py#L68</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  Can be either:

  - A string, the model id (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a directory (for example `./my_model_directory`) containing the model weights saved
    with [ModelMixin.save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **adapter_name** (`str`, *optional*, defaults to None) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **weight_name** (`str`, *optional*, defaults to None) --
  Name of the serialized state dict file.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load pretrained attention processor layers into [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel). Attention processor layers have to be
defined in
[`attention_processor.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py)
and be a `torch.nn.Module` class. Currently supported: LoRA, Custom Diffusion. For LoRA, one must install
`peft`: `pip install -U peft`.



<ExampleCodeBlock anchor="diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.unet.load_attn_procs(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_attn_procs</name><anchor>diffusers.loaders.UNet2DConditionLoadersMixin.save_attn_procs</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/unet.py#L413</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save an attention processor to (will be created if it doesn't exist).
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training and you
  need to call this function on all processes. In this case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or with `pickle`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save attention processor layers to a directory so that it can be reloaded with the
[load_attn_procs()](/docs/diffusers/main/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs) method.



<ExampleCodeBlock anchor="diffusers.loaders.UNet2DConditionLoadersMixin.save_attn_procs.example">

Example:

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/unet.md" />

### LoRA
https://huggingface.co/docs/diffusers/main/api/loaders/lora.md

# LoRA

LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the denoiser, the text encoder, or both. The denoiser usually corresponds to a UNet ([UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel), for example) or a Transformer ([SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel), for example). There are several classes for loading LoRA weights:

- `StableDiffusionLoraLoaderMixin` provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
- `StableDiffusionXLLoraLoaderMixin` is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl) version of the `StableDiffusionLoraLoaderMixin` class for loading and saving LoRA weights. It can only be used with the SDXL model.
- `SD3LoraLoaderMixin` provides similar functions for [Stable Diffusion 3](https://huggingface.co/blog/sd3).
- `FluxLoraLoaderMixin` provides similar functions for [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux).
- `CogVideoXLoraLoaderMixin` provides similar functions for [CogVideoX](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox).
- `Mochi1LoraLoaderMixin` provides similar functions for [Mochi](https://huggingface.co/docs/diffusers/main/en/api/pipelines/mochi).
- `AuraFlowLoraLoaderMixin` provides similar functions for [AuraFlow](https://huggingface.co/fal/AuraFlow).
- `LTXVideoLoraLoaderMixin` provides similar functions for [LTX-Video](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
- `SanaLoraLoaderMixin` provides similar functions for [Sana](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana).
- `HunyuanVideoLoraLoaderMixin` provides similar functions for [HunyuanVideo](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video).
- `Lumina2LoraLoaderMixin` provides similar functions for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).
- `WanLoraLoaderMixin` provides similar functions for [Wan](https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan).
- `SkyReelsV2LoraLoaderMixin` provides similar functions for [SkyReels-V2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/skyreels_v2).
- `CogView4LoraLoaderMixin` provides similar functions for [CogView4](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogview4).
- `AmusedLoraLoaderMixin` is for the [AmusedPipeline](/docs/diffusers/main/en/api/pipelines/amused#diffusers.AmusedPipeline).
- `HiDreamImageLoraLoaderMixin` provides similar functions for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream).
- `QwenImageLoraLoaderMixin` provides similar functions for [Qwen Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/qwen).
- `LoraBaseMixin` provides a base class with several utility methods to fuse, unfuse, and unload LoRAs, and more.

> [!TIP]
> To learn more about how to load LoRA weights, see the [LoRA](../../tutorials/using_peft_for_inference) loading guide.

## LoraBaseMixin[[diffusers.loaders.lora_base.LoraBaseMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.lora_base.LoraBaseMixin</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L478</source><parameters>[]</parameters></docstring>
Utility class for handling LoRAs.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L838</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}]</parameters><paramsdesc>- **adapter_names** (`Union[List[str], str]`) --
  The names of the adapters to delete.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete an adapter's LoRA layers from the pipeline.



<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic"
)
pipeline.delete_adapters("cinematic")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.disable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L778</source><parameters>[]</parameters></docstring>

Disables the active LoRA layers of the pipeline.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.disable_lora.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.disable_lora()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L808</source><parameters>[]</parameters></docstring>

Enables the active LoRA layers of the pipeline.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.enable_lora.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.enable_lora()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_lora_hotswap</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L985</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **target_rank** (`int`) --
  The highest rank among all the adapters that will be loaded.
- **check_compiled** (`str`, *optional*, defaults to `"error"`) --
  How to handle a model that is already compiled. The available options are:
  - "error" (default): raise an error
  - "warn": issue a warning
  - "ignore": do nothing</paramsdesc><paramgroups>0</paramgroups></docstring>

Enables hotswapping of adapters, so that a new adapter can replace a loaded one without triggering recompilation
of a compiled model, even if the ranks of the adapters differ.
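
A rough sketch of the intended call order when combining hotswapping with `torch.compile` (LoRA file names are placeholders):

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# call before compiling; target_rank must cover the largest rank among the LoRAs to load
pipeline.enable_lora_hotswap(target_rank=64)
pipeline.load_lora_weights("lora_a.safetensors", adapter_name="default")
pipeline.unet = torch.compile(pipeline.unet)
image_a = pipeline("a prompt").images[0]

# swap in a second LoRA in-place, reusing the adapter name, without recompilation
pipeline.load_lora_weights("lora_b.safetensors", adapter_name="default", hotswap=True)
image_b = pipeline("a prompt").images[0]
```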




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L536</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** -- (`List[str]`): List of LoRA-injectable components to fuse the LoRAs into.
- **lora_scale** (`float`, defaults to 1.0) --
  Controls how much to influence the outputs with the LoRA parameters.
- **safe_fusing** (`bool`, defaults to `False`) --
  Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them.
- **adapter_names** (`List[str]`, *optional*) --
  Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.</paramsdesc><paramgroups>0</paramgroups></docstring>

Fuses the LoRA parameters into the original parameters of the corresponding blocks.

> [!WARNING]
> This is an experimental API.



<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora.example">

Example:

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.fuse_lora(lora_scale=0.7)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_active_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L876</source><parameters>[]</parameters></docstring>

Gets the list of the current active adapters.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters.example">

Example:

```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
).to("cuda")
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.get_active_adapters()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_list_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_list_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L909</source><parameters>[]</parameters></docstring>

Gets the current list of all available adapters in the pipeline.
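
For instance (a small sketch; the exact mapping depends on which components have adapters attached):

```py
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.get_list_adapters()
# e.g. {"unet": ["pixel"], "text_encoder": ["pixel"]}
```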


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L675</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}, {"name": "adapter_weights", "val": ": typing.Union[float, typing.Dict, typing.List[float], typing.List[typing.Dict], NoneType] = None"}]</parameters><paramsdesc>- **adapter_names** (`List[str]` or `str`) --
  The names of the adapters to use.
- **adapter_weights** (`Union[List[float], float]`, *optional*) --
  The adapter(s) weights to use with the UNet. If `None`, the weights are set to `1.0` for all the
  adapters.</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the currently active adapters for use in the pipeline.



<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_adapters.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5])
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_lora_device</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L931</source><parameters>[{"name": "adapter_names", "val": ": typing.List[str]"}, {"name": "device", "val": ": typing.Union[torch.device, str, int]"}]</parameters><paramsdesc>- **adapter_names** (`List[str]`) --
  List of adapters to send device to.
- **device** (`Union[torch.device, str, int]`) --
  Device to send the adapters to. Can be either a torch device, a str or an integer.</paramsdesc><paramgroups>0</paramgroups></docstring>

Moves the LoRAs listed in `adapter_names` to a target device. Useful for offloading the LoRA to the CPU in case
you want to load multiple adapters and free some GPU memory.

After offloading the LoRA adapters to CPU, as long as the rest of the model is still on GPU, the LoRA adapters
can no longer be used for inference, as that would cause a device mismatch. Remember to set the device back to
GPU before using those LoRA adapters for inference.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device.example">

```python
>>> pipe.load_lora_weights(path_1, adapter_name="adapter-1")
>>> pipe.load_lora_weights(path_2, adapter_name="adapter-2")
>>> pipe.set_adapters("adapter-1")
>>> image_1 = pipe(**kwargs)
>>> # switch to adapter-2, offload adapter-1
>>> pipeline.set_lora_device(adapter_names=["adapter-1"], device="cpu")
>>> pipeline.set_lora_device(adapter_names=["adapter-2"], device="cuda:0")
>>> pipe.set_adapters("adapter-2")
>>> image_2 = pipe(**kwargs)
>>> # switch back to adapter-1, offload adapter-2
>>> pipeline.set_lora_device(adapter_names=["adapter-2"], device="cpu")
>>> pipeline.set_lora_device(adapter_names=["adapter-1"], device="cuda:0")
>>> pipe.set_adapters("adapter-1")
>>> ...
```

</ExampleCodeBlock>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L622</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to unfuse LoRA from.
- **unfuse_unet** (`bool`, defaults to `True`) -- Whether to unfuse the UNet LoRA parameters.
- **unfuse_text_encoder** (`bool`, defaults to `True`) --
  Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the
  LoRA parameters then it won't have any effect.</paramsdesc><paramgroups>0</paramgroups></docstring>

Reverses the effect of
[`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora).

> [!WARNING]
> This is an experimental API.
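
A brief sketch of a fuse/unfuse round-trip, assuming a LoRA is already loaded into the pipeline:

```py
pipeline.fuse_lora(lora_scale=0.7)
image_fused = pipeline("pixel art of a corgi").images[0]

# restore the original, unfused weights
pipeline.unfuse_lora()
```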




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload_lora_weights</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L513</source><parameters>[]</parameters></docstring>

Unloads the LoRA parameters.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights.example">

Examples:

```python
>>> # Assuming `pipeline` is already loaded with the LoRA parameters.
>>> pipeline.unload_lora_weights()
>>> ...
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>write_lora_layers</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.write_lora_layers</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L1008</source><parameters>[{"name": "state_dict", "val": ": typing.Dict[str, torch.Tensor]"}, {"name": "save_directory", "val": ": str"}, {"name": "is_main_process", "val": ": bool"}, {"name": "weight_name", "val": ": str"}, {"name": "save_function", "val": ": typing.Callable"}, {"name": "safe_serialization", "val": ": bool"}, {"name": "lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>
Writes the state dict of the LoRA layers (optionally with metadata) to disk.

</div></div>

## StableDiffusionLoraLoaderMixin[[diffusers.loaders.StableDiffusionLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.StableDiffusionLoraLoaderMixin</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L128</source><parameters>[]</parameters></docstring>

Load LoRA layers into Stable Diffusion [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and
[`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L411</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
  A standard state dict containing the lora layer parameters. The keys should be prefixed with an
  additional `text_encoder` to distinguish them from the unet lora layers.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **text_encoder** (`CLIPTextModel`) --
  The text encoder model to load the LoRA layers into.
- **prefix** (`str`) --
  Expected prefix of the `text_encoder` in the `state_dict`.
- **lora_scale** (`float`) --
  How much to scale the output of the lora linear layer before it is added to the output of the regular
  layer.
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
  Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
  from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

This will load the LoRA layers specified in `state_dict` into `text_encoder`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_unet</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L350</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "unet", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
  A standard state dict containing the lora layer parameters. The keys can either be indexed directly
  into the unet or prefixed with an additional `unet` which can be used to distinguish them from the text
  encoder lora layers.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **unet** (`UNet2DConditionModel`) --
  The UNet model to load the LoRA layers into.
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
  Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
  from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

This will load the LoRA layers specified in `state_dict` into `unet`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L138</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  Defaults to `False`. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>
Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.unet` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details on how the state dict is
loaded into `self.unet`.

See [load_lora_into_text_encoder()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_text_encoder) for more details on how the state
dict is loaded into `self.text_encoder`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L239</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [ModelMixin.save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **weight_name** (`str`, *optional*, defaults to `None`) --
  Name of the serialized state dict file.
- **return_lora_metadata** (`bool`, *optional*, defaults to `False`) --
  When enabled, additionally return the LoRA adapter metadata, typically found in the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Return state dict for lora weights and the network alphas.

> [!WARNING]
> We support loading A1111 formatted LoRA checkpoints in a limited capacity.
>
> This function is experimental and might change in the future.
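
Because `lora_state_dict` is a classmethod, a checkpoint can be inspected without instantiating a pipeline. A rough sketch, assuming the placeholder path and weight file name below point at a LoRA checkpoint:

```python
from diffusers import StableDiffusionPipeline

# "path/to/lora" is a placeholder for a Hub repo id, a local directory,
# or an already-loaded torch state dict.
state_dict, network_alphas = StableDiffusionPipeline.lora_state_dict(
    "path/to/lora", weight_name="pytorch_lora_weights.safetensors"
)
print(f"{len(state_dict)} LoRA tensors, {len(network_alphas or {})} network alphas")
```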




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L469</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **unet_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `unet`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. It must be passed explicitly
  because the text encoder LoRA state dict comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process. Useful during distributed training, where the
  function has to be called on all processes; in that case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **unet_lora_adapter_metadata** --
  LoRA adapter metadata associated with the unet to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the UNet and text encoder.
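
A hedged sketch of serializing trained LoRA layers; the empty state dicts below are placeholders for whatever your training loop produces (for example, PEFT state dicts converted to the diffusers format):

```python
from diffusers import StableDiffusionPipeline

# Placeholders: in practice these are populated by your training loop.
unet_lora_state_dict = {}
text_encoder_lora_state_dict = {}

StableDiffusionPipeline.save_lora_weights(
    save_directory="./my-lora",
    unet_lora_layers=unet_lora_state_dict,
    text_encoder_lora_layers=text_encoder_lora_state_dict,
    weight_name="pytorch_lora_weights.safetensors",
    safe_serialization=True,
)
```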




</div></div>

## StableDiffusionXLLoraLoaderMixin[[diffusers.loaders.StableDiffusionXLLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.StableDiffusionXLLoraLoaderMixin</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L592</source><parameters>[]</parameters></docstring>

Load LoRA layers into Stable Diffusion XL [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel),
[`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and
[`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection).
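
For instance, a minimal sketch of loading and fusing an SDXL LoRA; the LoRA repository id, adapter name, and scale value are illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# "path/to/sdxl-lora" is a placeholder for an SDXL LoRA checkpoint.
pipeline.load_lora_weights("path/to/sdxl-lora", adapter_name="style")

# Optionally bake the LoRA into the UNet and both text encoders for faster inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline("a watercolor painting of a lighthouse").images[0]
```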



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L958</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['unet', 'text_encoder', 'text_encoder_2']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L851</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
  A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an
  additional `text_encoder` to distinguish them from the UNet LoRA layers.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **text_encoder** (`CLIPTextModel`) --
  The text encoder model to load the LoRA layers into.
- **prefix** (`str`) --
  Expected prefix of the `text_encoder` in the `state_dict`.
- **lora_scale** (`float`) --
  How much to scale the output of the LoRA linear layer before it is added to the output of the regular
  layer.
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
  Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
  from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

This will load the LoRA layers specified in `state_dict` into `text_encoder`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_unet</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_into_unet</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L789</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "unet", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
  A standard state dict containing the LoRA layer parameters. The keys can either index directly into
  the UNet or be prefixed with an additional `unet`, which distinguishes them from the text encoder
  LoRA layers.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **unet** (`UNet2DConditionModel`) --
  The UNet model to load the LoRA layers into.
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
  Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
  from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

This will load the LoRA layers specified in `state_dict` into `unet`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L603</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L677</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [ModelMixin.save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **weight_name** (`str`, *optional*, defaults to `None`) --
  Name of the serialized state dict file.
- **return_lora_metadata** (`bool`, *optional*, defaults to `False`) --
  When enabled, additionally return the LoRA adapter metadata, typically found in the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Return state dict for lora weights and the network alphas.

> [!WARNING]
> We support loading A1111 formatted LoRA checkpoints in a limited capacity.
>
> This function is experimental and might change in the future.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L910</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "unet_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_2_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "unet_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_2_lora_adapter_metadata", "val": " = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.StableDiffusionXLLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L977</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['unet', 'text_encoder', 'text_encoder_2']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## SD3LoraLoaderMixin[[diffusers.loaders.SD3LoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.SD3LoraLoaderMixin</name><anchor>diffusers.loaders.SD3LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L984</source><parameters>[]</parameters></docstring>

Load LoRA layers into [SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel),
[`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), and
[`CLIPTextModelWithProjection`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection).

Specific to [StableDiffusion3Pipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline).
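
A minimal, illustrative sketch; the LoRA path is a placeholder and the base checkpoint shown is gated on the Hub:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

# "path/to/sd3-lora" is a placeholder for a LoRA trained on the SD3 transformer.
pipeline.load_lora_weights("path/to/sd3-lora")
image = pipeline("a macro photo of a dew-covered leaf").images[0]
```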



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1256</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder', 'text_encoder_2']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1147</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
  A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an
  additional `text_encoder` to distinguish them from the transformer LoRA layers.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **text_encoder** (`CLIPTextModel`) --
  The text encoder model to load the LoRA layers into.
- **prefix** (`str`) --
  Expected prefix of the `text_encoder` in the `state_dict`.
- **lora_scale** (`float`) --
  How much to scale the output of the LoRA linear layer before it is added to the output of the regular
  layer.
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
  Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
  from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

This will load the LoRA layers specified in `state_dict` into `text_encoder`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1116</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1051</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": " = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L997</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1206</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_2_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_2_lora_adapter_metadata", "val": " = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.SD3LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1276</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder', 'text_encoder_2']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## FluxLoraLoaderMixin[[diffusers.loaders.FluxLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.FluxLoraLoaderMixin</name><anchor>diffusers.loaders.FluxLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1483</source><parameters>[]</parameters></docstring>

Load LoRA layers into [FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel) and
[`CLIPTextModel`](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel).

Specific to [FluxPipeline](/docs/diffusers/main/en/api/pipelines/flux#diffusers.FluxPipeline).
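
For example, a rough sketch of loading and later unloading a Flux LoRA; the LoRA repository id and adapter name are placeholders:

```python
import torch
from diffusers import FluxPipeline

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# "path/to/flux-lora" is a placeholder; adapter_name lets you reference it later.
pipeline.load_lora_weights("path/to/flux-lora", adapter_name="detail")
image = pipeline("a cozy cabin in a snowy forest", guidance_scale=3.5).images[0]

# Remove the adapter again, optionally restoring any overwritten base params.
pipeline.unload_lora_weights(reset_to_overwritten_params=True)
```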



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1955</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_text_encoder</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.load_lora_into_text_encoder</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1832</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "text_encoder", "val": ""}, {"name": "prefix", "val": " = None"}, {"name": "lora_scale", "val": " = 1.0"}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters><paramsdesc>- **state_dict** (`dict`) --
  A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an
  additional `text_encoder` to distinguish them from the transformer LoRA layers.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **text_encoder** (`CLIPTextModel`) --
  The text encoder model to load the LoRA layers into.
- **prefix** (`str`) --
  Expected prefix of the `text_encoder` in the `state_dict`.
- **lora_scale** (`float`) --
  How much to scale the output of the LoRA linear layer before it is added to the output of the regular
  layer.
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **metadata** (`dict`) --
  Optional LoRA adapter metadata. When supplied, the `LoraConfig` arguments of `peft` won't be derived
  from the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

This will load the LoRA layers specified in `state_dict` into `text_encoder`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1746</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "metadata", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1621</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).
- **adapter_name** (`str`, *optional*) --
  Adapter name to be used for referencing the loaded adapter model. If not specified, it will use
  `default_{i}` where i is the total number of adapters being loaded.
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*) --
  See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).
- **kwargs** (`dict`, *optional*) --
  See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict).</paramsdesc><paramgroups>0</paramgroups></docstring>

Load LoRA weights specified in `pretrained_model_name_or_path_or_dict` into `self.transformer` and
`self.text_encoder`.

All kwargs are forwarded to `self.lora_state_dict`.

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details on how the state dict is
loaded.

See [load_lora_into_transformer()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.FluxLoraLoaderMixin.load_lora_into_transformer) for more details on how the state
dict is loaded into `self.transformer`.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1496</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "return_alphas", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1891</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": " = None"}, {"name": "text_encoder_lora_adapter_metadata", "val": " = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **transformer_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `transformer`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. It must be passed explicitly
  because the text encoder LoRA state dict comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process. Useful during distributed training, where the
  function has to be called on all processes; in that case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **transformer_lora_adapter_metadata** --
  LoRA adapter metadata associated with the transformer to be serialized with the state dict.
- **text_encoder_lora_adapter_metadata** --
  LoRA adapter metadata associated with the text encoder to be serialized with the state dict.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the transformer and text encoder.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1987</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder']"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to unfuse LoRA from.</paramsdesc><paramgroups>0</paramgroups></docstring>

Reverses the effect of
[`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora).

> [!WARNING]
> This is an experimental API.




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload_lora_weights</name><anchor>diffusers.loaders.FluxLoraLoaderMixin.unload_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2004</source><parameters>[{"name": "reset_to_overwritten_params", "val": " = False"}]</parameters><paramsdesc>- **reset_to_overwritten_params** (`bool`, defaults to `False`) -- Whether to reset the LoRA-loaded modules
  to their original params. Refer to the [Flux
  documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) to learn more.</paramsdesc><paramgroups>0</paramgroups></docstring>

Unloads the LoRA parameters.



<ExampleCodeBlock anchor="diffusers.loaders.FluxLoraLoaderMixin.unload_lora_weights.example">

Examples:

```python
>>> # Assuming `pipeline` is already loaded with the LoRA parameters.
>>> pipeline.unload_lora_weights()
>>> ...
```

</ExampleCodeBlock>


</div></div>

## CogVideoXLoraLoaderMixin[[diffusers.loaders.CogVideoXLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.CogVideoXLoraLoaderMixin</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2436</source><parameters>[]</parameters></docstring>

Load LoRA layers into [CogVideoXTransformer3DModel](/docs/diffusers/main/en/api/models/cogvideox_transformer3d#diffusers.CogVideoXTransformer3DModel). Specific to [CogVideoXPipeline](/docs/diffusers/main/en/api/pipelines/cogvideox#diffusers.CogVideoXPipeline).
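
A minimal sketch, assuming a LoRA trained for the CogVideoX transformer is available at the placeholder path below:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# "path/to/cogvideox-lora" is a placeholder for a CogVideoX LoRA checkpoint.
pipeline.load_lora_weights("path/to/cogvideox-lora", adapter_name="motion")
video = pipeline("a paper boat drifting down a rainy street", num_frames=49).frames[0]
export_to_video(video, "output.mp4", fps=8)
```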



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2606</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2540</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2499</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2444</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2572</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.CogVideoXLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2625</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## Mochi1LoraLoaderMixin[[diffusers.loaders.Mochi1LoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.Mochi1LoraLoaderMixin</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2632</source><parameters>[]</parameters></docstring>

Load LoRA layers into [MochiTransformer3DModel](/docs/diffusers/main/en/api/models/mochi_transformer3d#diffusers.MochiTransformer3DModel). Specific to [MochiPipeline](/docs/diffusers/main/en/api/pipelines/mochi#diffusers.MochiPipeline).
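
A minimal sketch, assuming a LoRA trained for the Mochi transformer is available at the placeholder path below:

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipeline = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16
).to("cuda")

# "path/to/mochi-lora" is a placeholder for a Mochi LoRA checkpoint.
pipeline.load_lora_weights("path/to/mochi-lora")
video = pipeline("slow dolly shot of a jellyfish drifting in dark water").frames[0]
export_to_video(video, "mochi.mp4", fps=30)
```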



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2805</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2737</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2696</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2640</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2769</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.Mochi1LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2825</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## AuraFlowLoraLoaderMixin[[diffusers.loaders.AuraFlowLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.AuraFlowLoraLoaderMixin</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1283</source><parameters>[]</parameters></docstring>

Load LoRA layers into [AuraFlowTransformer2DModel](/docs/diffusers/main/en/api/models/aura_flow_transformer2d#diffusers.AuraFlowTransformer2DModel). Specific to [AuraFlowPipeline](/docs/diffusers/main/en/api/pipelines/aura_flow#diffusers.AuraFlowPipeline).
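
A minimal sketch, assuming a LoRA trained for the AuraFlow transformer is available at the placeholder path below:

```python
import torch
from diffusers import AuraFlowPipeline

pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow", torch_dtype=torch.float16
).to("cuda")

# "path/to/auraflow-lora" is a placeholder for an AuraFlow LoRA checkpoint.
pipeline.load_lora_weights("path/to/auraflow-lora")
image = pipeline("an isometric render of a tiny greenhouse").images[0]
```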



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1456</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1388</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1347</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1291</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1420</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.AuraFlowLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L1476</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer', 'text_encoder']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## LTXVideoLoraLoaderMixin[[diffusers.loaders.LTXVideoLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.LTXVideoLoraLoaderMixin</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2832</source><parameters>[]</parameters></docstring>

Load LoRA layers into [LTXVideoTransformer3DModel](/docs/diffusers/main/en/api/models/ltx_video_transformer3d#diffusers.LTXVideoTransformer3DModel). Specific to [LTXPipeline](/docs/diffusers/main/en/api/pipelines/ltx_video#diffusers.LTXPipeline).
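
A minimal sketch, assuming a LoRA trained for the LTX-Video transformer is available at the placeholder path below:

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipeline = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

# "path/to/ltx-lora" is a placeholder for an LTX-Video LoRA checkpoint.
pipeline.load_lora_weights("path/to/ltx-lora")
video = pipeline("waves rolling onto a black-sand beach at dusk").frames[0]
export_to_video(video, "ltx.mp4", fps=24)
```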



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3008</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2940</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2899</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2840</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2972</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.LTXVideoLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3028</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## SanaLoraLoaderMixin[[diffusers.loaders.SanaLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.SanaLoraLoaderMixin</name><anchor>diffusers.loaders.SanaLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3035</source><parameters>[]</parameters></docstring>

Load LoRA layers into [SanaTransformer2DModel](/docs/diffusers/main/en/api/models/sana_transformer2d#diffusers.SanaTransformer2DModel). Specific to [SanaPipeline](/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline).
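
As an illustrative sketch of typical usage (editorial, not part of the generated docstring), assuming placeholder checkpoint and LoRA identifiers:

```py
import torch
from diffusers import SanaPipeline

# Both identifiers below are placeholders; substitute a real Sana checkpoint and LoRA repo or path.
pipe = SanaPipeline.from_pretrained("<sana-base-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<sana-lora>", adapter_name="style")
pipe.set_adapters("style", adapter_weights=0.8)  # scale the adapter's influence
image = pipe("a watercolor lighthouse at dusk").images[0]
```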



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3208</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3140</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3099</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3043</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3172</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.SanaLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3228</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## HunyuanVideoLoraLoaderMixin[[diffusers.loaders.HunyuanVideoLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.HunyuanVideoLoraLoaderMixin</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3235</source><parameters>[]</parameters></docstring>

Load LoRA layers into [HunyuanVideoTransformer3DModel](/docs/diffusers/main/en/api/models/hunyuan_video_transformer_3d#diffusers.HunyuanVideoTransformer3DModel). Specific to [HunyuanVideoPipeline](/docs/diffusers/main/en/api/pipelines/hunyuan_video#diffusers.HunyuanVideoPipeline).
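
A hedged sketch of loading a LoRA and fusing it for inference (the checkpoint and LoRA identifiers are placeholders, not real repositories):

```py
import torch
from diffusers import HunyuanVideoPipeline

# Placeholder identifiers; point them at a real HunyuanVideo checkpoint and LoRA.
pipe = HunyuanVideoPipeline.from_pretrained("<hunyuan-video-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<hunyuan-video-lora>", adapter_name="motion")
pipe.fuse_lora(lora_scale=0.9)  # bake the LoRA into the transformer weights
# ... run inference ...
pipe.unfuse_lora()  # restore the original transformer weights
```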



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3411</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3343</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3302</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3243</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3375</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.HunyuanVideoLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3431</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## Lumina2LoraLoaderMixin[[diffusers.loaders.Lumina2LoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.Lumina2LoraLoaderMixin</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3438</source><parameters>[]</parameters></docstring>

Load LoRA layers into [Lumina2Transformer2DModel](/docs/diffusers/main/en/api/models/lumina2_transformer2d#diffusers.Lumina2Transformer2DModel). Specific to `Lumina2Text2ImgPipeline`.
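
For orientation, an editorial sketch of loading and later unloading a LoRA with this mixin; the pipeline class follows the reference above, and the checkpoint and LoRA identifiers are placeholders:

```py
import torch
from diffusers import Lumina2Text2ImgPipeline

# Placeholder identifiers; substitute a real Lumina Image 2.0 checkpoint and LoRA.
pipe = Lumina2Text2ImgPipeline.from_pretrained("<lumina2-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<lumina2-lora>", adapter_name="style")
# ... run inference ...
pipe.unload_lora_weights()  # return the pipeline to its base weights
```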



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3615</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3547</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3506</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3446</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3579</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.Lumina2LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3635</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## CogView4LoraLoaderMixin[[diffusers.loaders.CogView4LoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.CogView4LoraLoaderMixin</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4193</source><parameters>[]</parameters></docstring>

Load LoRA layers into `CogView4Transformer2DModel`. Specific to [CogView4Pipeline](/docs/diffusers/main/en/api/pipelines/cogview4#diffusers.CogView4Pipeline).
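
An editorial sketch of blending two LoRAs with this mixin (the checkpoint and LoRA identifiers are illustrative placeholders):

```py
import torch
from diffusers import CogView4Pipeline

# Placeholder identifiers; substitute a real CogView4 checkpoint and LoRAs.
pipe = CogView4Pipeline.from_pretrained("<cogview4-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<lora-anime>", adapter_name="anime")
pipe.load_lora_weights("<lora-detail>", adapter_name="detail")
pipe.set_adapters(["anime", "detail"], adapter_weights=[0.7, 0.3])  # blend both adapters
```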



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4366</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4298</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4257</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4201</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4330</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.CogView4LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4386</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## WanLoraLoaderMixin[[diffusers.loaders.WanLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.WanLoraLoaderMixin</name><anchor>diffusers.loaders.WanLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3642</source><parameters>[]</parameters></docstring>

Load LoRA layers into [WanTransformer3DModel](/docs/diffusers/main/en/api/models/wan_transformer_3d#diffusers.WanTransformer3DModel). Specific to [WanPipeline](/docs/diffusers/main/en/api/pipelines/wan#diffusers.WanPipeline) and `WanImageToVideoPipeline`.
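
A hedged sketch of fusing a LoRA into the transformer and then dropping the adapter bookkeeping (the checkpoint and LoRA identifiers are placeholders):

```py
import torch
from diffusers import WanPipeline

# Placeholder identifiers; substitute a real Wan checkpoint and LoRA.
pipe = WanPipeline.from_pretrained("<wan-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<wan-lora>", adapter_name="motion")
pipe.fuse_lora(components=["transformer"], lora_scale=1.0, adapter_names=["motion"])
pipe.unload_lora_weights()  # drop the LoRA bookkeeping once it has been fused in
```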



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.WanLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3889</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.WanLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3821</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.WanLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3756</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.WanLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3650</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.WanLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3853</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.WanLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3909</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## SkyReelsV2LoraLoaderMixin[[diffusers.loaders.SkyReelsV2LoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.SkyReelsV2LoraLoaderMixin</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3916</source><parameters>[]</parameters></docstring>

Load LoRA layers into [SkyReelsV2Transformer3DModel](/docs/diffusers/main/en/api/models/skyreels_v2_transformer_3d#diffusers.SkyReelsV2Transformer3DModel).
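
An editorial sketch using the generic `DiffusionPipeline` loader, since no specific pipeline is named above; all identifiers are placeholders:

```py
import torch
from diffusers import DiffusionPipeline

# Placeholder identifiers; substitute a real SkyReels-V2 checkpoint and LoRA.
pipe = DiffusionPipeline.from_pretrained("<skyreels-v2-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<skyreels-v2-lora>", adapter_name="camera-motion")
# ... run inference ...
pipe.delete_adapters("camera-motion")  # remove the adapter entirely when done
```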



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4166</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4098</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4033</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L3924</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4130</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.SkyReelsV2LoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4186</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## AmusedLoraLoaderMixin[[diffusers.loaders.AmusedLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.AmusedLoraLoaderMixin</name><anchor>diffusers.loaders.AmusedLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2284</source><parameters>[]</parameters></docstring>



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.AmusedLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2289</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "network_alphas", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "metadata", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.AmusedLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L2381</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "text_encoder_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, torch.nn.modules.module.Module] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **transformer_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `transformer`.
- **text_encoder_lora_layers** (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`) --
  State dict of the LoRA layers corresponding to the `text_encoder`. Must explicitly pass the text
  encoder LoRA state dict because it comes from 🤗 Transformers.
- **is_main_process** (`bool`, *optional*, defaults to `True`) --
  Whether the process calling this is the main process or not. Useful during distributed training when you
  need to call this function on all processes. In this case, set `is_main_process=True` only on the main
  process to avoid race conditions.
- **save_function** (`Callable`) --
  The function to use to save the state dictionary. Useful during distributed training when you need to
  replace `torch.save` with another method. Can be configured with the environment variable
  `DIFFUSERS_SAVE_MODE`.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the transformer and text encoder.
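
A heavily hedged sketch of this method with pre-built state dicts: the dummy tensors, key names, output directory, and the classmethod-style call on the mixin are all illustrative assumptions, not taken from the docstring above.

```py
import torch
from diffusers.loaders import AmusedLoraLoaderMixin

# Dummy stand-ins for LoRA state dicts produced during training.
dummy_transformer_lora = {"transformer_blocks.0.attn.to_q.lora_A.weight": torch.zeros(4, 16)}
dummy_text_encoder_lora = {"text_model.encoder.layers.0.q_proj.lora_A.weight": torch.zeros(4, 16)}

AmusedLoraLoaderMixin.save_lora_weights(
    save_directory="./amused-lora",  # illustrative output directory
    transformer_lora_layers=dummy_transformer_lora,
    text_encoder_lora_layers=dummy_text_encoder_lora,
    safe_serialization=True,  # write a .safetensors file
)
```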




</div></div>

## HiDreamImageLoraLoaderMixin[[diffusers.loaders.HiDreamImageLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.HiDreamImageLoraLoaderMixin</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4393</source><parameters>[]</parameters></docstring>

Load LoRA layers into [HiDreamImageTransformer2DModel](/docs/diffusers/main/en/api/models/hidream_image_transformer#diffusers.HiDreamImageTransformer2DModel). Specific to [HiDreamImagePipeline](/docs/diffusers/main/en/api/pipelines/hidream#diffusers.HiDreamImagePipeline).
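
An editorial sketch of loading a LoRA and toggling it on and off (the checkpoint and LoRA identifiers are placeholders):

```py
import torch
from diffusers import HiDreamImagePipeline

# Placeholder identifiers; substitute a real HiDream-I1 checkpoint and LoRA.
pipe = HiDreamImagePipeline.from_pretrained("<hidream-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<hidream-lora>", adapter_name="style")
pipe.disable_lora()  # temporarily bypass the adapter
pipe.enable_lora()   # and switch it back on
```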



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4569</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4501</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4460</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4401</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4533</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.HiDreamImageLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4589</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## QwenImageLoraLoaderMixin[[diffusers.loaders.QwenImageLoraLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.QwenImageLoraLoaderMixin</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4596</source><parameters>[]</parameters></docstring>

Load LoRA layers into [QwenImageTransformer2DModel](/docs/diffusers/main/en/api/models/qwenimage_transformer2d#diffusers.QwenImageTransformer2DModel). Specific to [QwenImagePipeline](/docs/diffusers/main/en/api/pipelines/qwenimage#diffusers.QwenImagePipeline).
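
A hedged sketch of typical usage (the checkpoint, LoRA identifier, and weight filename are placeholders):

```py
import torch
from diffusers import QwenImagePipeline

# Placeholder identifiers; substitute a real Qwen-Image checkpoint and LoRA.
pipe = QwenImagePipeline.from_pretrained("<qwen-image-checkpoint>", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("<qwen-image-lora>", weight_name="pytorch_lora_weights.safetensors", adapter_name="style")
pipe.get_active_adapters()  # e.g. ["style"]
```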



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4774</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `fuse_lora()` for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_into_transformer</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.load_lora_into_transformer</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4706</source><parameters>[{"name": "state_dict", "val": ""}, {"name": "transformer", "val": ""}, {"name": "adapter_name", "val": " = None"}, {"name": "_pipeline", "val": " = None"}, {"name": "low_cpu_mem_usage", "val": " = False"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "metadata", "val": " = None"}]</parameters></docstring>

See [load_lora_into_unet()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_into_unet) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_weights</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.load_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4665</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "adapter_name", "val": ": typing.Optional[str] = None"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>lora_state_dict</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.lora_state_dict</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4604</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ": typing.Union[str, typing.Dict[str, torch.Tensor]]"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See [lora_state_dict()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.lora_state_dict) for more details.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_weights</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.save_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4738</source><parameters>[{"name": "save_directory", "val": ": typing.Union[str, os.PathLike]"}, {"name": "transformer_lora_layers", "val": ": typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None"}, {"name": "is_main_process", "val": ": bool = True"}, {"name": "weight_name", "val": ": str = None"}, {"name": "save_function", "val": ": typing.Callable = None"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "transformer_lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>

See [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.save_lora_weights) for more information.


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.QwenImageLoraLoaderMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_pipeline.py#L4794</source><parameters>[{"name": "components", "val": ": typing.List[str] = ['transformer']"}, {"name": "**kwargs", "val": ""}]</parameters></docstring>

See `unfuse_lora()` for more details.


</div></div>

## LoraBaseMixin[[diffusers.loaders.lora_base.LoraBaseMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.lora_base.LoraBaseMixin</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L478</source><parameters>[]</parameters></docstring>
Utility class for handling LoRAs.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L838</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}]</parameters><paramsdesc>- **adapter_names** (`Union[List[str], str]`) --
  The names of the adapters to delete.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete an adapter's LoRA layers from the pipeline.



<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.delete_adapters.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic"
)
pipeline.delete_adapters("cinematic")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.disable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L778</source><parameters>[]</parameters></docstring>

Disables the active LoRA layers of the pipeline.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.disable_lora.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.disable_lora()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L808</source><parameters>[]</parameters></docstring>

Enables the active LoRA layers of the pipeline.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.enable_lora.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.enable_lora()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_lora_hotswap</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L985</source><parameters>[{"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **target_rank** (`int`) --
  The highest rank among all the adapters that will be loaded.
- **check_compiled** (`str`, *optional*, defaults to `"error"`) --
  How to handle a model that is already compiled. One of the following:
  - "error" (default): raise an error
  - "warn": issue a warning
  - "ignore": do nothing</paramsdesc><paramgroups>0</paramgroups></docstring>

Enable hotswapping of LoRA adapters so that swapping adapters does not trigger recompilation of the model, even
when the ranks of the loaded adapters differ.
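
A hedged sketch of the intended workflow, assuming a transformer-backed pipeline; the checkpoint and LoRA identifiers, the rank value, and the compiled component are illustrative assumptions:

```py
import torch
from diffusers import DiffusionPipeline

# Placeholder identifiers; hotswapping replaces the weights of an existing adapter name in
# place, so a compiled model does not need to be recompiled.
pipe = DiffusionPipeline.from_pretrained("<base-checkpoint>", torch_dtype=torch.float16).to("cuda")
pipe.enable_lora_hotswap(target_rank=64)  # reserve capacity for the largest expected LoRA rank
pipe.load_lora_weights("<lora-1>", adapter_name="default")
pipe.transformer = torch.compile(pipe.transformer)  # hypothetical transformer-backed pipeline
# ... generate with the first LoRA ...
pipe.load_lora_weights("<lora-2>", adapter_name="default", hotswap=True)  # swap weights in place
```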




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>fuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L536</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "lora_scale", "val": ": float = 1.0"}, {"name": "safe_fusing", "val": ": bool = False"}, {"name": "adapter_names", "val": ": typing.Optional[typing.List[str]] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to fuse the LoRAs into.
- **lora_scale** (`float`, defaults to 1.0) --
  Controls how much to influence the outputs with the LoRA parameters.
- **safe_fusing** (`bool`, defaults to `False`) --
  Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them.
- **adapter_names** (`List[str]`, *optional*) --
  Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.</paramsdesc><paramgroups>0</paramgroups></docstring>

Fuses the LoRA parameters into the original parameters of the corresponding blocks.

> [!WARNING]
> This is an experimental API.



<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora.example">

Example:

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.fuse_lora(lora_scale=0.7)
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_active_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L876</source><parameters>[]</parameters></docstring>

Gets the list of the current active adapters.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters.example">

Example:

```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
).to("cuda")
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.get_active_adapters()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>get_list_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.get_list_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L909</source><parameters>[]</parameters></docstring>

Gets the current list of all available adapters in the pipeline.
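
An illustrative sketch (editorial, with placeholder identifiers) of inspecting which adapters are attached to each component:

```py
from diffusers import DiffusionPipeline

# Placeholder identifiers for a base checkpoint and two LoRAs.
pipe = DiffusionPipeline.from_pretrained("<base-checkpoint>").to("cuda")
pipe.load_lora_weights("<lora-a>", adapter_name="style-a")
pipe.load_lora_weights("<lora-b>", adapter_name="style-b")
pipe.get_list_adapters()
# e.g. {"unet": ["style-a", "style-b"], "text_encoder": ["style-a", "style-b"]}
```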


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapters</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L675</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}, {"name": "adapter_weights", "val": ": typing.Union[float, typing.Dict, typing.List[float], typing.List[typing.Dict], NoneType] = None"}]</parameters><paramsdesc>- **adapter_names** (`List[str]` or `str`) --
  The names of the adapters to use.
- **adapter_weights** (`Union[List[float], float]`, *optional*) --
  The weight(s) to use for the adapter(s). If `None`, the weights are set to `1.0` for all the
  adapters.</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the currently active adapters for use in the pipeline.



<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_adapters.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5])
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_lora_device</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L931</source><parameters>[{"name": "adapter_names", "val": ": typing.List[str]"}, {"name": "device", "val": ": typing.Union[torch.device, str, int]"}]</parameters><paramsdesc>- **adapter_names** (`List[str]`) --
  List of adapters to send to the device.
- **device** (`Union[torch.device, str, int]`) --
  Device to send the adapters to. Can be either a torch device, a str or an integer.</paramsdesc><paramgroups>0</paramgroups></docstring>

Moves the LoRAs listed in `adapter_names` to a target device. Useful for offloading the LoRA to the CPU in case
you want to load multiple adapters and free some GPU memory.

After offloading the LoRA adapters to the CPU, they can no longer be used for inference while the rest of the
model remains on the GPU, since that would cause a device mismatch. Move the adapters back to the GPU before
using them for inference.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.set_lora_device.example">

```python
>>> pipe.load_lora_weights(path_1, adapter_name="adapter-1")
>>> pipe.load_lora_weights(path_2, adapter_name="adapter-2")
>>> pipe.set_adapters("adapter-1")
>>> image_1 = pipe(**kwargs)
>>> # switch to adapter-2, offload adapter-1
>>> pipe.set_lora_device(adapter_names=["adapter-1"], device="cpu")
>>> pipe.set_lora_device(adapter_names=["adapter-2"], device="cuda:0")
>>> pipe.set_adapters("adapter-2")
>>> image_2 = pipe(**kwargs)
>>> # switch back to adapter-1, offload adapter-2
>>> pipe.set_lora_device(adapter_names=["adapter-2"], device="cpu")
>>> pipe.set_lora_device(adapter_names=["adapter-1"], device="cuda:0")
>>> pipe.set_adapters("adapter-1")
>>> ...
```

</ExampleCodeBlock>




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unfuse_lora</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unfuse_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L622</source><parameters>[{"name": "components", "val": ": typing.List[str] = []"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **components** (`List[str]`) -- List of LoRA-injectable components to unfuse LoRA from.
- **unfuse_unet** (`bool`, defaults to `True`) -- Whether to unfuse the UNet LoRA parameters.
- **unfuse_text_encoder** (`bool`, defaults to `True`) --
  Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the
  LoRA parameters then it won't have any effect.</paramsdesc><paramgroups>0</paramgroups></docstring>

Reverses the effect of
[`pipe.fuse_lora()`](https://huggingface.co/docs/diffusers/main/en/api/loaders#diffusers.loaders.LoraBaseMixin.fuse_lora).

> [!WARNING]
> This is an experimental API.
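
A minimal sketch of the fuse/unfuse round trip, reusing the pixel-art LoRA from the `set_adapters` example above (the prompt is only illustrative):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

# merge the LoRA weights into the base model weights for faster inference
pipeline.fuse_lora()
image = pipeline("pixel art of a corgi astronaut").images[0]

# undo the merge and restore the original base weights
pipeline.unfuse_lora()
```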




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload_lora_weights</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L513</source><parameters>[]</parameters></docstring>

Unloads the LoRA parameters.

<ExampleCodeBlock anchor="diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights.example">

Examples:

```python
>>> # Assuming `pipeline` is already loaded with the LoRA parameters.
>>> pipeline.unload_lora_weights()
>>> ...
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>write_lora_layers</name><anchor>diffusers.loaders.lora_base.LoraBaseMixin.write_lora_layers</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/lora_base.py#L1008</source><parameters>[{"name": "state_dict", "val": ": typing.Dict[str, torch.Tensor]"}, {"name": "save_directory", "val": ": str"}, {"name": "is_main_process", "val": ": bool"}, {"name": "weight_name", "val": ": str"}, {"name": "save_function", "val": ": typing.Callable"}, {"name": "safe_serialization", "val": ": bool"}, {"name": "lora_adapter_metadata", "val": ": typing.Optional[dict] = None"}]</parameters></docstring>
Writes the state dict of the LoRA layers (optionally with metadata) to disk.

</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/lora.md" />

### Textual Inversion
https://huggingface.co/docs/diffusers/main/api/loaders/textual_inversion.md

# Textual Inversion

Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder.

`TextualInversionLoaderMixin` provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings.

> [!TIP]
> To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/textual_inversion_inference) loading guide.

## TextualInversionLoaderMixin[[diffusers.loaders.TextualInversionLoaderMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.TextualInversionLoaderMixin</name><anchor>diffusers.loaders.TextualInversionLoaderMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L110</source><parameters>[]</parameters></docstring>

Load Textual Inversion tokens and embeddings to the tokenizer and text encoder.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_textual_inversion</name><anchor>diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L263</source><parameters>[{"name": "pretrained_model_name_or_path", "val": ": typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]]"}, {"name": "token", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path** (`str` or `os.PathLike` or `List[str or os.PathLike]` or `Dict` or `List[Dict]`) --
  Can be either one of the following or a list of them:

  - A string, the *model id* (for example `sd-concepts-library/low-poly-hd-logos-icons`) of a
    pretrained model hosted on the Hub.
  - A path to a *directory* (for example `./my_text_inversion_directory/`) containing the textual
    inversion weights.
  - A path to a *file* (for example `./my_text_inversions.pt`) containing textual inversion weights.
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **token** (`str` or `List[str]`, *optional*) --
  Override the token to use for the textual inversion weights. If `pretrained_model_name_or_path` is a
  list, then `token` must also be a list of equal length.
- **text_encoder** ([CLIPTextModel](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTextModel), *optional*) --
  Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
  If not specified, the function uses `self.text_encoder`.
- **tokenizer** ([CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer), *optional*) --
  A `CLIPTokenizer` to tokenize text. If not specified, the function uses `self.tokenizer`.
- **weight_name** (`str`, *optional*) --
  Name of a custom weight file. This should be used when:

  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as `text_inv.bin`.
  - The saved textual inversion file is in the Automatic1111 format.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **hf_token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **mirror** (`str`, *optional*) --
  Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
  guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
  information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Load Textual Inversion embeddings into the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) (both 🤗 Diffusers and
Automatic1111 formats are supported).



Example:

<ExampleCodeBlock anchor="diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion.example">

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

</ExampleCodeBlock>

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from [civitAI](https://civitai.com/models/3036?modelVersionId=9857)) and then load the vector locally:

<ExampleCodeBlock anchor="diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion.example-2">

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

</ExampleCodeBlock>



</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>maybe_convert_prompt</name><anchor>diffusers.loaders.TextualInversionLoaderMixin.maybe_convert_prompt</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L115</source><parameters>[{"name": "prompt", "val": ": typing.Union[str, typing.List[str]]"}, {"name": "tokenizer", "val": ": PreTrainedTokenizer"}]</parameters><paramsdesc>- **prompt** (`str` or list of `str`) --
  The prompt or prompts to guide the image generation.
- **tokenizer** (`PreTrainedTokenizer`) --
  The tokenizer responsible for encoding the prompt into input tokens.</paramsdesc><paramgroups>0</paramgroups><rettype>`str` or list of `str`</rettype><retdesc>The converted prompt</retdesc></docstring>

Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding by
replacing it with multiple special tokens, each corresponding to one of the vectors. If the prompt has no
textual inversion token, or if the textual inversion token is a single vector, the input prompt is returned
unchanged.
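
As an illustration, a short sketch of calling `maybe_convert_prompt` directly; the embedding file and the `<my-concept>` token are hypothetical placeholders for a multi-vector Textual Inversion embedding:

```py
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# hypothetical multi-vector embedding loaded under the placeholder token <my-concept>
pipe.load_textual_inversion("./my_concept.bin", token="<my-concept>")

prompt = "A <my-concept> sitting on a bench"
# expands the multi-vector token into one special token per embedding vector
print(pipe.maybe_convert_prompt(prompt, pipe.tokenizer))
```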








</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>unload_textual_inversion</name><anchor>diffusers.loaders.TextualInversionLoaderMixin.unload_textual_inversion</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/textual_inversion.py#L459</source><parameters>[{"name": "tokens", "val": ": typing.Union[str, typing.List[str], NoneType] = None"}, {"name": "tokenizer", "val": ": typing.Optional[ForwardRef('PreTrainedTokenizer')] = None"}, {"name": "text_encoder", "val": ": typing.Optional[ForwardRef('PreTrainedModel')] = None"}]</parameters></docstring>

Unload Textual Inversion embeddings from the text encoder of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline).

<ExampleCodeBlock anchor="diffusers.loaders.TextualInversionLoaderMixin.unload_textual_inversion.example">

Example:
```py
from diffusers import AutoPipelineForText2Image
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

# Example 1
pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork")
pipeline.load_textual_inversion("sd-concepts-library/moeb-style")

# Remove all token embeddings
pipeline.unload_textual_inversion()

# Example 2
pipeline.load_textual_inversion("sd-concepts-library/moeb-style")
pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork")

# Remove just one token
pipeline.unload_textual_inversion("<moe-bius>")

# Example 3: unload from SDXL
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
embedding_path = hf_hub_download(
    repo_id="linoyts/web_y2k", filename="web_y2k_emb.safetensors", repo_type="model"
)

# load the embedding state dict from the downloaded file
state_dict = load_file(embedding_path)

# load embeddings into text_encoder 1 (CLIP ViT-L/14)
pipeline.load_textual_inversion(
    state_dict["clip_l"],
    token=["<s0>", "<s1>"],
    text_encoder=pipeline.text_encoder,
    tokenizer=pipeline.tokenizer,
)
# load embeddings into text_encoder 2 (CLIP ViT-G/14)
pipeline.load_textual_inversion(
    state_dict["clip_g"],
    token=["<s0>", "<s1>"],
    text_encoder=pipeline.text_encoder_2,
    tokenizer=pipeline.tokenizer_2,
)

# Unload explicitly from both text encoders and tokenizers
pipeline.unload_textual_inversion(
    tokens=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer
)
pipeline.unload_textual_inversion(
    tokens=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2
)
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/textual_inversion.md" />

### PEFT
https://huggingface.co/docs/diffusers/main/api/loaders/peft.md

# PEFT

Diffusers supports loading adapters such as [LoRA](../../tutorials/using_peft_for_inference) with the [PEFT](https://huggingface.co/docs/peft/index) library through the [PeftAdapterMixin](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin) class. This allows modeling classes in Diffusers, such as [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and [SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel), to operate with an adapter.

> [!TIP]
> Refer to the [Inference with PEFT](../../tutorials/using_peft_for_inference.md) tutorial for an overview of how to use PEFT in Diffusers for inference.

## PeftAdapterMixin[[diffusers.loaders.PeftAdapterMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.PeftAdapterMixin</name><anchor>diffusers.loaders.PeftAdapterMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L68</source><parameters>[]</parameters></docstring>

A class containing all functions for loading and using adapter weights that are supported in the PEFT
library. For more details about adapters and injecting them in a base model, check out the PEFT
[documentation](https://huggingface.co/docs/peft/index).

Install the latest version of PEFT, and use this mixin to:

- Attach new adapters to the model.
- Attach multiple adapters and iteratively activate/deactivate them.
- Activate/deactivate all adapters attached to the model.
- Get a list of the active adapters.
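
A minimal sketch of that workflow on a model class, using a hypothetical rank-4 LoRA configuration from PEFT:

```py
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)

# hypothetical rank-4 LoRA targeting the attention projection layers
config = LoraConfig(r=4, lora_alpha=4, target_modules=["to_q", "to_k", "to_v", "to_out.0"])
unet.add_adapter(config, adapter_name="my_adapter")

print(unet.active_adapters())  # ["my_adapter"]

unet.disable_adapters()  # run inference with the base model only
unet.enable_adapters()   # re-enable the attached adapters
```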



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>active_adapters</name><anchor>diffusers.loaders.PeftAdapterMixin.active_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L636</source><parameters>[]</parameters></docstring>

Gets the current list of active adapters of the model.

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
[documentation](https://huggingface.co/docs/peft).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>add_adapter</name><anchor>diffusers.loaders.PeftAdapterMixin.add_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L504</source><parameters>[{"name": "adapter_config", "val": ""}, {"name": "adapter_name", "val": ": str = 'default'"}]</parameters><paramsdesc>- **adapter_config** (`[~peft.PeftConfig]`) --
  The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt
  methods.
- **adapter_name** (`str`, *optional*, defaults to `"default"`) --
  The name of the adapter to add. If no name is passed, a default name is assigned to the adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>

Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned
to the adapter to follow the convention of the PEFT library.

If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT
[documentation](https://huggingface.co/docs/peft).




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>delete_adapters</name><anchor>diffusers.loaders.PeftAdapterMixin.delete_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L759</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}]</parameters><paramsdesc>- **adapter_names** (`Union[List[str], str]`) --
  The names (single string or list of strings) of the adapter to delete.</paramsdesc><paramgroups>0</paramgroups></docstring>

Delete an adapter's LoRA layers from the underlying model.



<ExampleCodeBlock anchor="diffusers.loaders.PeftAdapterMixin.delete_adapters.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic"
)
pipeline.unet.delete_adapters("cinematic")
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_adapters</name><anchor>diffusers.loaders.PeftAdapterMixin.disable_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L591</source><parameters>[]</parameters></docstring>

Disable all adapters attached to the model and fall back to inference with the base model only.

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
[documentation](https://huggingface.co/docs/peft).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>disable_lora</name><anchor>diffusers.loaders.PeftAdapterMixin.disable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L713</source><parameters>[]</parameters></docstring>

Disables the active LoRA layers of the underlying model.

<ExampleCodeBlock anchor="diffusers.loaders.PeftAdapterMixin.disable_lora.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.unet.disable_lora()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_adapters</name><anchor>diffusers.loaders.PeftAdapterMixin.enable_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L613</source><parameters>[]</parameters></docstring>

Enable adapters that are attached to the model. The model uses `self.active_adapters()` to retrieve the list of
adapters to enable.

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
[documentation](https://huggingface.co/docs/peft).


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_lora</name><anchor>diffusers.loaders.PeftAdapterMixin.enable_lora</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L736</source><parameters>[]</parameters></docstring>

Enables the active LoRA layers of the underlying model.

<ExampleCodeBlock anchor="diffusers.loaders.PeftAdapterMixin.enable_lora.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.unet.enable_lora()
```

</ExampleCodeBlock>


</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>enable_lora_hotswap</name><anchor>diffusers.loaders.PeftAdapterMixin.enable_lora_hotswap</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L795</source><parameters>[{"name": "target_rank", "val": ": int = 128"}, {"name": "check_compiled", "val": ": typing.Literal['error', 'warn', 'ignore'] = 'error'"}]</parameters><paramsdesc>- **target_rank** (`int`, *optional*, defaults to `128`) --
  The highest rank among all the adapters that will be loaded.

- **check_compiled** (`str`, *optional*, defaults to `"error"`) --
  How to handle the case when the model is already compiled, which should generally be avoided. The
  options are:
  - "error" (default): raise an error
  - "warn": issue a warning
  - "ignore": do nothing</paramsdesc><paramgroups>0</paramgroups></docstring>
Enables hotswapping of LoRA adapters.

Calling this method is only required when you want to hotswap adapters and the model is compiled, or when the
ranks of the loaded adapters differ.
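
A hedged sketch of the intended call order, assuming `load_lora_weights` forwards the `hotswap` argument described for `load_lora_adapter` below (file paths and the target rank are placeholders):

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# call before compiling and before loading the first LoRA
pipeline.enable_lora_hotswap(target_rank=64)
pipeline.load_lora_weights("path/to/first_lora.safetensors", adapter_name="default_0")
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead")

# swap in a second LoRA (possibly with a different rank) without triggering recompilation
pipeline.load_lora_weights("path/to/second_lora.safetensors", adapter_name="default_0", hotswap=True)
```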




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>load_lora_adapter</name><anchor>diffusers.loaders.PeftAdapterMixin.load_lora_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L91</source><parameters>[{"name": "pretrained_model_name_or_path_or_dict", "val": ""}, {"name": "prefix", "val": " = 'transformer'"}, {"name": "hotswap", "val": ": bool = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_name_or_path_or_dict** (`str` or `os.PathLike` or `dict`) --
  Can be either:

  - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
    the Hub.
  - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
    with [ModelMixin.save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
  - A [torch state
    dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict).

- **prefix** (`str`, *optional*) -- Prefix to filter the state dict.

- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **network_alphas** (`Dict[str, float]`) --
  The value of the network alpha used for stable learning and preventing underflow. This value has the
  same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
  link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
- **low_cpu_mem_usage** (`bool`, *optional*) --
  Speed up model loading by only loading the pretrained LoRA weights and not initializing the random
  weights.
- **hotswap** (`bool`, *optional*, defaults to `False`) --
  Whether to substitute an existing (LoRA) adapter with the newly loaded adapter
  in-place. This means that, instead of loading an additional adapter, this will take the existing
  adapter weights and replace them with the weights of the new adapter. This can be faster and more
  memory efficient. However, the main advantage of hotswapping is that when the model is compiled with
  torch.compile, loading the new adapter does not require recompilation of the model. When using
  hotswapping, the passed `adapter_name` should be the name of an already loaded adapter.

  If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need
  to call an additional method before loading the adapter:

```py
pipeline = ...  # load diffusers pipeline
max_rank = ...  # the highest rank among all LoRAs that you want to load
# call *before* compiling and loading the LoRA adapter
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(file_name)
# optionally compile the model now
```

  Note that hotswapping adapters of the text encoder is not yet supported. There are some further
  limitations to this technique, which are documented here:
  https://huggingface.co/docs/peft/main/en/package_reference/hotswap
- **metadata** --
  LoRA adapter metadata. When supplied, the metadata inferred through the state dict isn't used to
  initialize `LoraConfig`.</paramsdesc><paramgroups>0</paramgroups></docstring>

Loads a LoRA adapter into the underlying model.
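
An illustrative sketch of loading a LoRA directly into a model rather than through a pipeline; the repository id is a placeholder, and `prefix` selects the state dict keys that belong to this model:

```py
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
# hypothetical Hub repository containing LoRA weights for this UNet
unet.load_lora_adapter("someuser/some-sd15-lora", prefix="unet", adapter_name="my_lora")
```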




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>save_lora_adapter</name><anchor>diffusers.loaders.PeftAdapterMixin.save_lora_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L380</source><parameters>[{"name": "save_directory", "val": ""}, {"name": "adapter_name", "val": ": str = 'default'"}, {"name": "upcast_before_saving", "val": ": bool = False"}, {"name": "safe_serialization", "val": ": bool = True"}, {"name": "weight_name", "val": ": typing.Optional[str] = None"}]</parameters><paramsdesc>- **save_directory** (`str` or `os.PathLike`) --
  Directory to save LoRA parameters to. Will be created if it doesn't exist.
- **adapter_name** (`str`, defaults to `"default"`) -- The name of the adapter to serialize. Useful when the
  underlying model has multiple adapters loaded.
- **upcast_before_saving** (`bool`, defaults to `False`) --
  Whether to cast the underlying model to `torch.float32` before serialization.
- **safe_serialization** (`bool`, *optional*, defaults to `True`) --
  Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
- **weight_name** (`str`, *optional*, defaults to `None`) -- Name of the file to serialize the state dict with.</paramsdesc><paramgroups>0</paramgroups></docstring>

Save the LoRA parameters corresponding to the underlying model.
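
A minimal sketch, assuming a LoRA adapter named `"my_adapter"` is already attached to the model (for example via `add_adapter()` or `load_lora_adapter()`); the output directory is a placeholder:

```py
unet.save_lora_adapter(
    "./my_lora_checkpoint",      # directory is created if it doesn't exist
    adapter_name="my_adapter",
    safe_serialization=True,     # write a .safetensors file instead of pickle
)
```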




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapter</name><anchor>diffusers.loaders.PeftAdapterMixin.set_adapter</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L542</source><parameters>[{"name": "adapter_name", "val": ": typing.Union[str, typing.List[str]]"}]</parameters><paramsdesc>- **adapter_name** (`Union[str, List[str]]`) --
  The list of adapters to set or the adapter name in the case of a single adapter.</paramsdesc><paramgroups>0</paramgroups></docstring>

Sets a specific adapter by forcing the model to only use that adapter while disabling the other adapters.

If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
[documentation](https://huggingface.co/docs/peft).
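
For example, assuming the `"cinematic"` and `"pixel"` adapters from the `set_adapters` example below are already loaded into the UNet:

```py
# keep only "pixel" active; "cinematic" stays loaded but is disabled
pipeline.unet.set_adapter("pixel")
print(pipeline.unet.active_adapters())  # ["pixel"]

# activate both adapters again
pipeline.unet.set_adapter(["cinematic", "pixel"])
```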




</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>set_adapters</name><anchor>diffusers.loaders.PeftAdapterMixin.set_adapters</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/peft.py#L448</source><parameters>[{"name": "adapter_names", "val": ": typing.Union[typing.List[str], str]"}, {"name": "weights", "val": ": typing.Union[float, typing.Dict, typing.List[float], typing.List[typing.Dict], typing.List[NoneType], NoneType] = None"}]</parameters><paramsdesc>- **adapter_names** (`List[str]` or `str`) --
  The names of the adapters to use.
- **weights** (`Union[List[float], float]`, *optional*) --
  The adapter weight(s) to use with the underlying model. If `None`, the weights are set to `1.0` for all
  the adapters.</paramsdesc><paramgroups>0</paramgroups></docstring>

Set the currently active adapters for use in the diffusion network (e.g. unet, transformer).



<ExampleCodeBlock anchor="diffusers.loaders.PeftAdapterMixin.set_adapters.example">

Example:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.unet.set_adapters(["cinematic", "pixel"], weights=[0.5, 0.5])
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/peft.md" />

### SD3Transformer2D
https://huggingface.co/docs/diffusers/main/api/loaders/transformer_sd3.md

# SD3Transformer2D

This class is useful when *only* loading weights into a [SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel). If you need to load weights into the text encoder, or into both a text encoder and the SD3Transformer2DModel, use the [`SD3LoraLoaderMixin`](lora#diffusers.loaders.SD3LoraLoaderMixin) class instead.

The `SD3Transformer2DLoadersMixin` class currently only loads IP-Adapter weights, but will be used in the future to save weights and load LoRAs.

> [!TIP]
> To learn more about how to load LoRA weights, see the [LoRA](../../tutorials/using_peft_for_inference) loading guide.

## SD3Transformer2DLoadersMixin[[diffusers.loaders.SD3Transformer2DLoadersMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.SD3Transformer2DLoadersMixin</name><anchor>diffusers.loaders.SD3Transformer2DLoadersMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/transformer_sd3.py#L28</source><parameters>[]</parameters></docstring>
Load IP-Adapters and LoRA layers into an `SD3Transformer2DModel`.


<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>_load_ip_adapter_weights</name><anchor>diffusers.loaders.SD3Transformer2DLoadersMixin._load_ip_adapter_weights</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/transformer_sd3.py#L158</source><parameters>[{"name": "state_dict", "val": ": typing.Dict"}, {"name": "low_cpu_mem_usage", "val": ": bool = True"}]</parameters><paramsdesc>- **state_dict** (`Dict`) --
  State dict with keys "ip_adapter", which contains parameters for attention processors, and
  "image_proj", which contains parameters for image projection net.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this
  argument to `True` will raise an error.</paramsdesc><paramgroups>0</paramgroups></docstring>
Sets the IP-Adapter attention processors and image projection model, and loads the state dict.




</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/transformer_sd3.md" />

### Single files
https://huggingface.co/docs/diffusers/main/api/loaders/single_file.md

# Single files

The [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) method allows you to load:

* a model stored in a single file, which is useful if you're working with models from the diffusion ecosystem, like Automatic1111, that commonly rely on a single-file layout to store and share models
* a model stored in its originally distributed layout, which is useful if you're working with models finetuned with other services and want to load them directly into Diffusers model objects and pipelines

> [!TIP]
> Read the [Model files and layouts](../../using-diffusers/other-formats) guide to learn more about the Diffusers-multifolder layout versus the single-file layout, and how to load models stored in these different layouts.

## Supported pipelines

- [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline)
- [StableDiffusionImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/img2img#diffusers.StableDiffusionImg2ImgPipeline)
- [StableDiffusionInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/inpaint#diffusers.StableDiffusionInpaintPipeline)
- [StableDiffusionControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline)
- [StableDiffusionControlNetImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetImg2ImgPipeline)
- [StableDiffusionControlNetInpaintPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetInpaintPipeline)
- [StableDiffusionUpscalePipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/upscale#diffusers.StableDiffusionUpscalePipeline)
- [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline)
- [StableDiffusionXLImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline)
- [StableDiffusionXLInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline)
- [StableDiffusionXLInstructPix2PixPipeline](/docs/diffusers/main/en/api/pipelines/pix2pix#diffusers.StableDiffusionXLInstructPix2PixPipeline)
- [StableDiffusionXLControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetPipeline)
- [StableDiffusionXLKDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/k_diffusion#diffusers.StableDiffusionXLKDiffusionPipeline)
- [StableDiffusion3Pipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline)
- [LatentConsistencyModelPipeline](/docs/diffusers/main/en/api/pipelines/latent_consistency_models#diffusers.LatentConsistencyModelPipeline)
- [LatentConsistencyModelImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/latent_consistency_models#diffusers.LatentConsistencyModelImg2ImgPipeline)
- [StableDiffusionControlNetXSPipeline](/docs/diffusers/main/en/api/pipelines/controlnetxs#diffusers.StableDiffusionControlNetXSPipeline)
- [StableDiffusionXLControlNetXSPipeline](/docs/diffusers/main/en/api/pipelines/controlnetxs_sdxl#diffusers.StableDiffusionXLControlNetXSPipeline)
- [LEditsPPPipelineStableDiffusion](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.LEditsPPPipelineStableDiffusion)
- [LEditsPPPipelineStableDiffusionXL](/docs/diffusers/main/en/api/pipelines/ledits_pp#diffusers.LEditsPPPipelineStableDiffusionXL)
- [PIAPipeline](/docs/diffusers/main/en/api/pipelines/pia#diffusers.PIAPipeline)

## Supported models

- [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)
- `StableCascadeUNet`
- [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL)
- [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel)
- [SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)
- [FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel)

## FromSingleFileMixin[[diffusers.loaders.FromSingleFileMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.FromSingleFileMixin</name><anchor>diffusers.loaders.FromSingleFileMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file.py#L266</source><parameters>[]</parameters></docstring>

Load model weights saved in the `.ckpt` format into a [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline).



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_single_file</name><anchor>diffusers.loaders.FromSingleFileMixin.from_single_file</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file.py#L271</source><parameters>[{"name": "pretrained_model_link_or_path", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_link_or_path** (`str` or `os.PathLike`, *optional*) --
  Can be either:
  - A link to the `.ckpt` file (for example
    `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt"`) on the Hub.
  - A path to a *file* containing all pipeline weights.
- **torch_dtype** (`str` or `torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to `True`, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **original_config_file** (`str`, *optional*) --
  The path to the original config file that was used to train the model. If not provided, the config file
  will be inferred from the checkpoint file.
- **config** (`str`, *optional*) --
  Can be either:
  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline
    hosted on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing the pipeline
    component configs in Diffusers format.
- **disable_mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to disable mmap when loading a Safetensors model. This option can perform better when the model
  is on a network mount or hard drive.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline
  class). The overwritten components are passed directly to the pipeline's `__init__` method. See example
  below for more information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) from pretrained pipeline weights saved in the `.ckpt` or `.safetensors`
format. The pipeline is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.loaders.FromSingleFileMixin.from_single_file.example">

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )

>>> # Download pipeline from local file
>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")

>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
...     torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
```

</ExampleCodeBlock>



</div></div>

## FromOriginalModelMixin[[diffusers.loaders.FromOriginalModelMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>class diffusers.loaders.FromOriginalModelMixin</name><anchor>diffusers.loaders.FromOriginalModelMixin</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file_model.py#L194</source><parameters>[]</parameters></docstring>

Load pretrained weights saved in the `.ckpt` or `.safetensors` format into a model.



<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>from_single_file</name><anchor>diffusers.loaders.FromOriginalModelMixin.from_single_file</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders/single_file_model.py#L199</source><parameters>[{"name": "pretrained_model_link_or_path_or_dict", "val": ": typing.Optional[str] = None"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **pretrained_model_link_or_path_or_dict** (`str`, *optional*) --
  Can be either:
  - A link to the `.safetensors` or `.ckpt` file (for example
    `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.safetensors"`) on the Hub.
  - A path to a local *file* containing the weights of the component model.
  - A state dict containing the component model weights.
- **config** (`str`, *optional*) --
  - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted
    on the Hub.
  - A path to a *directory* (for example `./my_pipeline_directory/`) containing the pipeline component
    configs in Diffusers format.
- **subfolder** (`str`, *optional*, defaults to `""`) --
  The subfolder location of a model file within a larger model repository on the Hub or locally.
- **original_config** (`str`, *optional*) --
  Dict or path to a yaml file containing the configuration for the model in its original format.
  If a dict is provided, it will be used to initialize the model configuration.
- **torch_dtype** (`torch.dtype`, *optional*) --
  Override the default `torch.dtype` and load the model with another dtype.
- **force_download** (`bool`, *optional*, defaults to `False`) --
  Whether or not to force the (re-)download of the model weights and configuration files, overriding the
  cached versions if they exist.
- **cache_dir** (`Union[str, os.PathLike]`, *optional*) --
  Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
  is not used.

- **proxies** (`Dict[str, str]`, *optional*) --
  A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
  'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- **local_files_only** (`bool`, *optional*, defaults to `False`) --
  Whether to only load local model weights and configuration files or not. If set to True, the model
  won't be downloaded from the Hub.
- **token** (`str` or *bool*, *optional*) --
  The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
  `diffusers-cli login` (stored in `~/.huggingface`) is used.
- **revision** (`str`, *optional*, defaults to `"main"`) --
  The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
  allowed by Git.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 and
  `is_accelerate_available()` else `False`) --
  Speed up model loading by only loading the pretrained weights and not initializing the weights. This also
  tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model.
  Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument
  to `True` will raise an error.
- **disable_mmap** (`bool`, *optional*, defaults to `False`) --
  Whether to disable mmap when loading a Safetensors model. This option can perform better when the model
  is on a network mount or hard drive, which may not handle the seeky-ness of mmap very well.
- **kwargs** (remaining dictionary of keyword arguments, *optional*) --
  Can be used to overwrite load and saveable variables (for example the pipeline components of the
  specific pipeline class). The overwritten components are directly passed to the pipeline's `__init__`
  method. See example below for more information.</paramsdesc><paramgroups>0</paramgroups></docstring>

Instantiate a model from pretrained weights saved in the original `.ckpt` or `.safetensors` format. The model
is set in evaluation mode (`model.eval()`) by default.



<ExampleCodeBlock anchor="diffusers.loaders.FromOriginalModelMixin.from_single_file.example">

```py
>>> from diffusers import StableCascadeUNet

>>> ckpt_path = "https://huggingface.co/stabilityai/stable-cascade/blob/main/stage_b_lite.safetensors"
>>> model = StableCascadeUNet.from_single_file(ckpt_path)
```

</ExampleCodeBlock>


</div></div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/loaders/single_file.md" />

### DreamBooth
https://huggingface.co/docs/diffusers/main/training/dreambooth.md

# DreamBooth

[DreamBooth](https://huggingface.co/papers/2208.12242) is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images.

If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing` and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers).

This guide will explore the [train_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Navigate to the example folder with the training script and install the required dependencies for the script you're using:

```bash
cd examples/dreambooth
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To set up a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) and let us know if you have any questions or concerns.

## Script parameters

> [!WARNING]
> DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the [Training Stable Diffusion with Dreambooth using 🧨 Diffusers](https://huggingface.co/blog/dreambooth) blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters.

The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L228) function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you'd like.

For example, to train in the bf16 format:

```bash
accelerate launch train_dreambooth.py \
    --mixed_precision="bf16"
```

Some basic and important parameters to know and specify are:

- `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model
- `--instance_data_dir`: path to a folder containing the training dataset (example images)
- `--instance_prompt`: the text prompt that contains the special word for the example images
- `--train_text_encoder`: whether to also train the text encoder
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for any reason, you can resume training from that checkpoint by adding `--resume_from_checkpoint` to your training command

### Min-SNR weighting

The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch.

Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:

```bash
accelerate launch train_dreambooth.py \
  --snr_gamma=5.0
```

### Prior preservation loss

Prior preservation loss is a method that uses a model's own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions.

- `--with_prior_preservation`: whether to use prior preservation loss
- `--prior_loss_weight`: controls the influence of the prior preservation loss on the model
- `--class_data_dir`: path to a folder containing the generated class sample images
- `--class_prompt`: the text prompt describing the class of the generated sample images

```bash
accelerate launch train_dreambooth.py \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --class_data_dir="path/to/class/images" \
  --class_prompt="text prompt describing class"
```

### Train text encoder

To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you'll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by:

```bash
accelerate launch train_dreambooth.py \
  --train_text_encoder
```

## Training script

DreamBooth comes with its own dataset classes:

- [`DreamBoothDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L604): preprocesses the images and class images, and tokenizes the prompts for training
- [`PromptDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L738): generates the prompt embeddings to generate the class images

If you enabled [prior preservation loss](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L842), the class images are generated here:

```py
sample_dataset = PromptDataset(args.class_prompt, num_new_images)
sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)

sample_dataloader = accelerator.prepare(sample_dataloader)
pipeline.to(accelerator.device)

for example in tqdm(
    sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
):
    images = pipeline(example["prompt"]).images
```

Next is the [`main()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L799) function which handles setting up the dataset for training and the training loop itself. The script loads the [tokenizer](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L898), [scheduler and models](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L912C1-L912C1):

```py
# Load the tokenizer
if args.tokenizer_name:
    tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False)
elif args.pretrained_model_name_or_path:
    tokenizer = AutoTokenizer.from_pretrained(
        args.pretrained_model_name_or_path,
        subfolder="tokenizer",
        revision=args.revision,
        use_fast=False,
    )

# Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
text_encoder = text_encoder_cls.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)

if model_has_vae(args):
    vae = AutoencoderKL.from_pretrained(
        args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision
    )
else:
    vae = None

unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)
```

Then, it's time to [create the training dataset](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1073) and DataLoader from `DreamBoothDataset`:

```py
train_dataset = DreamBoothDataset(
    instance_data_root=args.instance_data_dir,
    instance_prompt=args.instance_prompt,
    class_data_root=args.class_data_dir if args.with_prior_preservation else None,
    class_prompt=args.class_prompt,
    class_num=args.num_class_images,
    tokenizer=tokenizer,
    size=args.resolution,
    center_crop=args.center_crop,
    encoder_hidden_states=pre_computed_encoder_hidden_states,
    class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states,
    tokenizer_max_length=args.tokenizer_max_length,
)

train_dataloader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=args.train_batch_size,
    shuffle=True,
    collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation),
    num_workers=args.dataloader_num_workers,
)
```

Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1151) takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss.
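
The core of the loop looks roughly like the sketch below. This is a simplified illustration (assuming a VAE is present and epsilon prediction, with `F` being `torch.nn.functional`); see the script for the full details such as prior preservation and text encoder training.

```py
# encode images to latents and add noise at randomly sampled timesteps
latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor

noise = torch.randn_like(latents)
timesteps = torch.randint(
    0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
).long()
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

# predict the noise residual and compute the loss
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(model_pred.float(), noise.float(), reduction="mean")

accelerator.backward(loss)
optimizer.step()
optimizer.zero_grad()
```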

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

You're now ready to launch the training script! 🚀

For this guide, you'll download some images of a [dog](https://huggingface.co/datasets/diffusers/dog-example) and store them in a directory. But remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).

```py
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, `INSTANCE_DIR` to the path where you just downloaded the dog images to, and `OUTPUT_DIR` to where you want to save the model. You'll use `sks` as the special word to tie the training to.

If you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command:

```bash
--validation_prompt="a photo of a sks dog"
--num_validation_images=4
--validation_steps=100
```

One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth.

<hfoptions id="gpu-select">
<hfoption id="16GB">

On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes:

```bash
pip install bitsandbytes
```

Then, add the following parameter to your training command:

```bash
accelerate launch train_dreambooth.py \
  --gradient_checkpointing \
  --use_8bit_adam \
```

</hfoption>
<hfoption id="12GB">

On a 12GB GPU, you'll need the bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and setting the gradients to `None` instead of zero to reduce memory usage.

```bash
accelerate launch train_dreambooth.py \
  --use_8bit_adam \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention \
  --set_grads_to_none \
```

</hfoption>
<hfoption id="8GB">

On an 8GB GPU, you'll need [DeepSpeed](https://www.deepspeed.ai/) to offload some of the tensors from the vRAM to either the CPU or NVMe, allowing training with less GPU memory.

Run the following command to configure your 🤗 Accelerate environment:

```bash
accelerate config
```

During configuration, confirm that you want to use DeepSpeed. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the [DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options.

You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch.
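
For reference, a hedged sketch of what that optimizer swap might look like inside the training script (assuming the `deepspeed` package is installed and `params_to_optimize` holds the trainable parameters):

```py
from deepspeed.ops.adam import DeepSpeedCPUAdam

# use DeepSpeed's CPU-offloaded Adam instead of torch.optim.AdamW
optimizer_class = DeepSpeedCPUAdam
optimizer = optimizer_class(
    params_to_optimize,
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```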

bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment.

That's it! You don't need to add any additional parameters to your training command.

</hfoption>
</hfoptions>

```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export INSTANCE_DIR="./dog"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400 \
  --push_to_hub
```

Once training is complete, you can use your newly trained model for inference!

> [!TIP]
> Can't wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed.
>
> ```py
> from diffusers import DiffusionPipeline, UNet2DConditionModel
> from transformers import CLIPTextModel
> import torch
>
> unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet")
>
> # if you trained with `--train_text_encoder`, make sure to also load the text encoder
> text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/text_encoder")
>
> pipeline = DiffusionPipeline.from_pretrained(
>     "stable-diffusion-v1-5/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16,
> ).to("cuda")
>
> image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
> image.save("dog-bucket.png")
> ```

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```

## LoRA

LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MB). Use the [train_dreambooth_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py) script to train with LoRA.

The LoRA training script is discussed in more detail in the [LoRA training](lora) guide.

## Stable Diffusion XL

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [train_dreambooth_lora_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py) script to train a SDXL model with LoRA.

The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.

## DeepFloyd IF

DeepFloyd IF is a cascading pixel diffusion model with three stages. The first stage generates a base image and the second and third stages progressively upscale the base image into a high-resolution 1024x1024 image. Use the [train_dreambooth_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py) or [train_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) scripts to train a DeepFloyd IF model with LoRA or the full model.

DeepFloyd IF uses predicted variance, but the Diffusers training scripts use predicted error, so the trained DeepFloyd IF models are switched to a fixed variance schedule. The training scripts will update the scheduler config of the fully trained model for you. However, when you load the saved LoRA weights you must also update the pipeline's scheduler config.

```py
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", use_safetensors=True)

pipe.load_lora_weights("<lora weights path>")

# Update scheduler config to fixed variance schedule
pipe.scheduler = pipe.scheduler.__class__.from_config(pipe.scheduler.config, variance_type="fixed_small")
```

The stage 2 model requires additional validation images to upscale. You can download and use a downsized version of the training images for this.

```py
from huggingface_hub import snapshot_download

local_dir = "./dog_downsized"
snapshot_download(
    "diffusers/dog-example-downsized",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

The code samples below provide a brief overview of how to train a DeepFloyd IF model with a combination of DreamBooth and LoRA. Some important parameters to note are:

* `--resolution=64`, a much smaller resolution is required because DeepFloyd IF is a pixel diffusion model that works on uncompressed pixels, so the input images must be smaller
* `--pre_compute_text_embeddings`, compute the text embeddings ahead of time to save memory because the [T5Model](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5Model) can take up a lot of memory (see the sketch after this list)
* `--tokenizer_max_length=77`, T5 as the text encoder supports a longer default text length, but the default model encoding procedure uses a shorter text length
* `--text_encoder_use_attention_mask`, to pass the attention mask to the text encoder
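
A hedged sketch of what `--pre_compute_text_embeddings` does inside the script: the prompts are encoded once up front so the large T5 encoder can be freed before training starts (the helper name is illustrative).

```py
# encode the prompts once, then drop the text encoder and tokenizer to free memory
pre_computed_encoder_hidden_states = compute_text_embeddings(args.instance_prompt)

text_encoder = None
tokenizer = None

gc.collect()
torch.cuda.empty_cache()
```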

<hfoptions id="IF-DreamBooth">
<hfoption id="Stage 1 LoRA DreamBooth">

Training stage 1 of DeepFloyd IF with LoRA and DreamBooth requires ~28GB of memory.

```bash
export MODEL_NAME="DeepFloyd/IF-I-XL-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_dog_lora"

accelerate launch train_dreambooth_lora.py \
  --report_to wandb \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a sks dog" \
  --resolution=64 \
  --train_batch_size=4 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --scale_lr \
  --max_train_steps=1200 \
  --validation_prompt="a sks dog" \
  --validation_epochs=25 \
  --checkpointing_steps=100 \
  --pre_compute_text_embeddings \
  --tokenizer_max_length=77 \
  --text_encoder_use_attention_mask
```

</hfoption>
<hfoption id="Stage 2 LoRA DreamBooth">

For stage 2 of DeepFloyd IF with LoRA and DreamBooth, pay attention to these parameters:

* `--validation_images`, the images to upscale during validation
* `--class_labels_conditioning=timesteps`, to additionally condition the UNet as required in stage 2
* `--learning_rate=1e-6`, a lower learning rate is used compared to stage 1
* `--resolution=256`, the expected resolution for the upscaler

```bash
export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_dog_upscale"
export VALIDATION_IMAGES="dog_downsized/image_1.png dog_downsized/image_2.png dog_downsized/image_3.png dog_downsized/image_4.png"

python train_dreambooth_lora.py \
    --report_to wandb \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --instance_data_dir=$INSTANCE_DIR \
    --output_dir=$OUTPUT_DIR \
    --instance_prompt="a sks dog" \
    --resolution=256 \
    --train_batch_size=4 \
    --gradient_accumulation_steps=1 \
    --learning_rate=1e-6 \
    --max_train_steps=2000 \
    --validation_prompt="a sks dog" \
    --validation_epochs=100 \
    --checkpointing_steps=500 \
    --pre_compute_text_embeddings \
    --tokenizer_max_length=77 \
    --text_encoder_use_attention_mask \
    --validation_images $VALIDATION_IMAGES \
    --class_labels_conditioning=timesteps
```

</hfoption>
<hfoption id="Stage 1 DreamBooth">

For stage 1 of DeepFloyd IF with DreamBooth, pay attention to these parameters:

* `--skip_save_text_encoder`, to skip saving the full T5 text encoder with the finetuned model
* `--use_8bit_adam`, to use 8-bit Adam optimizer to save memory due to the size of the optimizer state when training the full model
* `--learning_rate=1e-7`, a really low learning rate should be used for full model training otherwise the model quality is degraded (you can use a higher learning rate with a larger batch size)

Training with 8-bit Adam and a batch size of 4, the full model can be trained with ~48GB of memory.

```bash
export MODEL_NAME="DeepFloyd/IF-I-XL-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_if"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=64 \
  --train_batch_size=4 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-7 \
  --max_train_steps=150 \
  --validation_prompt "a photo of sks dog" \
  --validation_steps 25 \
  --text_encoder_use_attention_mask \
  --tokenizer_max_length 77 \
  --pre_compute_text_embeddings \
  --use_8bit_adam \
  --set_grads_to_none \
  --skip_save_text_encoder \
  --push_to_hub
```

</hfoption>
<hfoption id="Stage 2 DreamBooth">

For stage 2 of DeepFloyd IF with DreamBooth, pay attention to these parameters:

* `--learning_rate=5e-6`, use a lower learning rate with a smaller effective batch size
* `--resolution=256`, the expected resolution for the upscaler
* `--train_batch_size=2` and `--gradient_accumulation_steps=6`, training on images with faces requires a larger effective batch size

```bash
export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_dog_upscale"
export VALIDATION_IMAGES="dog_downsized/image_1.png dog_downsized/image_2.png dog_downsized/image_3.png dog_downsized/image_4.png"

accelerate launch train_dreambooth.py \
  --report_to wandb \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a sks dog" \
  --resolution=256 \
  --train_batch_size=2 \
  --gradient_accumulation_steps=6 \
  --learning_rate=5e-6 \
  --max_train_steps=2000 \
  --validation_prompt="a sks dog" \
  --validation_steps=150 \
  --checkpointing_steps=500 \
  --pre_compute_text_embeddings \
  --tokenizer_max_length=77 \
  --text_encoder_use_attention_mask \
  --validation_images $VALIDATION_IMAGES \
  --class_labels_conditioning timesteps \
  --push_to_hub
```

</hfoption>
</hfoptions>

### Training tips

Training the DeepFloyd IF model can be challenging, but here are some tips that we've found helpful:

- LoRA is sufficient for training the stage 1 model because the model's low resolution makes representing finer details difficult regardless.
- For common or simple objects, you don't necessarily need to finetune the upscaler. Make sure the prompt passed to the upscaler is adjusted to remove the new token from the instance prompt. For example, if your stage 1 prompt is "a sks dog", then your stage 2 prompt should be "a dog" (see the inference sketch after these tips).
- For finer details like faces, fully training the stage 2 upscaler is better than training the stage 2 model with LoRA. It also helps to use lower learning rates with larger batch sizes.
- Lower learning rates should be used to train the stage 2 model.
- The [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler) works better than the DPMSolver used in the training scripts.
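
Putting these tips together, inference with a trained stage 1 LoRA and an unfinetuned stage 2 upscaler could look roughly like the sketch below. This is a hedged example: the LoRA path and prompts mirror the training commands above, and the scheduler switch repeats the earlier fixed variance snippet.

```py
import torch
from diffusers import DiffusionPipeline

# Stage 1: load the DreamBooth LoRA and switch to a fixed variance schedule
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", torch_dtype=torch.float16)
stage_1.load_lora_weights("dreambooth_dog_lora")
stage_1.scheduler = stage_1.scheduler.__class__.from_config(stage_1.scheduler.config, variance_type="fixed_small")
stage_1.enable_model_cpu_offload()

# Stage 2: the unfinetuned upscaler, reusing stage 1's text encoder outputs
stage_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0", text_encoder=None, torch_dtype=torch.float16)
stage_2.enable_model_cpu_offload()

# encode the instance prompt for stage 1, and the prompt without the new token for stage 2
prompt_embeds, negative_embeds = stage_1.encode_prompt("a sks dog")
upscale_embeds, upscale_negative_embeds = stage_1.encode_prompt("a dog")

image = stage_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images
image = stage_2(image=image, prompt_embeds=upscale_embeds, negative_prompt_embeds=upscale_negative_embeds).images[0]
image.save("sks_dog_upscaled.png")
```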

## Next steps

Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful:

- Learn how to [load a DreamBooth](../using-diffusers/dreambooth) model for inference if you trained your model with LoRA.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/dreambooth.md" />

### LoRA
https://huggingface.co/docs/diffusers/main/training/lora.md

# LoRA

> [!WARNING]
> This is experimental and the API may change in the future.

[LoRA (Low-Rank Adaptation of Large Language Models)](https://hf.co/papers/2106.09685) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speed up training.
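
Conceptually, LoRA freezes the pretrained weight matrix and learns a low-rank update on top of it. The toy sketch below illustrates the idea (it is not the PEFT implementation):

```py
import torch

# a LoRA update of rank r adds r * (d_in + d_out) trainable parameters
# instead of the d_in * d_out parameters of full finetuning
d_in, d_out, rank = 768, 768, 4
W = torch.randn(d_out, d_in)        # frozen pretrained weight
A = torch.randn(rank, d_in) * 0.01  # trainable, small random init
B = torch.zeros(d_out, rank)        # trainable, zero init so training starts from W

x = torch.randn(1, d_in)
y = x @ (W + B @ A).T               # LoRA-adapted forward pass
```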

> [!TIP]
> LoRA is very versatile and supported for [DreamBooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py), [Kandinsky 2.2](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_lora_decoder.py), [Stable Diffusion XL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py), [text-to-image](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py), and [Wuerstchen](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_lora_prior.py).

This guide will explore the [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Navigate to the example folder with the training script and install the required dependencies for the script you're using:

```bash
cd examples/text_to_image
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) and let us know if you have any questions or concerns.

## Script parameters

The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L85) function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you'd like.

For example, to increase the number of epochs to train:

```bash
accelerate launch train_text_to_image_lora.py \
  --num_train_epochs=150 \
```

Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the LoRA relevant parameters:

- `--rank`: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters
- `--learning_rate`: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate

## Training script

The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L371) function, and if you need to adapt the training script, this is where you'll make your changes.

As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the LoRA relevant parts of the script.

<hfoptions id="lora">
<hfoption id="UNet">

Diffusers uses `~peft.LoraConfig` from the [PEFT](https://hf.co/docs/peft) library to set up the parameters of the LoRA adapter such as the rank, alpha, and which modules to insert the LoRA weights into. The adapter is added to the UNet, and only the LoRA layers are filtered for optimization in `lora_layers`.

```py
unet_lora_config = LoraConfig(
    r=args.rank,
    lora_alpha=args.rank,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)

unet.add_adapter(unet_lora_config)
lora_layers = filter(lambda p: p.requires_grad, unet.parameters())
```

</hfoption>
<hfoption id="text encoder">

Diffusers also supports finetuning the text encoder with LoRA from the [PEFT](https://hf.co/docs/peft) library when necessary such as finetuning Stable Diffusion XL (SDXL). The `~peft.LoraConfig` is used to configure the parameters of the LoRA adapter which are then added to the text encoder, and only the LoRA layers are filtered for training.

```py
text_lora_config = LoraConfig(
    r=args.rank,
    lora_alpha=args.rank,
    init_lora_weights="gaussian",
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
)

text_encoder_one.add_adapter(text_lora_config)
text_encoder_two.add_adapter(text_lora_config)
text_lora_parameters_one = list(filter(lambda p: p.requires_grad, text_encoder_one.parameters()))
text_lora_parameters_two = list(filter(lambda p: p.requires_grad, text_encoder_two.parameters()))
```

</hfoption>
</hfoptions>

The [optimizer](https://github.com/huggingface/diffusers/blob/e4b8f173b97731686e290b2eb98e7f5df2b1b322/examples/text_to_image/train_text_to_image_lora.py#L529) is initialized with the `lora_layers` because these are the only weights that'll be optimized:

```py
optimizer = optimizer_cls(
    lora_layers,
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py!

## Launch the script

Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

Let's train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:

- saved model checkpoints
- `pytorch_lora_weights.safetensors` (the trained LoRA weights)

If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.

> [!WARNING]
> A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM.

```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="/sddata/finetune/lora/naruto"
export HUB_MODEL_ID="naruto-lora"
export DATASET_NAME="lambdalabs/naruto-blip-captions"

accelerate launch --mixed_precision="fp16"  train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --dataloader_num_workers=8 \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --push_to_hub \
  --hub_model_id=${HUB_MODEL_ID} \
  --report_to=wandb \
  --checkpointing_steps=500 \
  --validation_prompt="A naruto with blue eyes." \
  --seed=1337
```

Once training has been completed, you can use your model for inference:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("A naruto with blue eyes").images[0]
```

## Next steps

Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful:

- Learn how to [load different LoRA formats](../tutorials/using_peft_for_inference) trained using community trainers like Kohya and TheLastBen.
- Learn how to use and [combine multiple LoRA's](../tutorials/using_peft_for_inference) with PEFT for inference.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/lora.md" />

### Distributed inference
https://huggingface.co/docs/diffusers/main/training/distributed_inference.md

# Distributed inference

Distributed inference splits the workload across multiple GPUs. It is a useful technique for fitting larger models in memory, and it can process multiple prompts in parallel for higher throughput.

This guide will show you how to use [Accelerate](https://huggingface.co/docs/accelerate/index) and [PyTorch Distributed](https://pytorch.org/tutorials/beginner/dist_overview.html) for distributed inference.

## Accelerate

Accelerate is a library designed to simplify inference and training on multiple accelerators by handling the setup, allowing users to focus on their PyTorch code.

Install Accelerate with the following command.

```bash
uv pip install accelerate
```

Initialize an [accelerate.PartialState](https://huggingface.co/docs/accelerate/main/en/package_reference/state#accelerate.PartialState) class in a Python file to create a distributed environment. The [accelerate.PartialState](https://huggingface.co/docs/accelerate/main/en/package_reference/state#accelerate.PartialState) class handles process management, device placement, and process coordination.

Move the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) to `accelerate.PartialState.device` to assign a GPU to each process.

```py
import torch
from accelerate import PartialState
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.float16
)
distributed_state = PartialState()
pipeline.to(distributed_state.device)
```

Use the [split_between_processes](https://huggingface.co/docs/accelerate/main/en/package_reference/state#accelerate.PartialState.split_between_processes) utility as a context manager to automatically distribute the prompts between the number of processes.

```py
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipeline(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")
```

Call `accelerate launch` to run the script and use the `--num_processes` argument to set the number of GPUs to use.

```bash
accelerate launch run_distributed.py --num_processes=2
```

> [!TIP]
> Refer to this minimal example [script](https://gist.github.com/sayakpaul/cfaebd221820d7b43fae638b4dfa01ba) for running inference across multiple GPUs. To learn more, take a look at the [Distributed Inference with 🤗 Accelerate](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide.

## PyTorch Distributed

PyTorch [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) enables [data parallelism](https://huggingface.co/spaces/nanotron/ultrascale-playbook?section=data_parallelism), which replicates the same model on each device, to process different batches of data in parallel.

Import `torch.distributed` and `torch.multiprocessing` into a Python file to set up the distributed process group and to spawn the processes for inference on each GPU.

```py
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.float16,
)
```

Create a function for inference with [init_process_group](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group). This method creates a distributed environment with the backend type, the `rank` of the current process, and the `world_size` or number of processes participating (for example, 2 GPUs would be `world_size=2`).

Move the pipeline to `rank` and use `get_rank` to assign a GPU to each process. Each process handles a different prompt.

```py
def run_inference(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    pipeline.to(rank)

    if torch.distributed.get_rank() == 0:
        prompt = "a dog"
    elif torch.distributed.get_rank() == 1:
        prompt = "a cat"

    image = pipeline(prompt).images[0]
    image.save(f"./{prompt.replace(' ', '_')}.png")
```

Use [mp.spawn](https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn) to create the number of processes defined in `world_size`.

```py
def main():
    world_size = 2
    mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True)


if __name__ == "__main__":
    main()
```

Call `torchrun` to run the inference script and use the `--nproc_per_node` argument to set the number of GPUs to use.

```bash
torchrun --nproc_per_node=2 run_distributed.py
```

## device_map

The `device_map` argument enables distributed inference by automatically placing model components on separate GPUs. This is especially useful when a model doesn't fit on a single GPU. You can use `device_map` to selectively load and unload the required model components at a given stage as shown in the example below (assumes two GPUs are available).

Set `device_map="balanced"` to evenly distributes the text encoders on all available GPUs. You can use the `max_memory` argument to allocate a maximum amount of memory for each text encoder. Don't load any other pipeline components to avoid memory usage.

```py
from diffusers import FluxPipeline
import torch

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=None,
    vae=None,
    device_map="balanced",
    max_memory={0: "16GB", 1: "16GB"},
    torch_dtype=torch.bfloat16
)
with torch.no_grad():
    print("Encoding prompts.")
    prompt_embeds, pooled_prompt_embeds, text_ids = pipeline.encode_prompt(
        prompt=prompt, prompt_2=None, max_sequence_length=512
    )
```

After the text embeddings are computed, remove them from the GPU to make space for the diffusion transformer.

```py
import gc 

def flush():
    gc.collect()
    torch.cuda.empty_cache()
    torch.cuda.reset_max_memory_allocated()
    torch.cuda.reset_peak_memory_stats()

del pipeline.text_encoder
del pipeline.text_encoder_2
del pipeline.tokenizer
del pipeline.tokenizer_2
del pipeline

flush()
```

Set `device_map="auto"` to automatically distribute the model on the two GPUs. This strategy places a model on the fastest device first before placing a model on a slower device like a CPU or hard drive if needed. The trade-off of storing model parameters on slower devices is slower inference latency.

```py
from diffusers import AutoModel
import torch 

transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", 
    subfolder="transformer",
    device_map="auto",
    torch_dtype=torch.bfloat16
)
```

> [!TIP]
> Run `pipeline.hf_device_map` to see how the various models are distributed across devices. This is useful for tracking model device placement. You can also call `hf_device_map` on the transformer model to see how it is distributed.

Add the transformer model to the pipeline and set the `output_type="latent"` to generate the latents.

```py
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    text_encoder=None,
    text_encoder_2=None,
    tokenizer=None,
    tokenizer_2=None,
    vae=None,
    transformer=transformer,
    torch_dtype=torch.bfloat16
)

print("Running denoising.")
height, width = 768, 1360
latents = pipeline(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    num_inference_steps=50,
    guidance_scale=3.5,
    height=height,
    width=width,
    output_type="latent",
).images
```

Remove the pipeline and transformer from memory and load a VAE to decode the latents. The VAE is typically small enough to be loaded on a single device.

```py
import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

vae = AutoencoderKL.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16).to("cuda")
vae_scale_factor = 2 ** (len(vae.config.block_out_channels) - 1)
image_processor = VaeImageProcessor(vae_scale_factor=vae_scale_factor)

with torch.no_grad():
    print("Running decoding.")
    latents = FluxPipeline._unpack_latents(latents, height, width, vae_scale_factor)
    latents = (latents / vae.config.scaling_factor) + vae.config.shift_factor

    image = vae.decode(latents, return_dict=False)[0]
    image = image_processor.postprocess(image, output_type="pil")
    image[0].save("split_transformer.png")
```

By selectively loading and unloading the models you need at a given stage and sharding the largest models across multiple GPUs, it is possible to run inference with large models on consumer GPUs.

## Context parallelism

[Context parallelism](https://huggingface.co/spaces/nanotron/ultrascale-playbook?section=context_parallelism) splits input sequences across multiple GPUs to reduce memory usage. Each GPU processes its own slice of the sequence.

Use [set_attention_backend()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.set_attention_backend) to switch to a more optimized attention backend. Refer to this [table](../optimization/attention_backends#available-backends) for a complete list of available backends.

### Ring Attention

Key (K) and value (V) representations communicate between devices using [Ring Attention](https://huggingface.co/papers/2310.01889). This ensures each split sees every other token's K/V. Each GPU computes attention for its local K/V slice and passes it to the next GPU in the ring, so no single GPU ever holds the full sequence, which keeps per-GPU memory usage low.

Pass a [ContextParallelConfig](/docs/diffusers/main/en/api/parallel#diffusers.ContextParallelConfig) to the `parallel_config` argument of the transformer model. The config supports the `ring_degree` argument that determines how many devices to use for Ring Attention.

```py
import torch
from diffusers import AutoModel, QwenImagePipeline, ContextParallelConfig

try:
    torch.distributed.init_process_group("nccl")
    rank = torch.distributed.get_rank()
    device = torch.device("cuda", rank % torch.cuda.device_count())
    torch.cuda.set_device(device)
    
    transformer = AutoModel.from_pretrained("Qwen/Qwen-Image", subfolder="transformer", torch_dtype=torch.bfloat16, parallel_config=ContextParallelConfig(ring_degree=2))
    pipeline = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", transformer=transformer, torch_dtype=torch.bfloat16, device_map="cuda")
    pipeline.transformer.set_attention_backend("flash")

    prompt = """
    cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
    highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
    """
    
    # Must specify generator so all ranks start with same latents (or pass your own)
    generator = torch.Generator().manual_seed(42)
    image = pipeline(prompt, num_inference_steps=50, generator=generator).images[0]
    
    if rank == 0:
        image.save("output.png")

except Exception as e:
    print(f"An error occurred: {e}")
    torch.distributed.breakpoint()
    raise

finally:
    if torch.distributed.is_initialized():
        torch.distributed.destroy_process_group()
```

### Ulysses Attention

[Ulysses Attention](https://huggingface.co/papers/2309.14509) splits a sequence across GPUs and performs an *all-to-all* communication (every device sends/receives data to every other device). Each GPU ends up with all tokens for only a subset of attention heads. Each GPU computes attention locally on all tokens for its head, then performs another all-to-all to regroup results by tokens for the next layer.

[ContextParallelConfig](/docs/diffusers/main/en/api/parallel#diffusers.ContextParallelConfig) supports Ulysses Attention through the `ulysses_degree` argument. This determines how many devices to use for Ulysses Attention.

Pass the [ContextParallelConfig](/docs/diffusers/main/en/api/parallel#diffusers.ContextParallelConfig) to `enable_parallelism()`.

```py
pipeline.transformer.enable_parallelism(config=ContextParallelConfig(ulysses_degree=2))
```
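
In a full script, this could look like the Ring Attention example above, with parallelism enabled after loading instead of at load time (a sketch, assuming the process group and device are already initialized as shown earlier):

```py
transformer = AutoModel.from_pretrained(
    "Qwen/Qwen-Image", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", transformer=transformer, torch_dtype=torch.bfloat16, device_map="cuda"
)
pipeline.transformer.enable_parallelism(config=ContextParallelConfig(ulysses_degree=2))
pipeline.transformer.set_attention_backend("flash")
```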

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/distributed_inference.md" />

### Overview
https://huggingface.co/docs/diffusers/main/training/overview.md

# Overview

🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in [diffusers/examples](https://github.com/huggingface/diffusers/tree/main/examples).

Each training script is:

- **Self-contained**: the training script does not depend on any local files, and all packages required to run the script are installed from the `requirements.txt` file.
- **Easy-to-tweak**: the training scripts are an example of how to train a diffusion model for a specific task and won't work out-of-the-box for every training scenario. You'll likely need to adapt the training script for your specific use-case. To help you with that, we've fully exposed the data preprocessing code and the training loop so you can modify it for your own use.
- **Beginner-friendly**: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out.
- **Single-purpose**: each training script is expressly designed for only one task to keep it readable and understandable.

Our current collection of training scripts include:

| Training | SDXL-support | LoRA-support |
|---|---|---|
| [unconditional image generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) |  |  |
| [text-to-image](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) | 👍 | 👍 |
| [textual inversion](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb) |  |  |
| [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb) | 👍 | 👍 |
| [ControlNet](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) | 👍 |  |
| [InstructPix2Pix](https://github.com/huggingface/diffusers/tree/main/examples/instruct_pix2pix) | 👍 |  |
| [Custom Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion) |  |  |
| [T2I-Adapters](https://github.com/huggingface/diffusers/tree/main/examples/t2i_adapter) | 👍 |  |
| [Kandinsky 2.2](https://github.com/huggingface/diffusers/tree/main/examples/kandinsky2_2/text_to_image) |  | 👍 |
| [Wuerstchen](https://github.com/huggingface/diffusers/tree/main/examples/wuerstchen/text_to_image) |  | 👍 |

These examples are **actively** maintained, so please feel free to open an issue if they aren't working as expected. If you feel like another training example should be included, you're more than welcome to start a [Feature Request](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=) to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose.

## Install

Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the folder of the training script (for example, [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)) and install the `requirements.txt` file. Some training scripts have a specific requirement file for SDXL or LoRA. If you're using one of these scripts, make sure you install its corresponding requirements file.

```bash
cd examples/dreambooth
pip install -r requirements.txt
# to train SDXL with DreamBooth
pip install -r requirements_sdxl.txt
```

To speed up training and reduce memory usage, we recommend:

- using PyTorch 2.0 or higher to automatically use [scaled dot product attention](../optimization/fp16#scaled-dot-product-attention) during training (you don't need to make any changes to the training code)
- installing [xFormers](../optimization/xformers) to enable memory-efficient attention

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/overview.md" />

### Kandinsky 2.2
https://huggingface.co/docs/diffusers/main/training/kandinsky.md

# Kandinsky 2.2

> [!WARNING]
> This script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.

Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model's embeddings. That's why you'll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models.

Depending on your GPU, you may need to enable `gradient_checkpointing` (⚠️ not supported for the prior model!), `mixed_precision`, and `gradient_accumulation_steps` to help fit the model into memory and to speed up training. You can reduce your memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) (version [v0.0.16](https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212) fails for training on some GPUs so you may need to install a development version instead).

This guide explores the [train_text_to_image_prior.py](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py) and the [train_text_to_image_decoder.py](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py) scripts to help you become more familiar with them, and how you can adapt them for your own use-case.

Before running the scripts, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/kandinsky2_2/text_to_image
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training scripts that are important for understanding how to modify them, but they don't cover every aspect of the scripts in detail. If you're interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns.

## Script parameters

The training scripts provide many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L190) function. The training scripts provide default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_text_to_image_prior.py \
  --mixed_precision="fp16"
```

Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so let's get straight to a walkthrough of the Kandinsky training scripts!

### Min-SNR weighting

The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch.

Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:

```bash
accelerate launch train_text_to_image_prior.py \
  --snr_gamma=5.0
```
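
Internally, the scripts rescale the per-sample loss by the clipped signal-to-noise ratio. A hedged sketch of that weighting, using `compute_snr` from `diffusers.training_utils` (shown here for `epsilon` prediction):

```py
import torch
import torch.nn.functional as F
from diffusers.training_utils import compute_snr

# weight each sample's MSE loss by min(SNR, gamma) / SNR
snr = compute_snr(noise_scheduler, timesteps)
mse_loss_weights = torch.stack([snr, args.snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0] / snr

loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
loss = loss.mean()
```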

## Training script

The training script is also similar to the [Text-to-image](text2image#training-script) training guide, but it's been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts.

<hfoptions id="script">
<hfoption id="prior model">

The [`main()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L441) function contains the code for preparing the dataset and training the model.

One of the main differences you'll notice right away is that the training script also loads a [CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor) - in addition to a scheduler and tokenizer - for preprocessing images and a [CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection) model for encoding the images:

```py
noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample")
image_processor = CLIPImageProcessor.from_pretrained(
    args.pretrained_prior_model_name_or_path, subfolder="image_processor"
)
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer")

with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
    image_encoder = CLIPVisionModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype
    ).eval()
    text_encoder = CLIPTextModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype
    ).eval()
```

Kandinsky uses a [PriorTransformer](/docs/diffusers/main/en/api/models/prior_transformer#diffusers.PriorTransformer) to generate the image embeddings, so you'll want to set up the optimizer to learn the prior model's parameters.

```py
prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior")
prior.train()
optimizer = optimizer_cls(
    prior.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Next, the input captions are tokenized, and images are [preprocessed](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L632) by the [CLIPImageProcessor](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPImageProcessor):

```py
def preprocess_train(examples):
    images = [image.convert("RGB") for image in examples[image_column]]
    examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values
    examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples)
    return examples
```

Finally, the [training loop](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L718) converts the input images into latents, adds noise to the image embeddings, and makes a prediction:

```py
model_pred = prior(
    noisy_latents,
    timestep=timesteps,
    proj_embedding=prompt_embeds,
    encoder_hidden_states=text_encoder_hidden_states,
    attention_mask=text_mask,
).predicted_image_embedding
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

</hfoption>
<hfoption id="decoder model">

The [`main()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py#L440) function contains the code for preparing the dataset and training the model.

Unlike the prior model, the decoder initializes a [VQModel](/docs/diffusers/main/en/api/models/vq#diffusers.VQModel) to decode the latents into images and it uses a [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel):

```py
with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
    vae = VQModel.from_pretrained(
        args.pretrained_decoder_model_name_or_path, subfolder="movq", torch_dtype=weight_dtype
    ).eval()
    image_encoder = CLIPVisionModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype
    ).eval()
unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet")
```

Next, the script includes several image transforms and a [preprocessing](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py#L622) function for applying the transforms to the images and returning the pixel values:

```py
def preprocess_train(examples):
    images = [image.convert("RGB") for image in examples[image_column]]
    examples["pixel_values"] = [train_transforms(image) for image in images]
    examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values
    return examples
```

Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py#L706) handles converting the images to latents, adding noise, and predicting the noise residual:

```py
model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_kwargs).sample[:, :4]
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

</hfoption>
</hfoptions>

## Launch the script

Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

You'll train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters, but you can also create and train on your own dataset by following the [Create a dataset for training](create_dataset) guide. Set the environment variable `DATASET_NAME` to the name of the dataset on the Hub or if you're training on your own files, set the environment variable `TRAIN_DIR` to a path to your dataset.

If you’re training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.

> [!TIP]
> To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You’ll also need to add the `--validation_prompt` to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.

<hfoptions id="training-inference">
<hfoption id="prior model">

```bash
export DATASET_NAME="lambdalabs/naruto-blip-captions"

accelerate launch --mixed_precision="fp16"  train_text_to_image_prior.py \
  --dataset_name=$DATASET_NAME \
  --resolution=768 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --checkpoints_total_limit=3 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --validation_prompts="A robot naruto, 4k photo" \
  --report_to="wandb" \
  --push_to_hub \
  --output_dir="kandi2-prior-naruto-model"
```

</hfoption>
<hfoption id="decoder model">

```bash
export DATASET_NAME="lambdalabs/naruto-blip-captions"

accelerate launch --mixed_precision="fp16"  train_text_to_image_decoder.py \
  --dataset_name=$DATASET_NAME \
  --resolution=768 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --checkpoints_total_limit=3 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --validation_prompts="A robot naruto, 4k photo" \
  --report_to="wandb" \
  --push_to_hub \
  --output_dir="kandi2-decoder-naruto-model"
```

</hfoption>
</hfoptions>

Once training is finished, you can use your newly trained model for inference!

<hfoptions id="training-inference">
<hfoption id="prior model">

```py
from diffusers import AutoPipelineForText2Image, DiffusionPipeline
import torch

prior_pipeline = DiffusionPipeline.from_pretrained("path/to/saved/model", torch_dtype=torch.float16)
prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()}
pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A robot naruto, 4k photo"
negative_prompt = "low quality, bad quality"
image = pipeline(prompt=prompt, negative_prompt=negative_prompt).images[0]
```

> [!TIP]
> Feel free to replace `kandinsky-community/kandinsky-2-2-decoder` with your own trained decoder checkpoint!

</hfoption>
<hfoption id="decoder model">

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt="A robot naruto, 4k photo"
image = pipeline(prompt=prompt).images[0]
```

For the decoder model, you can also perform inference from a saved checkpoint, which can be useful for viewing intermediate results. In this case, load the checkpoint into the UNet:

```py
from diffusers import AutoPipelineForText2Image, UNet2DConditionModel
import torch

unet = UNet2DConditionModel.from_pretrained("path/to/saved/model" + "/checkpoint-<N>/unet")

pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", unet=unet, torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

image = pipeline(prompt="A robot naruto, 4k photo").images[0]
```

</hfoption>
</hfoptions>

## Next steps

Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful:

- Read the [Kandinsky](../using-diffusers/kandinsky) guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet.
- Check out the [DreamBooth](dreambooth) and [LoRA](lora) training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined!


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/kandinsky.md" />

### Text-to-image
https://huggingface.co/docs/diffusers/main/training/text2image.md

# Text-to-image

> [!WARNING]
> The text-to-image script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.

Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt.

Training a model can be taxing on your hardware, but if you enable `gradient_checkpointing` and `mixed_precision`, it is possible to train a model on a single 24GB GPU. If you're training with larger batch sizes or want to train faster, it's better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with [xFormers](../optimization/xformers).

This guide will explore the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) training script to help you become familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/text_to_image
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

## Script parameters

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) and let us know if you have any questions or concerns.

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L193) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_text_to_image.py \
  --mixed_precision="fp16"
```

Some basic and important parameters include:

- `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model
- `--dataset_name`: the name of the dataset on the Hub or a local path to the dataset to train on
- `--image_column`: the name of the image column in the dataset to train on
- `--caption_column`: the name of the text column in the dataset to train on
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for some reason, you can continue from that checkpoint by adding `--resume_from_checkpoint` to your training command

### Min-SNR weighting

The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, and Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch.

Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:

```bash
accelerate launch train_text_to_image.py \
  --snr_gamma=5.0
```

You can compare the loss surfaces for different `snr_gamma` values in this [Weights and Biases](https://wandb.ai/sayakpaul/text2image-finetune-minsnr) report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets.
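As a rough illustration of the idea (not the script's exact implementation), the per-timestep weight clamps the signal-to-noise ratio at `snr_gamma` before rescaling the MSE loss:

```py
import torch
import torch.nn.functional as F

def min_snr_loss_weights(alphas_cumprod, timesteps, snr_gamma=5.0):
    # signal-to-noise ratio of each sampled timestep: alpha_bar / (1 - alpha_bar)
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1 - alpha_bar)
    # clamp the weight at snr_gamma and normalize by the SNR (epsilon-prediction case)
    return torch.minimum(snr, torch.full_like(snr, snr_gamma)) / snr

# inside the training loop, usage would look roughly like:
# weights = min_snr_loss_weights(noise_scheduler.alphas_cumprod, timesteps, args.snr_gamma)
# loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
# loss = (loss.mean(dim=list(range(1, loss.ndim))) * weights).mean()
```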

## Training script

The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L490) function. If you need to adapt the training script, this is where you'll need to make your changes.

The `train_text_to_image` script starts by [loading a scheduler](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L543) and tokenizer. You can choose to use a different scheduler here if you want:

```py
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
tokenizer = CLIPTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
)
```
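For example, you could swap in a different scheduler by loading it from the same checkpoint's scheduler config; `DDIMScheduler` below is only an illustration, and whether it's appropriate depends on your training setup:

```py
from diffusers import DDIMScheduler

# a sketch: load an alternative scheduler from the same checkpoint's scheduler config
noise_scheduler = DDIMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
```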

Then the script [loads the UNet](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L619) model:

```py
load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
model.register_to_config(**load_model.config)

model.load_state_dict(load_model.state_dict())
```

Next, the text and image columns of the dataset need to be preprocessed. The [`tokenize_captions`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L724) function handles tokenizing the inputs, and the [`train_transforms`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L742) function specifies the type of transforms to apply to the image. Both of these functions are bundled into `preprocess_train`:

```py
def preprocess_train(examples):
    images = [image.convert("RGB") for image in examples[image_column]]
    examples["pixel_values"] = [train_transforms(image) for image in images]
    examples["input_ids"] = tokenize_captions(examples)
    return examples
```
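The transform is then attached to the dataset so it runs lazily on each batch; in the script this looks roughly like:

```py
# a sketch: apply the preprocessing lazily with 🤗 Datasets (assumes `dataset` was loaded earlier in the script)
train_dataset = dataset["train"].with_transform(preprocess_train)
```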

Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L878) handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.
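For orientation, the conditioning step inside the loop boils down to encoding the tokenized captions and passing the embeddings to the UNet as cross-attention context; a rough excerpt-style sketch (variable names are illustrative, not the script's exact code):

```py
# encode the tokenized captions and condition the UNet on the text embeddings
encoder_hidden_states = text_encoder(batch["input_ids"], return_dict=False)[0]
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states, return_dict=False)[0]
```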

## Launch the script

Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

Let's train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path). If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.

> [!TIP]
> To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment variables to the path of the dataset and where to save the model to.

```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export dataset_name="lambdalabs/naruto-blip-captions"

accelerate launch --mixed_precision="fp16"  train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --enable_xformers_memory_efficient_attention \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-naruto-model" \
  --push_to_hub
```

Once training is complete, you can use your newly trained model for inference:

```py
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda")

image = pipeline(prompt="yoda").images[0]
image.save("yoda-naruto.png")
```

## Next steps

Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful:

- Learn how to [load LoRA weights](../tutorials/using_peft_for_inference) for inference if you trained your model with LoRA.
- Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the [Text-to-image](../using-diffusers/conditional_image_generation) task guide.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/text2image.md" />

### Unconditional image generation
https://huggingface.co/docs/diffusers/main/training/unconditional_training.md

# Unconditional image generation

Unconditional image generation models are not conditioned on text or images during training. They only generate images that resemble the distribution of their training data.

This guide will explore the [train_unconditional.py](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) training script to help you become familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies:

```bash
cd examples/unconditional_image_generation
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

## Script parameters

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) and let us know if you have any questions or concerns.

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L55) function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speed up training with mixed precision using the bf16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_unconditional.py \
  --mixed_precision="bf16"
```

Some basic and important parameters to specify include:

- `--dataset_name`: the name of the dataset on the Hub or a local path to the dataset to train on
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted, you can continue from that checkpoint by adding `--resume_from_checkpoint` to your training command

Bring your dataset, and let the training script handle everything else!

## Training script

The code for preprocessing the dataset and the training loop is found in the [`main()`](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L275) function. If you need to adapt the training script, this is where you'll need to make your changes.

The `train_unconditional` script [initializes a `UNet2DModel`](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L356) if you don't provide a model configuration. You can configure the UNet here if you'd like:

```py
model = UNet2DModel(
    sample_size=args.resolution,
    in_channels=3,
    out_channels=3,
    layers_per_block=2,
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",
        "AttnUpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)
```

Next, the script initializes a [scheduler](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L418) and [optimizer](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L429):

```py
# Initialize the scheduler
accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys())
if accepts_prediction_type:
    noise_scheduler = DDPMScheduler(
        num_train_timesteps=args.ddpm_num_steps,
        beta_schedule=args.ddpm_beta_schedule,
        prediction_type=args.prediction_type,
    )
else:
    noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule)

# Initialize the optimizer
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Then it [loads a dataset](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L451) and you can specify how to [preprocess](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L455) it:

```py
dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train")

augmentations = transforms.Compose(
    [
        transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
        transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)
```

Finally, the [training loop](https://github.com/huggingface/diffusers/blob/096f84b05f9514fae9f185cbec0a4d38fbad9919/examples/unconditional_image_generation/train_unconditional.py#L540) handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.
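Because the model is unconditional and operates directly on pixels, the core of one training step is compact. Here is a rough sketch, assuming the `model` and `noise_scheduler` initialized earlier in the script (names are illustrative, not the script's exact code):

```py
import torch
import torch.nn.functional as F

def training_step(clean_images, model, noise_scheduler):
    # add noise to the images directly (no VAE or text conditioning here)
    noise = torch.randn_like(clean_images)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],), device=clean_images.device
    ).long()
    noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

    # predict the noise residual and compute the loss
    model_pred = model(noisy_images, timesteps).sample
    return F.mse_loss(model_pred.float(), noise.float())
```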

## Launch the script

Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

> [!WARNING]
> A full training run takes 2 hours on 4xV100 GPUs.

<hfoptions id="launchtraining">
<hfoption id="single GPU">

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --output_dir="ddpm-ema-flowers-64" \
  --mixed_precision="fp16" \
  --push_to_hub
```

</hfoption>
<hfoption id="multi-GPU">

If you're training with more than one GPU, add the `--multi_gpu` parameter to the training command:

```bash
accelerate launch --multi_gpu train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --output_dir="ddpm-ema-flowers-64" \
  --mixed_precision="fp16" \
  --push_to_hub
```

</hfoption>
</hfoptions>

The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference (the example below uses a pretrained checkpoint; swap in your own repository id, such as `ddpm-ema-flowers-64`):

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = pipeline().images[0]
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/unconditional_training.md" />

### Latent Consistency Distillation
https://huggingface.co/docs/diffusers/main/training/lcm_distill.md

# Latent Consistency Distillation

[Latent Consistency Models (LCMs)](https://hf.co/papers/2310.04378) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying *one-stage guided distillation* to the latent space, and incorporating a *skipping-step* method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details).

If you're training on a GPU with limited vRAM, try enabling `gradient_checkpointing`, `gradient_accumulation_steps`, and `mixed_precision` to reduce memory usage and speed up training. You can reduce your memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) and the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) 8-bit optimizer.

This guide will explore the [train_lcm_distill_sd_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_sd_wds.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/consistency_distillation
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment (try enabling `torch.compile` to significantly speed up training):

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

## Script parameters

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_sd_wds.py) and let us know if you have any questions or concerns.

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L419) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_lcm_distill_sd_wds.py \
  --mixed_precision="fp16"
```

Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so this guide focuses on the parameters relevant to latent consistency distillation.

- `--pretrained_teacher_model`: the path to a pretrained latent diffusion model to use as the teacher model
- `--pretrained_vae_model_name_or_path`: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this [VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) by madebyollin, which works in fp16)
- `--w_min` and `--w_max`: the minimum and maximum guidance scale values for guidance scale sampling
- `--num_ddim_timesteps`: the number of timesteps for DDIM sampling
- `--loss_type`: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it's more robust to outliers
- `--huber_c`: the Huber loss parameter

## Training script

The training script starts by creating a dataset class - [`Text2ImageDataset`](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L141) - for preprocessing the images and creating a training dataset.

```py
def transform(example):
    image = example["image"]
    image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR)

    c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution))
    image = TF.crop(image, c_top, c_left, resolution, resolution)
    image = TF.to_tensor(image)
    image = TF.normalize(image, [0.5], [0.5])

    example["image"] = image
    return example
```

For improved performance on reading and writing large datasets stored in the cloud, this script uses the [WebDataset](https://github.com/webdataset/webdataset) format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first.

```py
processing_pipeline = [
    wds.decode("pil", handler=wds.ignore_and_continue),
    wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue),
    wds.map(filter_keys({"image", "text"})),
    wds.map(transform),
    wds.to_tuple("image", "text"),
]
```

In the [`main()`](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L768) function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training.

```py
teacher_unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision
)

unet = UNet2DConditionModel(**teacher_unet.config)
unet.load_state_dict(teacher_unet.state_dict(), strict=False)
unet.train()
```

Now you can create the [optimizer](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L979) to update the UNet parameters:

```py
optimizer = optimizer_class(
    unet.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Create the [dataset](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L994):

```py
dataset = Text2ImageDataset(
    train_shards_path_or_url=args.train_shards_path_or_url,
    num_train_examples=args.max_train_samples,
    per_gpu_batch_size=args.train_batch_size,
    global_batch_size=args.train_batch_size * accelerator.num_processes,
    num_workers=args.dataloader_num_workers,
    resolution=args.resolution,
    shuffle_buffer_size=1000,
    pin_memory=True,
    persistent_workers=True,
)
train_dataloader = dataset.train_dataloader
```

Next, you're ready to setup the [training loop](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L1049) and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise.

```py
pred_x_0 = predicted_origin(
    noise_pred,
    start_timesteps,
    noisy_model_input,
    noise_scheduler.config.prediction_type,
    alpha_schedule,
    sigma_schedule,
)

model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0
```
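The guidance-scale sampling mentioned in this step can be pictured with a small, self-contained sketch; the helper and values below are illustrative, not the script's exact implementation:

```py
import torch

def guidance_scale_embedding(w, embedding_dim=256):
    # sinusoidal embedding of the guidance scale, similar in spirit to a timestep embedding
    w = w * 1000.0
    half_dim = embedding_dim // 2
    emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim) * -emb)
    emb = w[:, None] * emb[None, :]
    return torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)

# sample one guidance scale per example in [w_min, w_max] and embed it (hypothetical values)
bsz = 4
w = (15.0 - 5.0) * torch.rand((bsz,)) + 5.0
w_embedding = guidance_scale_embedding(w, embedding_dim=256)
```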

It gets the [teacher model predictions](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L1172) and the [LCM predictions](https://github.com/huggingface/diffusers/blob/3b37488fa3280aed6a95de044d7a42ffdcb565ef/examples/consistency_distillation/train_lcm_distill_sd_wds.py#L1209) next, calculates the loss, and then backpropagates it to the LCM.

```py
if args.loss_type == "l2":
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
elif args.loss_type == "huber":
    loss = torch.mean(
        torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c
    )
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers tutorial](../using-diffusers/write_own_pipeline) which breaks down the basic pattern of the denoising process.

## Launch the script

Now you're ready to launch the training script and start distilling!

For this guide, you'll use the `--train_shards_path_or_url` to specify the path to the [Conceptual Captions 12M](https://github.com/google-research-datasets/conceptual-12m) dataset stored on the Hub [here](https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset). Set the `MODEL_DIR` environment variable to the name of the teacher model and `OUTPUT_DIR` to where you want to save the model.

```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path/to/saved/model"

accelerate launch train_lcm_distill_sd_wds.py \
    --pretrained_teacher_model=$MODEL_DIR \
    --output_dir=$OUTPUT_DIR \
    --mixed_precision=fp16 \
    --resolution=512 \
    --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \
    --max_train_steps=1000 \
    --max_train_samples=4000000 \
    --dataloader_num_workers=8 \
    --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \
    --validation_steps=200 \
    --checkpointing_steps=200 --checkpoints_total_limit=10 \
    --train_batch_size=12 \
    --gradient_checkpointing --enable_xformers_memory_efficient_attention \
    --gradient_accumulation_steps=1 \
    --use_8bit_adam \
    --resume_from_checkpoint=latest \
    --report_to=wandb \
    --seed=453645634 \
    --push_to_hub
```

Once training is complete, you can use your new LCM for inference.

```py
from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler
import torch

unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16")
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16")

pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

prompt = "sushi rolls in the form of panda heads, sushi platter"

image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0]
```

## LoRA

LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MB). Use the [train_lcm_distill_lora_sd_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py) or [train_lcm_distill_lora_sdxl_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_lora_sdxl_wds.py) script to train with LoRA.

The LoRA training script is discussed in more detail in the [LoRA training](lora) guide.

## Stable Diffusion XL

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [train_lcm_distill_sdxl_wds.py](https://github.com/huggingface/diffusers/blob/main/examples/consistency_distillation/train_lcm_distill_sdxl_wds.py) script to apply latent consistency distillation to an SDXL model.

The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.

## Next steps

Congratulations on distilling an LCM! To learn more about LCMs, the following may be helpful:

- Learn how to use [LCMs for inference](../using-diffusers/inference_with_lcm) for text-to-image, image-to-image, and with LoRA checkpoints.
- Read the [SDXL in 4 steps with Latent Consistency LoRAs](https://huggingface.co/blog/lcm_lora) blog post to learn more about SDXL LCM-LoRAs for super fast inference, quality comparisons, benchmarks, and more.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/lcm_distill.md" />

### Custom Diffusion
https://huggingface.co/docs/diffusers/main/training/custom_diffusion.md

# Custom Diffusion

[Custom Diffusion](https://huggingface.co/papers/2212.04488) is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time.

If you're training on a GPU with limited vRAM, you should try enabling xFormers with `--enable_xformers_memory_efficient_attention` for faster training with lower vRAM requirements (16GB). To save even more memory, add `--set_grads_to_none` in the training argument to set the gradients to `None` instead of zero (this option can cause some issues, so if you experience any, try removing this parameter).

This guide will explore the [train_custom_diffusion.py](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion/train_custom_diffusion.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Navigate to the example folder with the training script and install the required dependencies:

```bash
cd examples/custom_diffusion
pip install -r requirements.txt
pip install clip-retrieval
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion/train_custom_diffusion.py) and let us know if you have any questions or concerns.

## Script parameters

The training script contains all the parameters to help you customize your training run. These are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L319) function. The function comes with default values, but you can also set your own values in the training command if you'd like.

For example, to change the resolution of the input image:

```bash
accelerate launch train_custom_diffusion.py \
  --resolution=256
```

Many of the basic parameters are described in the [DreamBooth](dreambooth#script-parameters) training guide, so this guide focuses on the parameters unique to Custom Diffusion:

- `--freeze_model`: freezes the key and value parameters in the cross-attention layer; the default is `crossattn_kv`, but you can set it to `crossattn` to train all the parameters in the cross-attention layer
- `--concepts_list`: to learn multiple concepts, provide a path to a JSON file containing the concepts
- `--modifier_token`: a special word used to represent the learned concept
- `--initializer_token`: a special word used to initialize the embeddings of the `modifier_token`

### Prior preservation loss

Prior preservation loss is a method that uses a model's own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions.

Many of the parameters for prior preservation loss are described in the [DreamBooth](dreambooth#prior-preservation-loss) training guide.
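Conceptually, the instance and class (prior) batches are run through the model together and the two losses are combined; a minimal sketch of that pattern (the function and names are illustrative, not this script's exact code):

```py
import torch
import torch.nn.functional as F

def prior_preservation_loss(model_pred, target, prior_loss_weight):
    # the batch stacks instance and class (prior) images, so split the predictions in two
    model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
    target, target_prior = torch.chunk(target, 2, dim=0)

    # instance loss plus the weighted prior preservation loss
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
    prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
    return loss + prior_loss_weight * prior_loss
```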

### Regularization

Custom Diffusion trains on the target images along with a small set of real images to prevent overfitting. As you can imagine, overfitting is easy when you're only training on a few images! Download 200 real images with `clip_retrieval`. The `class_prompt` should be the same category as the target images. These images are stored in `class_data_dir`.

```bash
python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200
```

To enable regularization, add the following parameters:

- `--with_prior_preservation`: whether to use prior preservation loss
- `--prior_loss_weight`: controls the influence of the prior preservation loss on the model
- `--real_prior`: whether to use a small set of real images to prevent overfitting

```bash
accelerate launch train_custom_diffusion.py \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --class_data_dir="./real_reg/samples_cat" \
  --class_prompt="cat" \
  --real_prior=True
```

## Training script

> [!TIP]
> A lot of the code in the Custom Diffusion training script is similar to the [DreamBooth](dreambooth#training-script) script. This guide instead focuses on the code that is relevant to Custom Diffusion.

The Custom Diffusion training script has two dataset classes:

- [`CustomDiffusionDataset`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L165): preprocesses the images, class images, and prompts for training
- [`PromptDataset`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L148): prepares the prompts for generating class images

Next, the `modifier_token` is [added to the tokenizer](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L811), converted to token ids, and the token embeddings are resized to account for the new `modifier_token`. Then the `modifier_token` embeddings are initialized with the embeddings of the `initializer_token`. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts.

```py
params_to_freeze = itertools.chain(
    text_encoder.text_model.encoder.parameters(),
    text_encoder.text_model.final_layer_norm.parameters(),
    text_encoder.text_model.embeddings.position_embedding.parameters(),
)
freeze_params(params_to_freeze)
```
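For reference, the token-handling step described above typically boils down to something like this sketch (assuming a single `modifier_token` and `initializer_token`; variable names are illustrative, not the script's exact code):

```py
# add the modifier token and resize the embedding matrix to make room for it
tokenizer.add_tokens(args.modifier_token)
modifier_token_id = tokenizer.convert_tokens_to_ids(args.modifier_token)
initializer_token_id = tokenizer.convert_tokens_to_ids(args.initializer_token)
text_encoder.resize_token_embeddings(len(tokenizer))

# initialize the new token's embedding with the initializer token's embedding
token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[modifier_token_id] = token_embeds[initializer_token_id]
```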

Now you'll need to add the [Custom Diffusion weights](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L911C3-L911C3) to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block.

```py
st = unet.state_dict()
for name, _ in unet.attn_processors.items():
    cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    elif name.startswith("down_blocks"):
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    layer_name = name.split(".processor")[0]
    weights = {
        "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"],
        "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"],
    }
    if train_q_out:
        weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"]
        weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"]
        weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"]
    if cross_attention_dim is not None:
        custom_diffusion_attn_procs[name] = attention_class(
            train_kv=train_kv,
            train_q_out=train_q_out,
            hidden_size=hidden_size,
            cross_attention_dim=cross_attention_dim,
        ).to(unet.device)
        custom_diffusion_attn_procs[name].load_state_dict(weights)
    else:
        custom_diffusion_attn_procs[name] = attention_class(
            train_kv=False,
            train_q_out=False,
            hidden_size=hidden_size,
            cross_attention_dim=cross_attention_dim,
        )
del st
unet.set_attn_processor(custom_diffusion_attn_procs)
custom_diffusion_layers = AttnProcsLayers(unet.attn_processors)
```

The [optimizer](https://github.com/huggingface/diffusers/blob/84cd9e8d01adb47f046b1ee449fc76a0c32dc4e2/examples/custom_diffusion/train_custom_diffusion.py#L982) is initialized to update the cross-attention layer parameters:

```py
optimizer = optimizer_class(
    itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters())
    if args.modifier_token is not None
    else custom_diffusion_layers.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

In the [training loop](https://github.com/huggingface/diffusers/blob/84cd9e8d01adb47f046b1ee449fc76a0c32dc4e2/examples/custom_diffusion/train_custom_diffusion.py#L1048), it is important to only update the embeddings for the concept you're trying to learn. This means setting the gradients of all the other token embeddings to zero:

```py
if args.modifier_token is not None:
    if accelerator.num_processes > 1:
        grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad
    else:
        grads_text_encoder = text_encoder.get_input_embeddings().weight.grad
    index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0]
    for i in range(len(modifier_token_id[1:])):
        index_grads_to_zero = index_grads_to_zero & (
            torch.arange(len(tokenizer)) != modifier_token_id[i]
        )
    grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[
        index_grads_to_zero, :
    ].fill_(0)
```

## Launch the script

Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

In this guide, you'll download and use these example [cat images](https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip). You can also create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).

Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, `INSTANCE_DIR` to the path where you just downloaded the cat images to, and `OUTPUT_DIR` to where you want to save the model. You'll use `<new1>` as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a `pytorch_custom_diffusion_weights.bin` file to your repository.

To monitor training progress with Weights and Biases, add the `--report_to=wandb` parameter to the training command and specify a validation prompt with `--validation_prompt`. This is useful for debugging and saving intermediate results.

> [!TIP]
> If you're training on human faces, the Custom Diffusion team has found the following parameters to work well:
>
> - `--learning_rate=5e-6`
> - `--max_train_steps` can be anywhere between 1000 and 2000
> - `--freeze_model=crossattn`
> - use at least 15-20 images to train with

<hfoptions id="training-inference">
<hfoption id="single concept">

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"
export INSTANCE_DIR="./data/cat"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation \
  --real_prior \
  --prior_loss_weight=1.0 \
  --class_prompt="cat" \
  --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat"  \
  --resolution=512  \
  --train_batch_size=2  \
  --learning_rate=1e-5  \
  --lr_warmup_steps=0 \
  --max_train_steps=250 \
  --scale_lr \
  --hflip  \
  --modifier_token "<new1>" \
  --validation_prompt="<new1> cat sitting in a bucket" \
  --report_to="wandb" \
  --push_to_hub
```

</hfoption>
<hfoption id="multiple concepts">

Custom Diffusion can also learn multiple concepts if you provide a [JSON](https://github.com/adobe-research/custom-diffusion/blob/main/assets/concept_list.json) file with some details about each concept it should learn.

Run clip-retrieval to collect some real images to use for regularization:

```bash
pip install clip-retrieval
python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200
```

Then you can launch the script:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --output_dir=$OUTPUT_DIR \
  --concepts_list=./concept_list.json \
  --with_prior_preservation \
  --real_prior \
  --prior_loss_weight=1.0 \
  --resolution=512  \
  --train_batch_size=2  \
  --learning_rate=1e-5  \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --num_class_images=200 \
  --scale_lr \
  --hflip  \
  --modifier_token "<new1>+<new2>" \
  --push_to_hub
```

</hfoption>
</hfoptions>

Once training is finished, you can use your new Custom Diffusion model for inference.

<hfoptions id="training-inference">
<hfoption id="single concept">

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16,
).to("cuda")
pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")

image = pipeline(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
```

</hfoption>
<hfoption id="multiple concepts">

```py
import torch
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16,
).to("cuda")
model_id = "sayakpaul/custom-diffusion-cat-wooden-pot"
pipeline.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion(model_id, weight_name="<new1>.bin")
pipeline.load_textual_inversion(model_id, weight_name="<new2>.bin")

image = pipeline(
    "the <new1> cat sculpture in the style of a <new2> wooden pot",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("multi-subject.png")
```

</hfoption>
</hfoptions>

## Next steps

Congratulations on training a model with Custom Diffusion! 🎉 To learn more:

- Read the [Multi-Concept Customization of Text-to-Image Diffusion](https://www.cs.cmu.edu/~custom-diffusion/) blog post to learn more details about the experimental results from the Custom Diffusion team.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/custom_diffusion.md" />

### CogVideoX
https://huggingface.co/docs/diffusers/main/training/cogvideox.md

# CogVideoX

CogVideoX is a text-to-video generation model focused on creating more coherent videos aligned with a prompt. It achieves this using several methods.

- a 3D variational autoencoder that compresses videos spatially and temporally, improving compression rate and video accuracy.

- an expert transformer block to help align text and video, and a 3D full attention module for capturing and creating spatially and temporally accurate videos.

Benchmark tests across video instruction dimensions found that CogVideoX performs well on theme consistency, dynamic information, background consistency, object information, motion smoothness, color, scene, appearance style, and temporal style, but it struggles with human action, spatial relationships, and multiple objects.

Fine-tuning with Diffusers can help make up for these weaknesses.

## Data Preparation

The training script accepts data in two formats.

The first format is suited for small-scale training, while the second uses a CSV format that is more appropriate for streaming data in large-scale training. In the future, Diffusers will support the `<Video>` tag.

### Small format

This format uses two files: one containing line-separated prompts and another containing line-separated paths to the video data (the paths must be relative to the path you pass as `--instance_data_root`). Let's take a look at an example to understand this better!

Assume you've specified `--instance_data_root` as `/dataset`, and that this directory contains the files: `prompts.txt` and `videos.txt`.

The `prompts.txt` file should contain line-separated prompts:

```
A black and white animated sequence featuring a rabbit, named Rabbity Ribfried, and an anthropomorphic goat in a musical, playful environment, showcasing their evolving interaction.
A black and white animated sequence on a ship's deck features a bulldog character, named Bully Bulldoger, showcasing exaggerated facial expressions and body language. The character progresses from confident to focused, then to strained and distressed, displaying a range of emotions as it navigates challenges. The ship's interior remains static in the background, with minimalistic details such as a bell and open door. The character's dynamic movements and changing expressions drive the narrative, with no camera movement to distract from its evolving reactions and physical gestures.
...
```

The `videos.txt` file should contain line-separated paths to video files. Note that the path should be _relative_ to the `--instance_data_root` directory.

```
videos/00000.mp4
videos/00001.mp4
...
```

Overall, this is what your dataset would look like if you ran the `tree` command on the dataset root directory:

```
/dataset
├── prompts.txt
├── videos.txt
└── videos
    ├── 00000.mp4
    ├── 00001.mp4
    └── ...
```

When using this format, the `--caption_column` must be `prompts.txt` and `--video_column` must be `videos.txt`.

### Stream format

You could use a single CSV file. For the sake of this example, assume you have a `metadata.csv` file. The expected format is:

```
<CAPTION_COLUMN>,<PATH_TO_VIDEO_COLUMN>
"""A black and white animated sequence featuring a rabbit, named Rabbity Ribfried, and an anthropomorphic goat in a musical, playful environment, showcasing their evolving interaction.""","""00000.mp4"""
"""A black and white animated sequence on a ship's deck features a bulldog character, named Bully Bulldoger, showcasing exaggerated facial expressions and body language. The character progresses from confident to focused, then to strained and distressed, displaying a range of emotions as it navigates challenges. The ship's interior remains static in the background, with minimalistic details such as a bell and open door. The character's dynamic movements and changing expressions drive the narrative, with no camera movement to distract from its evolving reactions and physical gestures.""","""00001.mp4"""
...
```

In this case, the `--instance_data_root` should be the location where the videos are stored and `--dataset_name` should be either a path to local folder or a [load_dataset](https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset) compatible dataset hosted on the Hub. Assuming you have videos of Minecraft gameplay at `https://huggingface.co/datasets/my-awesome-username/minecraft-videos`, you would have to specify `my-awesome-username/minecraft-videos`.

When using this format, the `--caption_column` must be `<CAPTION_COLUMN>` and `--video_column` must be `<PATH_TO_VIDEO_COLUMN>`.

You are not strictly restricted to the CSV format. Any format works as long as the `load_dataset` method supports the file format to load a basic `<PATH_TO_VIDEO_COLUMN>` and `<CAPTION_COLUMN>`. The reason for going through these dataset organization gymnastics for loading video data is because `load_dataset` does not fully support all kinds of video formats.
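For example, a local CSV file could be loaded with 🤗 Datasets roughly like this (the file name and columns are placeholders):

```py
from datasets import load_dataset

# hypothetical metadata file containing a caption column and a video path column
dataset = load_dataset("csv", data_files="metadata.csv", split="train")
print(dataset.column_names)
```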

> [!NOTE]
> CogVideoX works best with long and descriptive LLM-augmented prompts for video generation. We recommend pre-processing your videos by first generating a summary using a VLM and then augmenting the prompts with an LLM. To generate the above captions, we use [MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6) and [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). A very barebones and no-frills example for this is available [here](https://gist.github.com/a-r-r-o-w/4dee20250e82f4e44690a02351324a4a). The official recommendation for augmenting prompts is [ChatGLM](https://huggingface.co/THUDM?search_models=chatglm) and a length of 50-100 words is considered good.

> [!NOTE]
> It is expected that your dataset is already pre-processed. If not, some basic pre-processing can be done by playing with the following parameters:
> `--height`, `--width`, `--fps`, `--max_num_frames`, `--skip_frames_start` and `--skip_frames_end`.
> Presently, all videos in your dataset should contain the same number of video frames when using a training batch size > 1.

<!-- TODO: Implement frame packing in future to address above issue. -->

## Training

You need to set up your development environment by installing the necessary requirements. The following packages are required:
- PyTorch 2.0 or above, depending on the training features you are using (the latest or nightly versions might be required for quantized/DeepSpeed training)
- `pip install diffusers transformers accelerate peft huggingface_hub` for all things modeling and training related
- `pip install datasets decord` for loading video training data
- `pip install bitsandbytes` for using 8-bit Adam or AdamW optimizers for memory-optimized training
- `pip install wandb` optionally for monitoring training logs
- `pip install deepspeed` optionally for [DeepSpeed](https://github.com/microsoft/DeepSpeed) training
- `pip install prodigyopt` optionally if you would like to use the Prodigy optimizer for training

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the installation up to date, since we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```


Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:


```bash
cd examples/cogvideo
pip install -r requirements.txt
```

And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or, for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config
write_basic_config()
```

When running `accelerate config`, enabling torch compile mode can give dramatic speedups. The PEFT library is used as a backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.

If you would like to push your model to the Hub after training is completed with a neat model card, make sure you're logged in:

```bash
hf auth login

# Alternatively, you could upload your model manually using:
# hf upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
```

Make sure your data is prepared as described in [Data Preparation](#data-preparation). When ready, you can begin training!

Assuming you are training on 50 videos of a similar concept, we have found 1500-2000 steps to work well. The official recommendation, however, is 100 videos with a total of 4000 steps. Assuming you are training on a single GPU with a `--train_batch_size` of `1`:
- 1500 steps on 50 videos would correspond to `30` training epochs
- 4000 steps on 100 videos would correspond to `40` training epochs
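
If you want to sanity-check the step/epoch math for your own dataset, a quick calculation (assuming a single GPU and no gradient accumulation) looks like this:

```python
def approx_epochs(train_steps: int, num_videos: int, train_batch_size: int = 1, grad_accum_steps: int = 1) -> float:
    # One epoch sees every video once, so steps per epoch = num_videos / (batch size * gradient accumulation)
    steps_per_epoch = num_videos / (train_batch_size * grad_accum_steps)
    return train_steps / steps_per_epoch

print(approx_epochs(1500, 50))   # 30.0
print(approx_epochs(4000, 100))  # 40.0
```

With those numbers in mind, a full single-GPU launch command looks like this: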

```bash
#!/bin/bash

GPU_IDS="0"

accelerate launch --gpu_ids $GPU_IDS examples/cogvideo/train_cogvideox_lora.py \
  --pretrained_model_name_or_path THUDM/CogVideoX-2b \
  --cache_dir <CACHE_DIR> \
  --instance_data_root <PATH_TO_WHERE_VIDEO_FILES_ARE_STORED> \
  --dataset_name my-awesome-name/my-awesome-dataset \
  --caption_column <CAPTION_COLUMN> \
  --video_column <PATH_TO_VIDEO_COLUMN> \
  --id_token <ID_TOKEN> \
  --validation_prompt "<ID_TOKEN> Spiderman swinging over buildings:::A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance" \
  --validation_prompt_separator ::: \
  --num_validation_videos 1 \
  --validation_epochs 10 \
  --seed 42 \
  --rank 64 \
  --lora_alpha 64 \
  --mixed_precision fp16 \
  --output_dir /raid/aryan/cogvideox-lora \
  --height 480 --width 720 --fps 8 --max_num_frames 49 --skip_frames_start 0 --skip_frames_end 0 \
  --train_batch_size 1 \
  --num_train_epochs 30 \
  --checkpointing_steps 1000 \
  --gradient_accumulation_steps 1 \
  --learning_rate 1e-3 \
  --lr_scheduler cosine_with_restarts \
  --lr_warmup_steps 200 \
  --lr_num_cycles 1 \
  --enable_slicing \
  --enable_tiling \
  --optimizer Adam \
  --adam_beta1 0.9 \
  --adam_beta2 0.95 \
  --max_grad_norm 1.0 \
  --report_to wandb
```

To better track our training experiments, we're using the following flags in the command above:
* `--report_to wandb` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
* `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.

Setting the `<ID_TOKEN>` is not necessary. From some limited experimentation, we found it works better (as it resembles [Dreambooth](https://huggingface.co/docs/diffusers/en/training/dreambooth) training) than without. When provided, the `<ID_TOKEN>` is appended to the beginning of each prompt. So, if your `<ID_TOKEN>` was `"DISNEY"` and your prompt was `"Spiderman swinging over buildings"`, the effective prompt used in training would be `"DISNEY Spiderman swinging over buildings"`. When not provided, you would either be training without any additional token or could augment your dataset to apply the token where you wish before starting the training.
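
If you prefer to bake the token into your captions instead of passing `--id_token`, a small preprocessing pass over `prompts.txt` is enough. This is an illustrative sketch; the file names follow the folder layout from the data preparation section and the token is just an example.

```python
# Illustrative sketch: prepend an identifier token to every caption in prompts.txt
id_token = "DISNEY"  # example token

with open("prompts.txt") as f:
    prompts = [line.strip() for line in f if line.strip()]

with open("prompts_with_token.txt", "w") as f:
    for prompt in prompts:
        f.write(f"{id_token} {prompt}\n")
```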

> [!NOTE]
> You can pass `--use_8bit_adam` to reduce the memory requirements of training.

> [!IMPORTANT]
> The following settings have been tested at the time of adding CogVideoX LoRA training support:
> - Our testing was primarily done on CogVideoX-2b. We will work on CogVideoX-5b and CogVideoX-5b-I2V soon
> - One dataset comprised of 70 training videos of resolutions `200 x 480 x 720` (F x H x W). From this, by using frame skipping in data preprocessing, we created two smaller 49-frame and 16-frame datasets for faster experimentation and because the maximum limit recommended by the CogVideoX team is 49 frames. Out of the 70 videos, we created three groups of 10, 25 and 50 videos. All videos were similar in nature of the concept being trained.
> - 25+ videos worked best for training new concepts and styles.
> - We found that it is better to train with an identifier token that can be specified as `--id_token`. This is similar to Dreambooth-like training but normal finetuning without such a token works too.
> - The trained concept seemed to work decently well when combined with completely unrelated prompts. We expect even better results if CogVideoX-5B is finetuned.
> - The original repository uses a `lora_alpha` of `1`. We found this not suitable in many runs, possibly due to differences in modeling backends and training settings. Our recommendation is to set `lora_alpha` to either `rank` or `rank // 2`.
> - If you're training on data whose captions generate bad results with the original model, a `rank` of 64 and above is good and also the recommendation by the team behind CogVideoX. If the generations are already moderately good on your training captions, a `rank` of 16/32 should work. We found that setting the rank too low, say `4`, is not ideal and doesn't produce promising results.
> - The authors of CogVideoX recommend 4000 training steps and 100 training videos overall to achieve the best result. While that might yield the best results, we found from our limited experimentation that 2000 steps and 25 videos could also be sufficient.
> - When using the Prodigy optimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From our very limited testing, we found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `--prodigy_safeguard_warmup` and `--prodigy_decouple`.
> - The recommended learning rate by the CogVideoX authors and from our experimentation with Adam/AdamW is between `1e-3` and `1e-4` for a dataset of 25+ videos.
>
> Note that our testing is not exhaustive due to limited time for exploration. Our recommendation would be to play around with the different knobs and dials to find the best settings for your data.

<!-- TODO: Test finetuning with CogVideoX-5b and CogVideoX-5b-I2V and update scripts accordingly -->

## Inference

Once you have trained a LoRA model, inference can be done by simply loading the LoRA weights into the `CogVideoXPipeline`.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
# pipe.load_lora_weights("/path/to/lora/weights", adapter_name="cogvideox-lora") # Or,
pipe.load_lora_weights("my-awesome-hf-username/my-awesome-lora-name", adapter_name="cogvideox-lora") # If loading from the HF Hub
pipe.to("cuda")

# Assuming lora_alpha=32 and rank=64 for training. If different, set accordingly
pipe.set_adapters(["cogvideox-lora"], [32 / 64])

prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion."
frames = pipe(prompt, guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```


## Reduce memory usage

When testing with the diffusers library, all optimizations offered by the library were enabled. This scheme has not been tested for actual memory usage on devices outside of **NVIDIA A100 / H100** architectures. Generally, it can be adapted to all **NVIDIA Ampere architecture** and above devices. If the optimizations are disabled, memory consumption multiplies, with peak memory usage about 3 times the value in the table; however, speed increases by about 3-4 times. You can selectively enable or disable some of these optimizations:

```python
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
```

+ For multi-GPU inference, the `enable_sequential_cpu_offload()` optimization needs to be disabled.
+ Using INT8 models will slow down inference; this is done to accommodate lower-memory GPUs while keeping the loss in video quality minimal.
+ The CogVideoX-2B model was trained in `FP16` precision, and all CogVideoX-5B models were trained in `BF16` precision.
  We recommend using the precision in which the model was trained for inference.
+ [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be
  used to quantize the text encoder, transformer, and VAE modules to reduce the memory requirements of CogVideoX. This
  allows the model to run on a free T4 Colab or GPUs with less memory! Also, note that TorchAO quantization is fully
  compatible with `torch.compile`, which can significantly improve inference speed. FP8 precision must be used on
  devices with NVIDIA H100 and above, and requires source installation of the `torch`, `torchao`, `diffusers`, and `accelerate`
  Python packages. CUDA 12.4 is recommended. A minimal quantization sketch is shown after this list.
+ The inference speed tests also used the above memory optimization scheme. Without memory optimization, inference speed
  increases by about 10%. Only the `diffusers` version of the model supports quantization.
+ The model only supports English input; prompts in other languages can be translated into English with a large language model before use.
+ The memory usage of model fine-tuning is tested in an `8 * H100` environment, and the program automatically
  uses `Zero 2` optimization. If a specific number of GPUs is marked in the table, that number or more GPUs must be used
  for fine-tuning.
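
As an illustration of the TorchAO route mentioned in the list above, the transformer can be quantized before building the pipeline. This is a hedged sketch that assumes a recent `diffusers` release with `TorchAoConfig` support and `torchao` installed; the quantization type (`int8wo` here) and the actual memory savings depend on your setup.

```python
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel, TorchAoConfig

# Sketch: load the transformer with int8 weight-only quantization via TorchAO
quant_config = TorchAoConfig("int8wo")
transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-2b",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b", transformer=transformer, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
```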


 | **Attribute**                        | **CogVideoX-2B**                                                       | **CogVideoX-5B**                                                       |
| ------------------------------------ | ---------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| **Model Name**                       | CogVideoX-2B                                                           | CogVideoX-5B                                                           |
| **Inference Precision**              | FP16* (Recommended), BF16, FP32, FP8*, INT8; INT4 not supported         | BF16 (Recommended), FP16, FP32, FP8*, INT8; INT4 not supported         |
| **Single GPU Inference VRAM**        | FP16: Using diffusers 12.5GB* INT8: Using diffusers with torchao 7.8GB* | BF16: Using diffusers 20.7GB* INT8: Using diffusers with torchao 11.4GB* |
| **Multi GPU Inference VRAM**         | FP16: Using diffusers 10GB*                                             | BF16: Using diffusers 15GB*                                             |
| **Inference Speed**                  | Single A100: ~90 seconds, Single H100: ~45 seconds                      | Single A100: ~180 seconds, Single H100: ~90 seconds                     |
| **Fine-tuning Precision**            | FP16                                                                   | BF16                                                                   |
| **Fine-tuning VRAM Consumption**     | 47 GB (bs=1, LORA) 61 GB (bs=2, LORA) 62GB (bs=1, SFT)                 | 63 GB (bs=1, LORA) 80 GB (bs=2, LORA) 75GB (bs=1, SFT)                 |


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/cogvideox.md" />

### Reinforcement learning training with DDPO
https://huggingface.co/docs/diffusers/main/training/ddpo.md

# Reinforcement learning training with DDPO

You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in [Training Diffusion Models with Reinforcement Learning](https://huggingface.co/papers/2305.13301), which is implemented in 🤗 TRL with the `DDPOTrainer`.

For more information, check out the `DDPOTrainer` API reference and the [Finetune Stable Diffusion Models with DDPO via TRL](https://huggingface.co/blog/trl-ddpo) blog post.
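
A minimal sketch of what this looks like with TRL is shown below. It assumes a `trl` version that ships `DDPOTrainer`, and uses a toy brightness reward purely for illustration; in practice you would plug in an aesthetic or CLIP-based scorer as in the TRL examples.

```python
from trl import DDPOConfig, DDPOTrainer, DefaultDDPOStableDiffusionPipeline

def prompt_fn():
    # Each call returns a prompt and a metadata dict
    return "a photo of a corgi", {}

def reward_fn(images, prompts, metadata):
    # Toy reward: mean pixel brightness of the generated images (replace with a real scorer)
    rewards = images.float().mean(dim=(1, 2, 3))
    return rewards, {}

config = DDPOConfig(num_epochs=1, sample_batch_size=2, train_batch_size=1)
pipeline = DefaultDDPOStableDiffusionPipeline("stable-diffusion-v1-5/stable-diffusion-v1-5")
trainer = DDPOTrainer(config, reward_fn, prompt_fn, pipeline)
trainer.train()
```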

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/ddpo.md" />

### Create a dataset for training
https://huggingface.co/docs/diffusers/main/training/create_dataset.md

# Create a dataset for training

There are many datasets on the [Hub](https://huggingface.co/datasets?task_categories=task_categories:text-to-image&sort=downloads) to train a model on, but if you can't find one you're interested in or want to use your own, you can create a dataset with the 🤗 [Datasets](https://huggingface.co/docs/datasets) library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation.

This guide will show you two ways to create a dataset to finetune on:

- provide a folder of images to the `--train_data_dir` argument
- upload a dataset to the Hub and pass the dataset repository id to the `--dataset_name` argument

> [!TIP]
> 💡 Learn more about how to create an image dataset for training in the [Create an image dataset](https://huggingface.co/docs/datasets/image_dataset) guide.

## Provide a dataset as a folder

For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the [`ImageFolder`](https://huggingface.co/docs/datasets/en/image_dataset#imagefolder) builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like:

```bash
data_dir/xxx.png
data_dir/xxy.png
data_dir/[...]/xxz.png
```

Pass the path to the dataset directory to the `--train_data_dir` argument, and then you can start training:

```bash
accelerate launch train_unconditional.py \
    --train_data_dir <path-to-train-directory> \
    <other-arguments>
```

## Upload your data to the Hub

> [!TIP]
> 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the [Image search with 🤗 Datasets](https://huggingface.co/blog/image-search-datasets) post.

Start by creating a dataset with the [`ImageFolder`](https://huggingface.co/docs/datasets/image_load#imagefolder) feature, which creates an `image` column containing the PIL-encoded images.

You can use the `data_dir` or `data_files` parameters to specify the location of the dataset. The `data_files` parameter supports mapping specific files to dataset splits like `train` or `test`:

```python
from datasets import load_dataset

# example 1: local folder
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")

# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset("imagefolder", data_files="path_to_zip_file")

# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset(
    "imagefolder",
    data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip",
)

# example 4: providing several splits
dataset = load_dataset(
    "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]}
)
```

Then use the [push_to_hub](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.push_to_hub) method to upload the dataset to the Hub:

```python
# assuming you have run the hf auth login command in a terminal
dataset.push_to_hub("name_of_your_dataset")

# if you want to push to a private repo, simply pass private=True:
dataset.push_to_hub("name_of_your_dataset", private=True)
```

Now the dataset is available for training by passing the dataset name to the `--dataset_name` argument:

```bash
accelerate launch --mixed_precision="fp16"  train_text_to_image.py \
  --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
  --dataset_name="name_of_your_dataset" \
  <other-arguments>
```

## Next steps

Now that you've created a dataset, you can plug it into the `train_data_dir` (if your dataset is local) or `dataset_name` (if your dataset is on the Hub) arguments of a training script.

For your next steps, feel free to try and use your dataset to train a model for [unconditional generation](unconditional_training) or [text-to-image generation](text2image)!


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/create_dataset.md" />

### InstructPix2Pix
https://huggingface.co/docs/diffusers/main/training/instructpix2pix.md

# InstructPix2Pix

[InstructPix2Pix](https://hf.co/papers/2211.09800) is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be "turn the clouds rainy" and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image.

This guide will explore the [train_instruct_pix2pix.py](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py) training script to help you become familiar with it, and how you can adapt it for your own use case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/instruct_pix2pix
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py) and let us know if you have any questions or concerns.

## Script parameters

The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L65) function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you'd like.

For example, to increase the resolution of the input image:

```bash
accelerate launch train_instruct_pix2pix.py \
  --resolution=512 \
```

Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix (an example combining them follows the list):

- `--original_image_column`: the original image before the edits are made
- `--edited_image_column`: the image after the edits are made
- `--edit_prompt_column`: the instructions to edit the image
- `--conditioning_dropout_prob`: the dropout probability for the edited image and edit prompts during training which enables classifier-free guidance (CFG) for one or both conditioning inputs
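
If your own dataset uses different column names, these flags are passed alongside the usual arguments. The sketch below uses hypothetical column names; `--conditioning_dropout_prob=0.05` matches the value used in the launch command later in this guide.

```bash
# Sketch: map InstructPix2Pix's expected columns to a custom dataset (column names are hypothetical)
accelerate launch train_instruct_pix2pix.py \
  --dataset_name=my-user/my-editing-dataset \
  --original_image_column="input_image" \
  --edited_image_column="edited_image" \
  --edit_prompt_column="edit_prompt" \
  --conditioning_dropout_prob=0.05
```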

## Training script

The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L374) function. This is where you'll make your changes to the training script to adapt it for your own use-case.

As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the InstructPix2Pix relevant parts of the script.

The script begins by modifying the [number of input channels](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L445) in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:

```py
in_channels = 8
out_channels = unet.conv_in.out_channels
unet.register_to_config(in_channels=in_channels)

with torch.no_grad():
    new_conv_in = nn.Conv2d(
        in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
    )
    new_conv_in.weight.zero_()
    new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
    unet.conv_in = new_conv_in
```

These UNet parameters are [updated](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L545C1-L551C6) by the optimizer:

```py
optimizer = optimizer_cls(
    unet.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Next, the edited images and edit instructions are [preprocessed](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L624) and [tokenized](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L610C24-L610C24). It is important that the same image transformations are applied to the original and edited images.

```py
def preprocess_train(examples):
    preprocessed_images = preprocess_images(examples)

    original_images, edited_images = preprocessed_images.chunk(2)
    original_images = original_images.reshape(-1, 3, args.resolution, args.resolution)
    edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution)

    examples["original_pixel_values"] = original_images
    examples["edited_pixel_values"] = edited_images

    captions = list(examples[edit_prompt_column])
    examples["input_ids"] = tokenize_captions(captions)
    return examples
```

Finally, in the [training loop](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L730), it starts by encoding the edited images into latent space:

```py
latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor
```

Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image.

```py
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode()

if args.conditioning_dropout_prob is not None:
    random_p = torch.rand(bsz, device=latents.device, generator=generator)
    prompt_mask = random_p < 2 * args.conditioning_dropout_prob
    prompt_mask = prompt_mask.reshape(bsz, 1, 1)
    null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0]
    encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states)

    image_mask_dtype = original_image_embeds.dtype
    image_mask = 1 - (
        (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype)
        * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype)
    )
    image_mask = image_mask.reshape(bsz, 1, 1, 1)
    original_image_embeds = image_mask * original_image_embeds
```

That's pretty much it! Aside from the differences described here, the rest of the script is very similar to the [Text-to-image](text2image#training-script) training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

Once you're happy with the changes to your script or if you're okay with the default configuration, you're ready to launch the training script! 🚀

This guide uses the [fusing/instructpix2pix-1000-samples](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples) dataset, which is a smaller version of the [original dataset](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered). You can also create and use your own dataset if you'd like (see the [Create a dataset for training](create_dataset) guide).

Set the `MODEL_NAME` environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the `DATASET_ID` to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository.
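
For example (the model id is an assumption; any Stable Diffusion checkpoint supported by the script works):

```bash
# Example values; adjust the model id and dataset to your setup
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export DATASET_ID="fusing/instructpix2pix-1000-samples"
```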

> [!TIP]
> For better results, try longer training runs with a larger dataset. We've only tested this training script on a smaller-scale dataset.
>
> <br>
>
> To monitor training progress with Weights and Biases, add the `--report_to=wandb` parameter to the training command and specify a validation image with `--val_image_url` and a validation prompt with `--validation_prompt`. This can be really useful for debugging the model.

If you’re training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.

```bash
accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --dataset_name=$DATASET_ID \
    --enable_xformers_memory_efficient_attention \
    --resolution=256 \
    --random_flip \
    --train_batch_size=4 \
    --gradient_accumulation_steps=4 \
    --gradient_checkpointing \
    --max_train_steps=15000 \
    --checkpointing_steps=5000 \
    --checkpoints_total_limit=1 \
    --learning_rate=5e-05 \
    --max_grad_norm=1 \
    --lr_warmup_steps=0 \
    --conditioning_dropout_prob=0.05 \
    --mixed_precision=fp16 \
    --seed=42 \
    --push_to_hub
```

After training is finished, you can use your new InstructPix2Pix for inference:

```py
import PIL
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda")
generator = torch.Generator("cuda").manual_seed(0)

image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png")
prompt = "add some ducks to the lake"
num_inference_steps = 20
image_guidance_scale = 1.5
guidance_scale = 10

edited_image = pipeline(
   prompt,
   image=image,
   num_inference_steps=num_inference_steps,
   image_guidance_scale=image_guidance_scale,
   guidance_scale=guidance_scale,
   generator=generator,
).images[0]
edited_image.save("edited_image.png")
```

You should experiment with different `num_inference_steps`, `image_guidance_scale`, and `guidance_scale` values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image.

## Stable Diffusion XL

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [`train_instruct_pix2pix_sdxl.py`](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix_sdxl.py) script to train an SDXL model to follow image editing instructions.

The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.

## Next steps

Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to:

- Read the [Instruction-tuning Stable Diffusion with InstructPix2Pix](https://huggingface.co/blog/instruction-tuning-sd) blog post to learn more about some experiments we've done with InstructPix2Pix, dataset preparation, and results for different instructions.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/instructpix2pix.md" />

### Wuerstchen
https://huggingface.co/docs/diffusers/main/training/wuerstchen.md

# Wuerstchen

The [Wuerstchen](https://hf.co/papers/2306.00637) model drastically reduces computational costs by compressing the latent space by 42x without compromising image quality, which also accelerates inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image.

To fit the prior model into GPU memory and to speed up training, try enabling `gradient_accumulation_steps`, `gradient_checkpointing`, and `mixed_precision`.
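
For example, these flags can be added to the training command shown later in this guide (the values are illustrative):

```bash
accelerate launch train_text_to_image_prior.py \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --mixed_precision="fp16"
```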

This guide explores the [train_text_to_image_prior.py](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_prior.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/wuerstchen/text_to_image
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the [script](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_prior.py) in detail. If you're interested in learning more, feel free to read through the script and let us know if you have any questions or concerns.

## Script parameters

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L192) function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speedup training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_text_to_image_prior.py \
  --mixed_precision="fp16"
```

Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so let's dive right into the Wuerstchen training script!

## Training script

The training script is also similar to the [Text-to-image](text2image#training-script) training guide, but it's been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script.

The [`main()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L441) function starts by initializing the image encoder - an [EfficientNet](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/modeling_efficient_net_encoder.py) - in addition to the usual scheduler and tokenizer.

```py
with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
    pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt")
    state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu")
    image_encoder = EfficientNetEncoder()
    image_encoder.load_state_dict(state_dict["effnet_state_dict"])
    image_encoder.eval()
```

You'll also load the `WuerstchenPrior` model for optimization.

```py
prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior")

optimizer = optimizer_cls(
    prior.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Next, you'll apply some [transforms](https://github.com/huggingface/diffusers/blob/65ef7a0c5c594b4f84092e328fbdd73183613b30/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L656) to the images and [tokenize](https://github.com/huggingface/diffusers/blob/65ef7a0c5c594b4f84092e328fbdd73183613b30/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L637) the captions:

```py
def preprocess_train(examples):
    images = [image.convert("RGB") for image in examples[image_column]]
    examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images]
    examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples)
    return examples
```

Finally, the [training loop](https://github.com/huggingface/diffusers/blob/65ef7a0c5c594b4f84092e328fbdd73183613b30/examples/wuerstchen/text_to_image/train_text_to_image_prior.py#L656) handles compressing the images to latent space with the `EfficientNetEncoder`, adding noise to the latents, and predicting the noise residual with the `WuerstchenPrior` model.

```py
pred_noise = prior(noisy_latents, timesteps, prompt_embeds)
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

Set the `DATASET_NAME` environment variable to the dataset name from the Hub. This guide uses the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset, but you can create and train on your own datasets as well (see the [Create a dataset for training](create_dataset) guide).

> [!TIP]
> To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You’ll also need to add the `--validation_prompt` to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.

```bash
export DATASET_NAME="lambdalabs/naruto-blip-captions"

accelerate launch  train_text_to_image_prior.py \
  --mixed_precision="fp16" \
  --dataset_name=$DATASET_NAME \
  --resolution=768 \
  --train_batch_size=4 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --dataloader_num_workers=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --checkpoints_total_limit=3 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --validation_prompts="A robot naruto, 4k photo" \
  --report_to="wandb" \
  --push_to_hub \
  --output_dir="wuerstchen-prior-naruto-model"
```

Once training is complete, you can use your newly trained model for inference!

```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda")

caption = "A cute bird naruto holding a shield"
images = pipeline(
    caption,
    width=1024,
    height=1536,
    prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    prior_guidance_scale=4.0,
    num_images_per_prompt=2,
).images
```

## Next steps

Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful:

- Take a look at the [Wuerstchen](../api/pipelines/wuerstchen#text-to-image-generation) API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/wuerstchen.md" />

### Textual Inversion
https://huggingface.co/docs/diffusers/main/training/text_inversion.md

# Textual Inversion

[Textual Inversion](https://hf.co/papers/2208.01618) is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide.

If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing` and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers).
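
For example, assuming the script exposes the flags mentioned above (the xFormers flag name is an assumption; the full command later in this guide shows where the rest fit):

```bash
accelerate launch textual_inversion.py \
  --gradient_checkpointing \
  --mixed_precision="fp16" \
  --enable_xformers_memory_efficient_attention
```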

This guide will explore the [textual_inversion.py](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Navigate to the example folder with the training script and install the required dependencies for the script you're using:

```bash
cd examples/textual_inversion
pip install -r requirements.txt
```
> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py) and let us know if you have any questions or concerns.

## Script parameters

The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/839c2a5ece0af4e75530cb520d77bc7ed8acf474/examples/textual_inversion/textual_inversion.py#L176) function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you'd like.

For example, to increase the number of gradient accumulation steps above the default value of 1:

```bash
accelerate launch textual_inversion.py \
  --gradient_accumulation_steps=4
```

Some other basic and important parameters to specify include (a combined example is sketched after the list):

- `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model
- `--train_data_dir`: path to a folder containing the training dataset (example images)
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for any reason, you can continue from that checkpoint by adding `--resume_from_checkpoint` to your training command
- `--num_vectors`: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs
- `--placeholder_token`: the special word to tie the learned embeddings to (you must use the word in your prompt for inference)
- `--initializer_token`: a single word that roughly describes the object or style you're trying to train on
- `--learnable_property`: whether you're training the model to learn a new "style" (for example, Van Gogh's painting style) or "object" (for example, your dog)
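
Putting a few of these together, a sketch of how they might appear in a command (the values are illustrative; the full command is shown in the launch section below):

```bash
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
  --train_data_dir="./cat" \
  --placeholder_token="<cat-toy>" \
  --initializer_token="toy" \
  --learnable_property="object" \
  --num_vectors=2 \
  --output_dir="textual_inversion_cat"
```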

## Training script

Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, [`TextualInversionDataset`](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L487) for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify `TextualInversionDataset`.

Next, you'll find the dataset preprocessing code and training loop in the [`main()`](https://github.com/huggingface/diffusers/blob/839c2a5ece0af4e75530cb520d77bc7ed8acf474/examples/textual_inversion/textual_inversion.py#L573) function.

The script starts by loading the [tokenizer](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L616), [scheduler and model](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L622):

```py
# Load tokenizer
if args.tokenizer_name:
    tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
elif args.pretrained_model_name_or_path:
    tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")

# Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
text_encoder = CLIPTextModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)
```

Next, the special [placeholder token](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L632) is added to the tokenizer, and the token embeddings are resized to account for the new token.

Then, the script [creates a dataset](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L716) from the `TextualInversionDataset`:

```py
train_dataset = TextualInversionDataset(
    data_root=args.train_data_dir,
    tokenizer=tokenizer,
    size=args.resolution,
    placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))),
    repeats=args.repeats,
    learnable_property=args.learnable_property,
    center_crop=args.center_crop,
    set="train",
)
train_dataloader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers
)
```

Finally, the [training loop](https://github.com/huggingface/diffusers/blob/b81c69e489aad3a0ba73798c459a33990dc4379c/examples/textual_inversion/textual_inversion.py#L784) handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token.

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

For this guide, you'll download some images of a [cat toy](https://huggingface.co/datasets/diffusers/cat_toy_example) and store them in a directory. But remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).

```py
from huggingface_hub import snapshot_download

local_dir = "./cat"
snapshot_download(
    "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes"
)
```

Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, and `DATA_DIR` to the path where you just downloaded the cat images. The script creates and saves the following files to your repository:

- `learned_embeds.bin`: the learned embedding vectors corresponding to your example images
- `token_identifier.txt`: the special placeholder token
- `type_of_concept.txt`: the type of concept you're training on (either "object" or "style")

> [!WARNING]
> A full training run takes ~1 hour on a single V100 GPU.

One more thing before you launch the script. If you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command:

```bash
--validation_prompt="A <cat-toy> train"
--num_validation_images=4
--validation_steps=100
```

```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export DATA_DIR="./cat"

accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" \
  --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 \
  --scale_lr \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --output_dir="textual_inversion_cat" \
  --push_to_hub
```

After training is complete, you can use your newly trained model for inference like:

```py
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipeline("A <cat-toy> train", num_inference_steps=50).images[0]
image.save("cat-train.png")
```

## Next steps

Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful:

- Learn how to [load Textual Inversion embeddings](../using-diffusers/textual_inversion_inference) and also use them as negative embeddings.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/text_inversion.md" />

### T2I-Adapter
https://huggingface.co/docs/diffusers/main/training/t2i_adapters.md

# T2I-Adapter

[T2I-Adapter](https://hf.co/papers/2302.08453) is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training it.

The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model.

This guide will explore the [train_t2i_adapter_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/train_t2i_adapter_sdxl.py) training script to help you become familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/t2i_adapter
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To setup a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/train_t2i_adapter_sdxl.py) and let us know if you have any questions or concerns.

## Script parameters

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L233) function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to activate gradient accumulation, add the `--gradient_accumulation_steps` parameter to the training command:

```bash
accelerate launch train_t2i_adapter_sdxl.py \
  --gradient_accumulation_steps=4
```

Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the relevant T2I-Adapter parameters (an example combining them follows the list):

- `--pretrained_vae_model_name_or_path`: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better [VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)
- `--crops_coords_top_left_h` and `--crops_coords_top_left_w`: height and width coordinates to include in SDXL's crop coordinate embeddings
- `--conditioning_image_column`: the column of the conditioning images in the dataset
- `--proportion_empty_prompts`: the proportion of image prompts to replace with empty strings
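
For example, the VAE override and dataset columns might be combined like this (a hedged sketch; the column name and proportion are illustrative):

```bash
accelerate launch train_t2i_adapter_sdxl.py \
  --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
  --conditioning_image_column="conditioning_image" \
  --proportion_empty_prompts=0.1
```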

## Training script

As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script.

The training script begins by preparing the dataset. This includes [tokenizing](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L674) the prompt and [applying transforms](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L714) to the images and conditioning images.

```py
conditioning_image_transforms = transforms.Compose(
    [
        transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(args.resolution),
        transforms.ToTensor(),
    ]
)
```

Within the [`main()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L770) function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized:

```py
if args.adapter_model_name_or_path:
    logger.info("Loading existing adapter weights.")
    t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path)
else:
    logger.info("Initializing t2iadapter weights.")
    t2iadapter = T2IAdapter(
        in_channels=3,
        channels=(320, 640, 1280, 1280),
        num_res_blocks=2,
        downscale_factor=16,
        adapter_type="full_adapter_xl",
    )
```

The [optimizer](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L952) is initialized for the T2I-Adapter parameters:

```py
params_to_optimize = t2iadapter.parameters()
optimizer = optimizer_class(
    params_to_optimize,
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Lastly, in the [training loop](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/t2i_adapter/train_t2i_adapter_sdxl.py#L1086), the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual:

```py
t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype)
down_block_additional_residuals = t2iadapter(t2iadapter_image)
down_block_additional_residuals = [
    sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals
]

model_pred = unet(
    inp_noisy_latents,
    timesteps,
    encoder_hidden_states=batch["prompt_ids"],
    added_cond_kwargs=batch["unet_added_conditions"],
    down_block_additional_residuals=down_block_additional_residuals,
).sample
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

Now you’re ready to launch the training script! 🚀

For this example training, you'll use the [fusing/fill50k](https://huggingface.co/datasets/fusing/fill50k) dataset. You can also create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).

Set the environment variable `MODEL_DIR` to a model id on the Hub or a path to a local model and `OUTPUT_DIR` to where you want to save the model.

Download the following images to condition your training with:

```bash
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png
```

> [!TIP]
> To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You'll also need to add the `--validation_image`, `--validation_prompt`, and `--validation_steps` to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.

```bash
export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="path/to/save/model"

accelerate launch train_t2i_adapter_sdxl.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --mixed_precision="fp16" \
 --resolution=1024 \
 --learning_rate=1e-5 \
 --max_train_steps=15000 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --validation_steps=100 \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4 \
 --report_to="wandb" \
 --seed=42 \
 --push_to_hub
```

Once training is complete, you can use your T2I-Adapter for inference:

```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler
from diffusers.utils import load_image
import torch

adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16)
pipeline = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16
)

pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_xformers_memory_efficient_attention()
pipeline.enable_model_cpu_offload()

control_image = load_image("./conditioning_image_1.png")
prompt = "pale golden rod circle with old lace background"

generator = torch.manual_seed(0)
image = pipeline(
    prompt, image=control_image, generator=generator
).images[0]
image.save("./output.png")
```

## Next steps

Congratulations on training a T2I-Adapter model! 🎉 To learn more:

- Read the [Efficient Controllable Generation for SDXL with T2I-Adapters](https://huggingface.co/blog/t2i-sdxl-adapters) blog post to learn more details about the experimental results from the T2I-Adapter team.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/t2i_adapters.md" />

### Stable Diffusion XL
https://huggingface.co/docs/diffusers/main/training/sdxl.md

# Stable Diffusion XL

> [!WARNING]
> This script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.

[Stable Diffusion XL (SDXL)](https://hf.co/papers/2307.01952) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images.

SDXL's UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speed up training, try enabling `gradient_checkpointing`, `mixed_precision`, and `gradient_accumulation_steps`. You can reduce your memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) and using [bitsandbytes'](https://github.com/TimDettmers/bitsandbytes) 8-bit optimizer.
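For example, these memory-saving options can be combined in a single command; the values are illustrative, and the full training command later in this guide uses the same flags:

```bash
accelerate launch train_text_to_image_sdxl.py \
  --gradient_checkpointing \
  --mixed_precision="fp16" \
  --gradient_accumulation_steps=4 \
  --enable_xformers_memory_efficient_attention \
  --use_8bit_adam
```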

This guide will explore the [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) training script to help you become more familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/text_to_image
pip install -r requirements_sdxl.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To set up a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

## Script parameters

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) and let us know if you have any questions or concerns.

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L129) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speed up training with mixed precision using the bf16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_text_to_image_sdxl.py \
  --mixed_precision="bf16"
```

Most of the parameters are identical to those in the [Text-to-image](text2image#script-parameters) training guide, so this guide focuses on the parameters that are relevant to training SDXL (an example using the timestep bias options follows the list):

- `--pretrained_vae_model_name_or_path`: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better [VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)
- `--proportion_empty_prompts`: the proportion of image prompts to replace with empty strings
- `--timestep_bias_strategy`: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details
- `--timestep_bias_multiplier`: the weight of the bias to apply to the timestep
- `--timestep_bias_begin`: the timestep to begin applying the bias
- `--timestep_bias_end`: the timestep to end applying the bias
- `--timestep_bias_portion`: the proportion of timesteps to apply the bias to
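For example, to bias training toward the later (noisier) timesteps, the options above could be combined like this (the strategy name and values are illustrative; check `parse_args()` for the accepted choices):

```bash
accelerate launch train_text_to_image_sdxl.py \
  --timestep_bias_strategy="later" \
  --timestep_bias_multiplier=2.0 \
  --timestep_bias_portion=0.25
```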

### Min-SNR weighting

The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either `epsilon` (noise) or `v_prediction`, and Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch.

Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:

```bash
accelerate launch train_text_to_image_sdxl.py \
  --snr_gamma=5.0
```

## Training script

The training script is also similar to the [Text-to-image](text2image#training-script) training guide, but it's been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script.

It starts by creating functions to [tokenize the prompts](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L478) to calculate the prompt embeddings, and to compute the image embeddings with the [VAE](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L519). Next, you'll need a function to [generate the timestep weights](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L531) depending on the number of timesteps and the timestep bias strategy to apply.

Within the [`main()`](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L572) function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each:

```py
tokenizer_one = AutoTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False
)
tokenizer_two = AutoTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False
)

text_encoder_cls_one = import_model_class_from_model_name_or_path(
    args.pretrained_model_name_or_path, args.revision
)
text_encoder_cls_two = import_model_class_from_model_name_or_path(
    args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
)
```

The [prompt and image embeddings](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L857) are computed first and kept in memory, which isn't typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this [PR](https://github.com/huggingface/diffusers/pull/4505) for more discussion about this topic).

```py
text_encoders = [text_encoder_one, text_encoder_two]
tokenizers = [tokenizer_one, tokenizer_two]
compute_embeddings_fn = functools.partial(
    encode_prompt,
    text_encoders=text_encoders,
    tokenizers=tokenizers,
    proportion_empty_prompts=args.proportion_empty_prompts,
    caption_column=args.caption_column,
)

train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
train_dataset = train_dataset.map(
    compute_vae_encodings_fn,
    batched=True,
    batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps,
    new_fingerprint=new_fingerprint_for_vae,
)
```
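As mentioned above, if the dataset is too large to keep the pre-computed embeddings in memory, one option (a minimal sketch, not part of the script; the path is illustrative) is to persist the mapped dataset to disk with 🤗 Datasets and reload it in later runs instead of recomputing:

```py
from datasets import load_from_disk

# Save the dataset containing the pre-computed embeddings once...
train_dataset.save_to_disk("precomputed-sdxl-train-dataset")

# ...then reload it (memory-mapped from disk) instead of recomputing the embeddings.
train_dataset = load_from_disk("precomputed-sdxl-train-dataset")
```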

After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory:

```py
del text_encoders, tokenizers, vae
gc.collect()
torch.cuda.empty_cache()
```

Finally, the [training loop](https://github.com/huggingface/diffusers/blob/aab6de22c33cc01fb7bc81c0807d6109e2c998c9/examples/text_to_image/train_text_to_image_sdxl.py#L943) takes care of the rest. If you chose to apply a timestep bias strategy, you'll see the timestep weights are calculated and used to sample the timesteps before noise is added to the model input:

```py
weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to(
    model_input.device
)
timesteps = torch.multinomial(weights, bsz, replacement=True).long()

noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps)
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

Let’s train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with `VAE_NAME` to avoid numerical instabilities.

> [!TIP]
> To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You’ll also need to add the `--validation_prompt` and `--validation_epochs` to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.

```bash
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
export DATASET_NAME="lambdalabs/naruto-blip-captions"

accelerate launch train_text_to_image_sdxl.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_vae_model_name_or_path=$VAE_NAME \
  --dataset_name=$DATASET_NAME \
  --enable_xformers_memory_efficient_attention \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --proportion_empty_prompts=0.2 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=10000 \
  --use_8bit_adam \
  --learning_rate=1e-06 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --mixed_precision="fp16" \
  --report_to="wandb" \
  --validation_prompt="a cute Sundar Pichai creature" \
  --validation_epochs 5 \
  --checkpointing_steps=5000 \
  --output_dir="sdxl-naruto-model" \
  --push_to_hub
```

After you've finished training, you can use your newly trained SDXL model for inference!

<hfoptions id="inference">
<hfoption id="PyTorch">

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda")

prompt = "A naruto with green eyes and red legs."
image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("naruto.png")
```

</hfoption>
<hfoption id="PyTorch XLA">

[PyTorch XLA](https://pytorch.org/xla) allows you to run PyTorch on XLA devices such as TPUs, which can be faster. The initial warmup step takes longer because the model needs to be compiled and optimized. However, subsequent calls to the pipeline on an input **with the same length** as the original prompt are much faster because it can reuse the optimized graph.

```py
from time import time

import torch
import torch_xla.core.xla_model as xm
from diffusers import DiffusionPipeline

device = xm.xla_device()
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to(device)

prompt = "A naruto with green eyes and red legs."
inference_steps = 30  # example value

start = time()
image = pipeline(prompt, num_inference_steps=inference_steps).images[0]
print(f'Compilation time is {time()-start} sec')
image.save("naruto.png")

start = time()
image = pipeline(prompt, num_inference_steps=inference_steps).images[0]
print(f'Inference time is {time()-start} sec after compilation')
```

</hfoption>
</hfoptions>

## Next steps

Congratulations on training an SDXL model! To learn more about how to use your new model, the following guides may be helpful:

- Read the [Stable Diffusion XL](../using-diffusers/sdxl) guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use its refiner model, and the different types of micro-conditionings.
- Check out the [DreamBooth](dreambooth) and [LoRA](lora) training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined!

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/sdxl.md" />

### ControlNet
https://huggingface.co/docs/diffusers/main/training/controlnet.md

# ControlNet

[ControlNet](https://hf.co/papers/2302.05543) models are adapters trained on top of another pretrained model. They allow for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more.

If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing`, `gradient_accumulation_steps`, and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers).

This guide will explore the [train_controlnet.py](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet.py) training script to help you become familiar with it, and how you can adapt it for your own use-case.

Before running the script, make sure you install the library from source:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/controlnet
pip install -r requirements.txt
```

> [!TIP]
> 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.

Initialize an 🤗 Accelerate environment:

```bash
accelerate config
```

To set up a default 🤗 Accelerate environment without choosing any configurations:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, like a notebook, you can use:

```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.

> [!TIP]
> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet.py) and let us know if you have any questions or concerns.

## Script parameters

The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L231) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.

For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:

```bash
accelerate launch train_controlnet.py \
  --mixed_precision="fp16"
```

Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the relevant parameters for ControlNet (a combined example follows the list):

- `--max_train_samples`: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you'll need to include this parameter and the `--streaming` parameter in your training command
- `--gradient_accumulation_steps`: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle
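For example, to stream a large dataset while capping the number of samples (the values are illustrative):

```bash
accelerate launch train_controlnet.py \
  --streaming \
  --max_train_samples=50000 \
  --gradient_accumulation_steps=4
```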

### Min-SNR weighting

The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, and Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch.

Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:

```bash
accelerate launch train_controlnet.py \
  --snr_gamma=5.0
```

## Training script

As with the script parameters, a general walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide, so this guide focuses on the parts of the script that are relevant to ControlNet.

The training script has a [`make_train_dataset`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L582) function for preprocessing the dataset with image transforms and caption tokenization. You'll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image.

> [!TIP]
> If you're streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you're encouraged to explore other dataset formats like [WebDataset](https://webdataset.github.io/webdataset/), [TorchData](https://github.com/pytorch/data), and [TensorFlow Datasets](https://www.tensorflow.org/datasets/tfless_tfds).

```py
conditioning_image_transforms = transforms.Compose(
    [
        transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(args.resolution),
        transforms.ToTensor(),
    ]
)
```

Within the [`main()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L713) function, you'll find the code for loading the tokenizer, text encoder, scheduler and models. This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet:

```py
if args.controlnet_model_name_or_path:
    logger.info("Loading existing controlnet weights")
    controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path)
else:
    logger.info("Initializing controlnet weights from unet")
    controlnet = ControlNetModel.from_unet(unet)
```

The [optimizer](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L871) is set up to update the ControlNet parameters:

```py
params_to_optimize = controlnet.parameters()
optimizer = optimizer_class(
    params_to_optimize,
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Finally, in the [training loop](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/controlnet/train_controlnet.py#L943), the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model:

```py
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype)

down_block_res_samples, mid_block_res_sample = controlnet(
    noisy_latents,
    timesteps,
    encoder_hidden_states=encoder_hidden_states,
    controlnet_cond=controlnet_image,
    return_dict=False,
)
```

If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process.

## Launch the script

Now you're ready to launch the training script! 🚀

This guide uses the [fusing/fill50k](https://huggingface.co/datasets/fusing/fill50k) dataset, but remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).

Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model and `OUTPUT_DIR` to where you want to save the model.

Download the following images to condition your training with:

```bash
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png
```

One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.
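For example, a multi-GPU run only needs the extra flag on `accelerate launch`:

```bash
accelerate launch --multi_gpu train_controlnet.py \
  --mixed_precision="fp16"
```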

<hfoptions id="gpu-select">
<hfoption id="16GB">

On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes:

```bash
pip install bitsandbytes
```

Then, add the following parameter to your training command:

```bash
accelerate launch train_controlnet.py \
  --gradient_checkpointing \
  --use_8bit_adam \
```

</hfoption>
<hfoption id="12GB">

On a 12GB GPU, you'll need the bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and to set the gradients to `None` instead of zero to reduce your memory usage.

```bash
accelerate launch train_controlnet.py \
  --use_8bit_adam \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention \
  --set_grads_to_none \
```

</hfoption>
<hfoption id="8GB">

On an 8GB GPU, you'll need to use [DeepSpeed](https://www.deepspeed.ai/) to offload some of the tensors from the vRAM to either the CPU or NVMe to allow training with less GPU memory.

Run the following command to configure your 🤗 Accelerate environment:

```bash
accelerate config
```

During configuration, confirm that you want to use DeepSpeed stage 2. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the [DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options. Your configuration file should look something like:

```bash
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 4
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
```

You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch.
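As a rough sketch (not part of the training script), the optimizer swap could mirror the optimizer setup shown earlier, falling back to the standard optimizer when DeepSpeed isn't installed:

```py
import torch

try:
    # DeepSpeed's CPU-offloaded Adam; requires a CUDA toolchain matching the one PyTorch was built with
    from deepspeed.ops.adam import DeepSpeedCPUAdam

    optimizer_class = DeepSpeedCPUAdam
except ImportError:
    optimizer_class = torch.optim.AdamW

optimizer = optimizer_class(controlnet.parameters(), lr=args.learning_rate)
```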

bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment.

That's it! You don't need to add any additional parameters to your training command.

</hfoption>
</hfoptions>

```bash
export MODEL_DIR="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path/to/save/model"

accelerate launch train_controlnet.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --dataset_name=fusing/fill50k \
 --resolution=512 \
 --learning_rate=1e-5 \
 --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4 \
 --push_to_hub
```

Once training is complete, you can use your newly trained model for inference!

```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
import torch

controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16)
pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

control_image = load_image("./conditioning_image_1.png")
prompt = "pale golden rod circle with old lace background"

generator = torch.manual_seed(0)
image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0]
image.save("./output.png")
```

## Stable Diffusion XL

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text encoder to its architecture. Use the [`train_controlnet_sdxl.py`](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_sdxl.py) script to train a ControlNet adapter for the SDXL model.

The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.

## Next steps

Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful:

- Learn how to [use a ControlNet](../using-diffusers/controlnet) for inference on a variety of tasks.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/controlnet.md" />

### Adapt a model to a new task
https://huggingface.co/docs/diffusers/main/training/adapt_a_model.md

# Adapt a model to a new task

Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task.

This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel).

## Configure UNet2DConditionModel parameters

A [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, load a pretrained text-to-image model like [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) and take a look at the number of `in_channels`:

```py
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True)
pipeline.unet.config["in_channels"]
4
```

Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting):

```py
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True)
pipeline.unet.config["in_channels"]
9
```

To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9.

Initialize a [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now.

```py
from diffusers import AutoModel

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
unet = AutoModel.from_pretrained(
    model_id,
    subfolder="unet",
    in_channels=9,
    low_cpu_mem_usage=False,
    ignore_mismatched_sizes=True,
    use_safetensors=True,
)
```

The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (`conv_in.weight`) of the `unet` are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise.
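A quick, illustrative sanity check of the adapted UNet (the printed shape assumes the SD v1.5 UNet, whose first block has 320 output channels):

```py
print(unet.config["in_channels"])  # 9
print(unet.conv_in.weight.shape)   # torch.Size([320, 9, 3, 3]) - randomly initialized input conv
```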


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/adapt_a_model.md" />

### Hybrid Inference
https://huggingface.co/docs/diffusers/main/hybrid_inference/overview.md

# Hybrid Inference

**Empowering local AI builders with Hybrid Inference**


> [!TIP]
> Hybrid Inference is an [experimental feature](https://huggingface.co/blog/remote_vae).
> Feedback can be provided [here](https://github.com/huggingface/diffusers/issues/new?template=remote-vae-pilot-feedback.yml).



## Why use Hybrid Inference?

Hybrid Inference offers a fast and simple way to offload local generation requirements.

- 🚀 **Reduced Requirements:** Access powerful models without expensive hardware.
- 💎 **Without Compromise:** Achieve the highest quality without sacrificing performance.
- 💰 **Cost Effective:** It's free! 🤑
- 🎯 **Diverse Use Cases:** Fully compatible with Diffusers 🧨 and the wider community.
- 🔧 **Developer-Friendly:** Simple requests, fast responses.

---

## Available Models

* **VAE Decode 🖼️:** Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
* **VAE Encode 🔢:** Efficiently encode images into latent representations for generation and training.
* **Text Encoders 📃 (coming soon):** Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.

---

## Integrations

* **[SD.Next](https://github.com/vladmandic/sdnext):** All-in-one UI with built-in support for Hybrid Inference.
* **[ComfyUI-HFRemoteVae](https://github.com/kijai/ComfyUI-HFRemoteVae):** ComfyUI node for Hybrid Inference.

## Changelog

- March 10 2025: Added VAE encode
- March 2 2025: Initial release with VAE decoding

## Contents

The documentation is organized into three sections:

* **VAE Decode** Learn the basics of how to use VAE Decode with Hybrid Inference.
* **VAE Encode** Learn the basics of how to use VAE Encode with Hybrid Inference.
* **API Reference** Dive into task-specific settings and parameters.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/hybrid_inference/overview.md" />

### Getting Started: VAE Encode with Hybrid Inference
https://huggingface.co/docs/diffusers/main/hybrid_inference/vae_encode.md

# Getting Started: VAE Encode with Hybrid Inference

VAE encode is used for training, image-to-image, and image-to-video - turning images or videos into latent representations.

## Memory

These tables demonstrate the VRAM requirements for VAE encode with SD v1 and SD XL on different GPUs.

For the majority of these GPUs, the memory usage means that other models (text encoders, UNet/Transformer) must be offloaded, or that tiled encoding has to be used, which increases the time taken and can impact quality.
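For reference, if you encode locally instead, tiled VAE processing can be enabled on the model to stay within these limits at the cost of extra time; a minimal sketch with a standard pipeline:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Process the VAE in tiles to reduce peak VRAM (slower and may slightly affect quality)
pipe.vae.enable_tiling()
```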

<details><summary>SD v1.5</summary>

| GPU                           | Resolution   |   Time (seconds) |   Memory (%) |   Tiled Time (secs) |   Tiled Memory (%) |
|:------------------------------|:-------------|-----------------:|-------------:|--------------------:|-------------------:|
| NVIDIA GeForce RTX 4090       | 512x512      |            0.015 |      3.51901 |               0.015 |            3.51901 |
| NVIDIA GeForce RTX 4090       | 256x256      |            0.004 |      1.3154  |               0.005 |            1.3154  |
| NVIDIA GeForce RTX 4090       | 2048x2048    |            0.402 |     47.1852  |               0.496 |            3.51901 |
| NVIDIA GeForce RTX 4090       | 1024x1024    |            0.078 |     12.2658  |               0.094 |            3.51901 |
| NVIDIA GeForce RTX 4080 SUPER | 512x512      |            0.023 |      5.30105 |               0.023 |            5.30105 |
| NVIDIA GeForce RTX 4080 SUPER | 256x256      |            0.006 |      1.98152 |               0.006 |            1.98152 |
| NVIDIA GeForce RTX 4080 SUPER | 2048x2048    |            0.574 |     71.08    |               0.656 |            5.30105 |
| NVIDIA GeForce RTX 4080 SUPER | 1024x1024    |            0.111 |     18.4772  |               0.14  |            5.30105 |
| NVIDIA GeForce RTX 3090       | 512x512      |            0.032 |      3.52782 |               0.032 |            3.52782 |
| NVIDIA GeForce RTX 3090       | 256x256      |            0.01  |      1.31869 |               0.009 |            1.31869 |
| NVIDIA GeForce RTX 3090       | 2048x2048    |            0.742 |     47.3033  |               0.954 |            3.52782 |
| NVIDIA GeForce RTX 3090       | 1024x1024    |            0.136 |     12.2965  |               0.207 |            3.52782 |
| NVIDIA GeForce RTX 3080       | 512x512      |            0.036 |      8.51761 |               0.036 |            8.51761 |
| NVIDIA GeForce RTX 3080       | 256x256      |            0.01  |      3.18387 |               0.01  |            3.18387 |
| NVIDIA GeForce RTX 3080       | 2048x2048    |            0.863 |     86.7424  |               1.191 |            8.51761 |
| NVIDIA GeForce RTX 3080       | 1024x1024    |            0.157 |     29.6888  |               0.227 |            8.51761 |
| NVIDIA GeForce RTX 3070       | 512x512      |            0.051 |     10.6941  |               0.051 |           10.6941  |
| NVIDIA GeForce RTX 3070       | 256x256      |            0.015 |      3.99743 |               0.015 |            3.99743 |
| NVIDIA GeForce RTX 3070       | 2048x2048    |            1.217 |     96.054   |               1.482 |           10.6941  |
| NVIDIA GeForce RTX 3070       | 1024x1024    |            0.223 |     37.2751  |               0.327 |           10.6941  |


</details>

<details><summary>SDXL</summary>

| GPU                           | Resolution   |   Time (seconds) |   Memory Consumed (%) |   Tiled Time (seconds) |   Tiled Memory (%) |
|:------------------------------|:-------------|-----------------:|----------------------:|-----------------------:|-------------------:|
| NVIDIA GeForce RTX 4090       | 512x512      |            0.029 |               4.95707 |                  0.029 |            4.95707 |
| NVIDIA GeForce RTX 4090       | 256x256      |            0.007 |               2.29666 |                  0.007 |            2.29666 |
| NVIDIA GeForce RTX 4090       | 2048x2048    |            0.873 |              66.3452  |                  0.863 |           15.5649  |
| NVIDIA GeForce RTX 4090       | 1024x1024    |            0.142 |              15.5479  |                  0.143 |           15.5479  |
| NVIDIA GeForce RTX 4080 SUPER | 512x512      |            0.044 |               7.46735 |                  0.044 |            7.46735 |
| NVIDIA GeForce RTX 4080 SUPER | 256x256      |            0.01  |               3.4597  |                  0.01  |            3.4597  |
| NVIDIA GeForce RTX 4080 SUPER | 2048x2048    |            1.317 |              87.1615  |                  1.291 |           23.447   |
| NVIDIA GeForce RTX 4080 SUPER | 1024x1024    |            0.213 |              23.4215  |                  0.214 |           23.4215  |
| NVIDIA GeForce RTX 3090       | 512x512      |            0.058 |               5.65638 |                  0.058 |            5.65638 |
| NVIDIA GeForce RTX 3090       | 256x256      |            0.016 |               2.45081 |                  0.016 |            2.45081 |
| NVIDIA GeForce RTX 3090       | 2048x2048    |            1.755 |              77.8239  |                  1.614 |           18.4193  |
| NVIDIA GeForce RTX 3090       | 1024x1024    |            0.265 |              18.4023  |                  0.265 |           18.4023  |
| NVIDIA GeForce RTX 3080       | 512x512      |            0.064 |              13.6568  |                  0.064 |           13.6568  |
| NVIDIA GeForce RTX 3080       | 256x256      |            0.018 |               5.91728 |                  0.018 |            5.91728 |
| NVIDIA GeForce RTX 3080       | 2048x2048    |          OOM     |             OOM       |                  1.866 |           44.4717  |
| NVIDIA GeForce RTX 3080       | 1024x1024    |            0.302 |              44.4308  |                  0.302 |           44.4308  |
| NVIDIA GeForce RTX 3070       | 512x512      |            0.093 |              17.1465  |                  0.093 |           17.1465  |
| NVIDIA GeForce RTX 3070       | 256x256      |            0.025 |               7.42931 |                  0.026 |            7.42931 |
| NVIDIA GeForce RTX 3070       | 2048x2048    |          OOM     |             OOM       |                  2.674 |           55.8355  |
| NVIDIA GeForce RTX 3070       | 1024x1024    |            0.443 |              55.7841  |                  0.443 |           55.7841  |

</details>

## Available VAEs

|   | **Endpoint** | **Model** |
|:-:|:-----------:|:--------:|
| **Stable Diffusion v1** | [https://qc6479g0aac6qwy9.us-east-1.aws.endpoints.huggingface.cloud](https://qc6479g0aac6qwy9.us-east-1.aws.endpoints.huggingface.cloud) | [`stabilityai/sd-vae-ft-mse`](https://hf.co/stabilityai/sd-vae-ft-mse) |
| **Stable Diffusion XL** | [https://xjqqhmyn62rog84g.us-east-1.aws.endpoints.huggingface.cloud](https://xjqqhmyn62rog84g.us-east-1.aws.endpoints.huggingface.cloud) | [`madebyollin/sdxl-vae-fp16-fix`](https://hf.co/madebyollin/sdxl-vae-fp16-fix) |
| **Flux** | [https://ptccx55jz97f9zgo.us-east-1.aws.endpoints.huggingface.cloud](https://ptccx55jz97f9zgo.us-east-1.aws.endpoints.huggingface.cloud) | [`black-forest-labs/FLUX.1-schnell`](https://hf.co/black-forest-labs/FLUX.1-schnell) |


> [!TIP]
> Model support can be requested [here](https://github.com/huggingface/diffusers/issues/new?template=remote-vae-pilot-feedback.yml).


## Code

> [!TIP]
> Install `diffusers` from `main` to run the code: `pip install git+https://github.com/huggingface/diffusers@main`


A helper method simplifies interacting with Hybrid Inference.

```python
from diffusers.utils.remote_utils import remote_encode
```

### Basic example

Let's encode an image, then decode it to demonstrate.

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"/>
</figure>

<details><summary>Code</summary>

```python
from diffusers.utils import load_image
from diffusers.utils.remote_utils import remote_decode, remote_encode

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg?download=true")

latent = remote_encode(
    endpoint="https://ptccx55jz97f9zgo.us-east-1.aws.endpoints.huggingface.cloud/",
    image=image,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)

decoded = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/decoded.png"/>
</figure>


### Generation

Now let's look at a generation example: we'll encode the initial image, generate new latents with the pipeline, and then decode them remotely as well.

<details><summary>Code</summary>

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image
from diffusers.utils.remote_utils import remote_decode, remote_encode

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=None,
).to("cuda")

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
)
init_image = init_image.resize((768, 512))

init_latent = remote_encode(
    endpoint="https://qc6479g0aac6qwy9.us-east-1.aws.endpoints.huggingface.cloud/",
    image=init_image,
    scaling_factor=0.18215,
)

prompt = "A fantasy landscape, trending on artstation"
latent = pipe(
    prompt=prompt,
    image=init_latent,
    strength=0.75,
    output_type="latent",
).images

image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    scaling_factor=0.18215,
)
image.save("fantasy_landscape.jpg")
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/fantasy_landscape.png"/>
</figure>

## Integrations

* **[SD.Next](https://github.com/vladmandic/sdnext):** All-in-one UI with built-in support for Hybrid Inference.
* **[ComfyUI-HFRemoteVae](https://github.com/kijai/ComfyUI-HFRemoteVae):** ComfyUI node for Hybrid Inference.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/hybrid_inference/vae_encode.md" />

### Getting Started: VAE Decode with Hybrid Inference
https://huggingface.co/docs/diffusers/main/hybrid_inference/vae_decode.md

# Getting Started: VAE Decode with Hybrid Inference

VAE decode is an essential component of diffusion models - turning latent representations into images or videos.

## Memory

These tables demonstrate the VRAM requirements for VAE decode with SD v1 and SD XL on different GPUs.

For the majority of these GPUs, the memory usage means that other models (text encoders, UNet/Transformer) must be offloaded, or that tiled decoding has to be used, which increases the time taken and can impact quality.

<details><summary>SD v1.5</summary>

| GPU | Resolution | Time (seconds) | Memory (%) | Tiled Time (secs) | Tiled Memory (%) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 512x512 | 0.031 | 5.60% | 0.031 (0%) | 5.60% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.148 | 20.00% | 0.301 (+103%) | 5.60% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.05 | 8.40% | 0.050 (0%) | 8.40% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.224 | 30.00% | 0.356 (+59%) | 8.40% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.066 | 11.30% | 0.066 (0%) | 11.30% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.284 | 40.50% | 0.454 (+60%) | 11.40% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.062 | 5.20% | 0.062 (0%) | 5.20% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.253 | 18.50% | 0.464 (+83%) | 5.20% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.07 | 12.80% | 0.070 (0%) | 12.80% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.286 | 45.30% | 0.466 (+63%) | 12.90% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.102 | 15.90% | 0.102 (0%) | 15.90% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.421 | 56.30% | 0.746 (+77%) | 16.00% |

</details>

<details><summary>SDXL</summary>

| GPU | Resolution | Time (seconds) | Memory Consumed (%) | Tiled Time (seconds) | Tiled Memory (%) |
| --- | --- | --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 512x512 | 0.057 | 10.00% | 0.057 (0%) | 10.00% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.256 | 35.50% | 0.257 (+0.4%) | 35.50% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.092 | 15.00% | 0.092 (0%) | 15.00% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.406 | 53.30% | 0.406 (0%) | 53.30% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.121 | 20.20% | 0.120 (-0.8%) | 20.20% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.519 | 72.00% | 0.519 (0%) | 72.00% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.107 | 10.50% | 0.107 (0%) | 10.50% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.459 | 38.00% | 0.460 (+0.2%) | 38.00% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.121 | 25.60% | 0.121 (0%) | 25.60% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.524 | 93.00% | 0.524 (0%) | 93.00% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.183 | 31.80% | 0.183 (0%) | 31.80% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.794 | 96.40% | 0.794 (0%) | 96.40% |

</details>

## Available VAEs

|   | **Endpoint** | **Model** |
|:-:|:-----------:|:--------:|
| **Stable Diffusion v1** | [https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud](https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud) | [`stabilityai/sd-vae-ft-mse`](https://hf.co/stabilityai/sd-vae-ft-mse) |
| **Stable Diffusion XL** | [https://x2dmsqunjd6k9prw.us-east-1.aws.endpoints.huggingface.cloud](https://x2dmsqunjd6k9prw.us-east-1.aws.endpoints.huggingface.cloud) | [`madebyollin/sdxl-vae-fp16-fix`](https://hf.co/madebyollin/sdxl-vae-fp16-fix) |
| **Flux** | [https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud](https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud) | [`black-forest-labs/FLUX.1-schnell`](https://hf.co/black-forest-labs/FLUX.1-schnell) |
| **HunyuanVideo** | [https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud](https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud) | [`hunyuanvideo-community/HunyuanVideo`](https://hf.co/hunyuanvideo-community/HunyuanVideo) |


> [!TIP]
> Model support can be requested [here](https://github.com/huggingface/diffusers/issues/new?template=remote-vae-pilot-feedback.yml).


## Code

> [!TIP]
> Install `diffusers` from `main` to run the code: `pip install git+https://github.com/huggingface/diffusers@main`


A helper method simplifies interacting with Hybrid Inference.

```python
from diffusers.utils.remote_utils import remote_decode
```

### Basic example

Here, we show how to use the remote VAE on random tensors.

<details><summary>Code</summary>

```python
import torch

image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4, 64, 64], dtype=torch.float16),
    scaling_factor=0.18215,
)
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/output.png"/>
</figure>

Usage for Flux is slightly different. Flux latents are packed so we need to send the `height` and `width`.

<details><summary>Code</summary>

```python
import torch

image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4096, 64], dtype=torch.float16),
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/flux_random_latent.png"/>
</figure>

Finally, an example for HunyuanVideo.

<details><summary>Code</summary>

```python
import torch

video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 16, 3, 40, 64], dtype=torch.float16),
    output_type="mp4",
)
with open("video.mp4", "wb") as f:
    f.write(video)
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
   <video
      alt="queue.mp4"
      autoplay loop autobuffer muted playsinline
    >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/video_1.mp4" type="video/mp4">
  </video>
</figure>


### Generation

Now let's use the remote VAE with an actual pipeline to generate a real image instead of random noise. The example below shows how to do it with SD v1.5.

<details><summary>Code</summary>

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

latent = pipe(
    prompt=prompt,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    scaling_factor=0.18215,
)
image.save("test.jpg")
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/test.jpg"/>
</figure>

Here’s another example with Flux.

<details><summary>Code</summary>

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

latent = pipe(
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=4,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
image.save("test.jpg")
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/test_1.jpg"/>
</figure>

Here’s an example with HunyuanVideo.

<details><summary>Code</summary>

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, vae=None, torch_dtype=torch.float16
).to("cuda")

latent = pipe(
    prompt="A cat walks on the grass, realistic",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
    output_type="latent",
).frames

video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    output_type="mp4",
)

if isinstance(video, bytes):
    with open("video.mp4", "wb") as f:
        f.write(video)
```

</details>

<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
   <video
      alt="queue.mp4"
      autoplay loop autobuffer muted playsinline
    >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/video.mp4" type="video/mp4">
  </video>
</figure>


### Queueing

One of the great benefits of using a remote VAE is that we can queue multiple generation requests. While the current latent is being processed for decoding, we can already queue another one. This helps improve concurrency. 


<details><summary>Code</summary>

```python
import queue
import threading
from IPython.display import display
from diffusers import StableDiffusionPipeline

def decode_worker(q: queue.Queue):
    while True:
        item = q.get()
        if item is None:
            break
        image = remote_decode(
            endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
            tensor=item,
            scaling_factor=0.18215,
        )
        display(image)
        q.task_done()

q = queue.Queue()
thread = threading.Thread(target=decode_worker, args=(q,), daemon=True)
thread.start()

def decode(latent: torch.Tensor):
    q.put(latent)

prompts = [
    "Blueberry ice cream, in a stylish modern glass , ice cubes, nuts, mint leaves, splashing milk cream, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious",
    "Lemonade in a glass, mint leaves, in an aqua and white background, flowers, ice cubes, halo, fluid motion, dynamic movement, soft lighting, digital painting, rule of thirds composition, Art by Greg rutkowski, Coby whitmore",
    "Comic book art, beautiful, vintage, pastel neon colors, extremely detailed pupils, delicate features, light on face, slight smile, Artgerm, Mary Blair, Edmund Dulac, long dark locks, bangs, glowing, fashionable style, fairytale ambience, hot pink.",
    "Masterpiece, vanilla cone ice cream garnished with chocolate syrup, crushed nuts, choco flakes, in a brown background, gold, cinematic lighting, Art by WLOP",
    "A bowl of milk, falling cornflakes, berries, blueberries, in a white background, soft lighting, intricate details, rule of thirds, octane render, volumetric lighting",
    "Cold Coffee with cream, crushed almonds, in a glass, choco flakes, ice cubes, wet, in a wooden background, cinematic lighting, hyper realistic painting, art by Carne Griffiths, octane render, volumetric lighting, fluid motion, dynamic movement, muted colors,",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
    vae=None,
).to("cuda")

pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

_ = pipe(
    prompt=prompts[0],
    output_type="latent",
)

for prompt in prompts:
    latent = pipe(
        prompt=prompt,
        output_type="latent",
    ).images
    decode(latent)

q.put(None)
thread.join()
```

</details>


<figure class="image flex flex-col items-center justify-center text-center m-0 w-full">
   <video
      alt="queue.mp4"
      autoplay loop autobuffer muted playsinline
    >
    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/remote_vae/queue.mp4" type="video/mp4">
  </video>
</figure>

## Integrations

* **[SD.Next](https://github.com/vladmandic/sdnext):** All-in-one UI with built-in support for Hybrid Inference.
* **[ComfyUI-HFRemoteVae](https://github.com/kijai/ComfyUI-HFRemoteVae):** ComfyUI node for Hybrid Inference.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/hybrid_inference/vae_decode.md" />

### Hybrid Inference API Reference
https://huggingface.co/docs/diffusers/main/hybrid_inference/api_reference.md

# Hybrid Inference API Reference

## Remote Decode[[diffusers.utils.remote_decode]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.remote_decode</name><anchor>diffusers.utils.remote_decode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/remote_utils.py#L188</source><parameters>[{"name": "endpoint", "val": ": str"}, {"name": "tensor", "val": ": torch.Tensor"}, {"name": "processor", "val": ": typing.Union[ForwardRef('VaeImageProcessor'), ForwardRef('VideoProcessor'), NoneType] = None"}, {"name": "do_scaling", "val": ": bool = True"}, {"name": "scaling_factor", "val": ": typing.Optional[float] = None"}, {"name": "shift_factor", "val": ": typing.Optional[float] = None"}, {"name": "output_type", "val": ": typing.Literal['mp4', 'pil', 'pt'] = 'pil'"}, {"name": "return_type", "val": ": typing.Literal['mp4', 'pil', 'pt'] = 'pil'"}, {"name": "image_format", "val": ": typing.Literal['png', 'jpg'] = 'jpg'"}, {"name": "partial_postprocess", "val": ": bool = False"}, {"name": "input_tensor_type", "val": ": typing.Literal['binary'] = 'binary'"}, {"name": "output_tensor_type", "val": ": typing.Literal['binary'] = 'binary'"}, {"name": "height", "val": ": typing.Optional[int] = None"}, {"name": "width", "val": ": typing.Optional[int] = None"}]</parameters><paramsdesc>- **endpoint** (`str`) --
  Endpoint for Remote Decode.
- **tensor** (`torch.Tensor`) --
  Tensor to be decoded.
- **processor** (`VaeImageProcessor` or `VideoProcessor`, *optional*) --
  Used with `return_type="pt"`, and `return_type="pil"` for Video models.
- **do_scaling** (`bool`, default `True`, *optional*) --
  **DEPRECATED**. Pass `scaling_factor`/`shift_factor` instead. Until the option is removed, you can still set
  `do_scaling=None`/`do_scaling=False` to disable scaling. When `True`, scaling (e.g. `latents
  / self.vae.config.scaling_factor`) is applied remotely. If `False`, input must be passed with scaling
  applied.
- **scaling_factor** (`float`, *optional*) --
  Scaling is applied when passed e.g. [`latents /
  self.vae.config.scaling_factor`](https://github.com/huggingface/diffusers/blob/7007febae5cff000d4df9059d9cf35133e8b2ca9/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L1083C37-L1083C77).
  - SD v1: 0.18215
  - SD XL: 0.13025
  - Flux: 0.3611
  If `None`, input must be passed with scaling applied.
- **shift_factor** (`float`, *optional*) --
  Shift is applied when passed e.g. `latents + self.vae.config.shift_factor`.
  - Flux: 0.1159
  If `None`, input must be passed with scaling applied.
- **output_type** (`"mp4"` or `"pil"` or `"pt"`, default `"pil"`) --
  **Endpoint** output type. Subject to change. Report feedback on preferred type.

  `"mp4"`: Supported by video models. Endpoint returns `bytes` of video. `"pil"`: Supported by image and video
  models.
  Image models: Endpoint returns `bytes` of an image in `image_format`. Video models: Endpoint returns
  `torch.Tensor` with partial `postprocessing` applied.
  Requires `processor` as a flag (any `None` value will work).
  `"pt"`: Supported by image and video models. Endpoint returns `torch.Tensor`.
  With `partial_postprocess=True` the tensor is a postprocessed `uint8` image tensor.

  Recommendations:
  `"pt"` with `partial_postprocess=True` is the smallest transfer for full quality. `"pt"` with
  `partial_postprocess=False` is the most compatible with third party code. `"pil"` with
  `image_format="jpg"` is the smallest transfer overall.

- **return_type** (`"mp4"` or `"pil"` or `"pt"`, default `"pil"`) --
  **Function** return type.

  `"mp4"`: Function returns `bytes` of video. `"pil"`: Function returns `PIL.Image.Image`.
  With `output_type="pil"` no further processing is applied. With `output_type="pt"` a `PIL.Image.Image` is
  created.
  With `partial_postprocess=False` a `processor` is required; with `partial_postprocess=True` a `processor` is
  **not** required.
  `"pt"`: Function returns `torch.Tensor`.
  `processor` is **not** required. With `partial_postprocess=False` the tensor is `float16` or `bfloat16`, without
  denormalization; with `partial_postprocess=True` the tensor is `uint8`, denormalized.

- **image_format** (`"png"` or `"jpg"`, default `"jpg"`) --
  Used with `output_type="pil"`. Endpoint returns `jpg` or `png`.

- **partial_postprocess** (`bool`, default `False`) --
  Used with `output_type="pt"`. With `partial_postprocess=False` the tensor is `float16` or `bfloat16`, without
  denormalization; with `partial_postprocess=True` the tensor is `uint8`, denormalized.

- **input_tensor_type** (`"binary"`, default `"binary"`) --
  Tensor transfer type.

- **output_tensor_type** (`"binary"`, default `"binary"`) --
  Tensor transfer type.

- **height** (`int`, *optional*) --
  Required for `"packed"` latents.

- **width** (`int`, *optional*) --
  Required for `"packed"` latents.</paramsdesc><paramgroups>0</paramgroups><retdesc>output (`Image.Image` or `List[Image.Image]` or `bytes` or `torch.Tensor`).</retdesc></docstring>

Hugging Face Hybrid Inference that allows running VAE decode remotely.






</div>
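
To make these recommendations concrete, here is a minimal sketch (reusing the SD v1.x decode endpoint from the examples earlier on this page) that requests a `"pt"` tensor with `partial_postprocess=True` from the endpoint and converts it to a `PIL.Image.Image` locally:

```python
import torch
from diffusers.utils import remote_decode

# Sketch: output_type="pt" + partial_postprocess=True asks the endpoint for a
# denormalized uint8 tensor (the smallest full-quality transfer), and
# return_type="pil" converts it into a PIL.Image.Image without needing a processor.
image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4, 64, 64], dtype=torch.float16),
    scaling_factor=0.18215,
    output_type="pt",
    partial_postprocess=True,
    return_type="pil",
)
image.save("decoded.jpg")
```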

## Remote Encode[[diffusers.utils.remote_utils.remote_encode]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">


<docstring><name>diffusers.utils.remote_utils.remote_encode</name><anchor>diffusers.utils.remote_utils.remote_encode</anchor><source>https://github.com/huggingface/diffusers/blob/main/src/diffusers/utils/remote_utils.py#L380</source><parameters>[{"name": "endpoint", "val": ": str"}, {"name": "image", "val": ": typing.Union[ForwardRef('torch.Tensor'), PIL.Image.Image]"}, {"name": "scaling_factor", "val": ": typing.Optional[float] = None"}, {"name": "shift_factor", "val": ": typing.Optional[float] = None"}]</parameters><paramsdesc>- **endpoint** (`str`) --
  Endpoint for Remote Encode.
- **image** (`torch.Tensor` or `PIL.Image.Image`) --
  Image to be encoded.
- **scaling_factor** (`float`, *optional*) --
  Scaling is applied when passed e.g. `latents * self.vae.config.scaling_factor`.
  - SD v1: 0.18215
  - SD XL: 0.13025
  - Flux: 0.3611
  If `None`, input must be passed with scaling applied.
- **shift_factor** (`float`, *optional*) --
  Shift is applied when passed e.g. `latents - self.vae.config.shift_factor`.
  - Flux: 0.1159
  If `None`, input must be passed with scaling applied.</paramsdesc><paramgroups>0</paramgroups><retdesc>output (`torch.Tensor`).</retdesc></docstring>

Hugging Face Hybrid Inference that allows running VAE encode remotely.






</div>
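
For encoding, a minimal sketch is shown below. The endpoint URL and the input image path are placeholders, and the scaling factor matches the SD v1 value listed above.

```python
from PIL import Image

from diffusers.utils.remote_utils import remote_encode

# Placeholder input image and endpoint; substitute the encode endpoint you have deployed.
init_image = Image.open("input.png").convert("RGB")

latent = remote_encode(
    endpoint="https://<your-sd-v1-encode-endpoint>/",
    image=init_image,
    scaling_factor=0.18215,  # SD v1 scaling factor from the parameter list above
)
print(latent.shape)
```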

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/hybrid_inference/api_reference.md" />

### Getting started
https://huggingface.co/docs/diffusers/main/quantization/overview.md

# Getting started

Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
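
As a rough back-of-the-envelope illustration (the parameter count below is made up), the weight footprint scales directly with the number of bytes used per parameter:

```py
# Illustrative only: weight memory ~ number of parameters x bytes per parameter.
num_params = 12e9  # hypothetical 12B-parameter model

print(num_params * 4 / 1e9)    # float32 -> ~48 GB
print(num_params * 2 / 1e9)    # float16 -> ~24 GB (half the size)
print(num_params * 0.5 / 1e9)  # 4-bit   -> ~6 GB
```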

Diffusers supports multiple quantization backends to make large diffusion models like [Flux](../api/pipelines/flux) more accessible. This guide shows how to use the [PipelineQuantizationConfig](/docs/diffusers/main/en/api/quantization#diffusers.PipelineQuantizationConfig) class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.

## Pipeline-level quantization

There are two ways to use [PipelineQuantizationConfig](/docs/diffusers/main/en/api/quantization#diffusers.PipelineQuantizationConfig) depending on how much customization you want to apply to the quantization configuration. 

- for basic use cases, define the `quant_backend`, `quant_kwargs`, and `components_to_quantize` arguments
- for granular quantization control, define a `quant_mapping` that provides the quantization configuration for individual model components

### Basic quantization

Initialize [PipelineQuantizationConfig](/docs/diffusers/main/en/api/quantization#diffusers.PipelineQuantizationConfig) with the following parameters.

- `quant_backend` specifies which quantization backend to use. Currently supported backends include: `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
- `quant_kwargs` specifies the quantization arguments to use.

> [!TIP]
> These `quant_kwargs` arguments are different for each backend. Refer to the [Quantization API](../api/quantization) docs to view the arguments for each backend.

- `components_to_quantize` specifies which component(s) of the pipeline to quantize. Typically, you should quantize the most compute intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one such as [FluxPipeline](/docs/diffusers/main/en/api/pipelines/flux#diffusers.FluxPipeline). The example below quantizes the T5 text encoder in [FluxPipeline](/docs/diffusers/main/en/api/pipelines/flux#diffusers.FluxPipeline) while keeping the CLIP model intact.

   `components_to_quantize` accepts either a list for multiple models or a string for a single model.

The example below loads the bitsandbytes backend with the following arguments from [BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig), `load_in_4bit`, `bnb_4bit_quant_type`, and `bnb_4bit_compute_dtype`.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
```

Pass the `pipeline_quant_config` to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to quantize the pipeline.

```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```


### Advanced quantization

The `quant_mapping` argument provides more options for how to quantize each individual component in a pipeline, like combining different quantization backends.

Initialize [PipelineQuantizationConfig](/docs/diffusers/main/en/api/quantization#diffusers.PipelineQuantizationConfig) and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline such as the transformer and text encoder.

The example below uses two quantization backends, [QuantoConfig](/docs/diffusers/main/en/api/quantization#diffusers.QuantoConfig) and [transformers.BitsAndBytesConfig](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#transformers.BitsAndBytesConfig), for the transformer and text encoder.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers.quantization_config import QuantoConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": QuantoConfig(weights_dtype="int8"),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, compute_dtype=torch.bfloat16
        ),
    }
)
```

There is a separate bitsandbytes backend in [Transformers](https://huggingface.co/docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig). You need to import and use [transformers.BitsAndBytesConfig](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#transformers.BitsAndBytesConfig) for components that come from Transformers. For example, `text_encoder_2` in [FluxPipeline](/docs/diffusers/main/en/api/pipelines/flux#diffusers.FluxPipeline) is a [T5EncoderModel](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5EncoderModel) from Transformers so you need to use [transformers.BitsAndBytesConfig](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#transformers.BitsAndBytesConfig) instead of [diffusers.BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig).

> [!TIP]
> Use the [basic quantization](#basic-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, compute_dtype=torch.bfloat16
        ),
    }
)
```

Pass the `pipeline_quant_config` to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to quantize the pipeline.

```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```

## Resources

Check out the resources below to learn more about quantization.

- If you are new to quantization, we recommend checking out the following beginner-friendly courses in collaboration with DeepLearning.AI.

    - [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
    - [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/)

- Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) if you're interested in adding a new quantization method.

- The Transformers quantization [Overview](https://huggingface.co/docs/transformers/quantization/overview#when-to-use-what) provides an overview of the pros and cons of different quantization backends.

- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quantization/overview.md" />

### bitsandbytes
https://huggingface.co/docs/diffusers/main/quantization/bitsandbytes.md

# bitsandbytes

[bitsandbytes](https://huggingface.co/docs/bitsandbytes/index) is the easiest option for quantizing a model to 8 and 4-bit. 8-bit quantization multiplies outliers in fp16 with non-outliers in int8, converts the non-outlier values back to fp16, and then adds them together to return the weights in fp16. This reduces the degradative effect outlier values have on a model's performance.

4-bit quantization compresses a model even further, and it is commonly used with [QLoRA](https://hf.co/papers/2305.14314) to finetune quantized LLMs.

This guide demonstrates how quantization can enable running
[FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
on less than 16GB of VRAM and even on a free Google
Colab instance.

![comparison image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/quant-bnb/comparison.png)

To use bitsandbytes, make sure you have the following libraries installed:

```bash
pip install diffusers transformers accelerate bitsandbytes -U
```

Now you can quantize a model by passing a [BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig) to [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained). This works for any model in any modality, as long as it supports loading with [Accelerate](https://hf.co/docs/accelerate/index) and contains `torch.nn.Linear` layers.

<hfoptions id="bnb">
<hfoption id="8-bit">

Quantizing a model in 8-bit halves the memory usage:

bitsandbytes is supported in both Transformers and Diffusers, so you can quantize both the
[FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel) and [T5EncoderModel](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5EncoderModel).

For Ada and higher-series GPUs, we recommend changing `torch_dtype` to `torch.bfloat16`.

> [!TIP]
> The `CLIPTextModel` and [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL) aren't quantized because they're already small in size and because [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL) only has a few `torch.nn.Linear` layers.

```py
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
import torch
from diffusers import AutoModel
from transformers import T5EncoderModel

quant_config = TransformersBitsAndBytesConfig(load_in_8bit=True,)

text_encoder_2_8bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True,)

transformer_8bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
```

By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter.

```diff
transformer_8bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
+   torch_dtype=torch.float32,
)
```

Let's generate an image using our quantized models.

Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the
CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.

```py
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_8bit,
    text_encoder_2=text_encoder_2_8bit,
    torch_dtype=torch.float16,
    device_map="auto",
)

pipe_kwargs = {
    "prompt": "A cat holding a sign that says hello world",
    "height": 1024,
    "width": 1024,
    "guidance_scale": 3.5,
    "num_inference_steps": 50,
    "max_sequence_length": 512,
}

image = pipe(**pipe_kwargs, generator=torch.manual_seed(0),).images[0]
```

<div class="flex justify-center">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/quant-bnb/8bit.png"/>
</div>

When there is enough memory, you can also directly move the pipeline to the GPU with `.to("cuda")` and apply [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) to optimize GPU memory usage.

Once a model is quantized, you can push the model to the Hub with the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method. The quantization `config.json` file is pushed first, followed by the quantized model weights. You can also save the serialized 8-bit models locally with [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
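
For example, continuing with the 8-bit transformer from above (the repository id below is a placeholder):

```py
# Save the serialized 8-bit weights locally...
transformer_8bit.save_pretrained("flux-transformer-8bit")

# ...or push them to the Hub (requires being logged in; placeholder repo id).
transformer_8bit.push_to_hub("your-username/flux-transformer-8bit")
```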

</hfoption>
<hfoption id="4-bit">

Quantizing a model in 4-bit reduces your memory usage by 4x:

bitsandbytes is supported in both Transformers and Diffusers, so you can quantize both the
[FluxTransformer2DModel](/docs/diffusers/main/en/api/models/flux_transformer#diffusers.FluxTransformer2DModel) and [T5EncoderModel](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5EncoderModel).

For Ada and higher-series GPUs, we recommend changing `torch_dtype` to `torch.bfloat16`.

> [!TIP]
> The `CLIPTextModel` and [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL) aren't quantized because they're already small in size and because [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL) only has a few `torch.nn.Linear` layers.

```py
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
import torch
from diffusers import AutoModel
from transformers import T5EncoderModel

quant_config = TransformersBitsAndBytesConfig(load_in_4bit=True,)

text_encoder_2_4bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_4bit=True,)

transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
```

By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter.

```diff
transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
+   torch_dtype=torch.float32,
)
```

Let's generate an image using our quantized models.

Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.

```py
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_4bit,
    text_encoder_2=text_encoder_2_4bit,
    torch_dtype=torch.float16,
    device_map="auto",
)

pipe_kwargs = {
    "prompt": "A cat holding a sign that says hello world",
    "height": 1024,
    "width": 1024,
    "guidance_scale": 3.5,
    "num_inference_steps": 50,
    "max_sequence_length": 512,
}

image = pipe(**pipe_kwargs, generator=torch.manual_seed(0),).images[0]
```

<div class="flex justify-center">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/quant-bnb/4bit.png"/>
</div>

When there is enough memory, you can also directly move the pipeline to the GPU with `.to("cuda")` and apply [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) to optimize GPU memory usage.

Once a model is quantized, you can push the model to the Hub with the [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method. The quantization `config.json` file is pushed first, followed by the quantized model weights. You can also save the serialized 4-bit models locally with [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained).
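
For example, continuing with the 4-bit transformer from above (the repository id below is a placeholder):

```py
# Save the serialized 4-bit weights locally...
transformer_4bit.save_pretrained("flux-transformer-4bit")

# ...or push them to the Hub (requires being logged in; placeholder repo id).
transformer_4bit.push_to_hub("your-username/flux-transformer-4bit")
```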

</hfoption>
</hfoptions>

> [!WARNING]
> Training with 8-bit and 4-bit weights is only supported for training *extra* parameters.

Check your memory footprint with the `get_memory_footprint` method:

```py
print(model.get_memory_footprint())
```

Note that this only tells you the memory footprint of the model params and does _not_ estimate the inference memory requirements.

Quantized models can be loaded from the [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) method without needing to specify the `quantization_config` parameters:

```py
from diffusers import AutoModel

model_4bit = AutoModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg", subfolder="transformer"
)
```

## 8-bit (LLM.int8() algorithm)

> [!TIP]
> Learn more about the details of 8-bit quantization in this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration)!

This section explores some of the specific features of 8-bit models, such as outlier thresholds and skipping module conversion.

### Outlier threshold

An "outlier" is a hidden state value greater than a certain threshold, and these values are computed in fp16. While the values are usually normally distributed ([-3.5, 3.5]), this distribution can be very different for large models ([-60, 6] or [6, 60]). 8-bit quantization works well for values ~5, but beyond that, there is a significant performance penalty. A good default threshold value is 6, but a lower threshold may be needed for more unstable models (small models or finetuning).

To find the best threshold for your model, we recommend experimenting with the `llm_int8_threshold` parameter in [BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig):

```py
from diffusers import AutoModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True, llm_int8_threshold=10,
)

model_8bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quantization_config,
)
```

### Skip module conversion

For some models, you don't need to quantize every module to 8-bit which can actually cause instability. For example, for diffusion models like [Stable Diffusion 3](../api/pipelines/stable_diffusion/stable_diffusion_3), the `proj_out` module can be skipped using the `llm_int8_skip_modules` parameter in [BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig):

```py
from diffusers import SD3Transformer2DModel, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True, llm_int8_skip_modules=["proj_out"],
)

model_8bit = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quantization_config,
)
```


## 4-bit (QLoRA algorithm)

> [!TIP]
> Learn more about its details in this [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes).

This section explores some of the specific features of 4-bit models, such as changing the compute data type, using the Normal Float 4 (NF4) data type, and using nested quantization.


### Compute data type

To speedup computation, you can change the data type from float32 (the default value) to bf16 using the `bnb_4bit_compute_dtype` parameter in [BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig):

```py
import torch
from diffusers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```

### Normal Float 4 (NF4)

NF4 is a 4-bit data type from the [QLoRA](https://hf.co/papers/2305.14314) paper, adapted for weights initialized from a normal distribution. You should use NF4 for training 4-bit base models. This can be configured with the `bnb_4bit_quant_type` parameter in the [BitsAndBytesConfig](/docs/diffusers/main/en/api/quantization#diffusers.BitsAndBytesConfig):

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

from diffusers import AutoModel
from transformers import T5EncoderModel

quant_config = TransformersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

text_encoder_2_4bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
```

For inference, the `bnb_4bit_quant_type` does not have a huge impact on performance. However, to remain consistent with the model weights, you should use the same `bnb_4bit_compute_dtype` and `torch_dtype` values.
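
For example, a configuration that keeps the compute and load dtypes consistent might look like the sketch below (not a tuned recommendation):

```py
import torch
from diffusers import AutoModel
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig

# Keep bnb_4bit_compute_dtype and torch_dtype consistent with each other.
quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```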

### Nested quantization

Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. 

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

from diffusers import AutoModel
from transformers import T5EncoderModel

quant_config = TransformersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

text_encoder_2_4bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
```

## Dequantizing `bitsandbytes` models

Once quantized, you can dequantize a model to its original precision, but this might result in a small loss of quality. Make sure you have enough GPU RAM to fit the dequantized model. 

```python
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

from diffusers import AutoModel
from transformers import T5EncoderModel

quant_config = TransformersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

text_encoder_2_4bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

text_encoder_2_4bit.dequantize()
transformer_4bit.dequantize()
```

## torch.compile

Speed up inference with `torch.compile`. Make sure you have the latest `bitsandbytes` installed and we also recommend installing [PyTorch nightly](https://pytorch.org/get-started/locally/).

<hfoptions id="bnb">
<hfoption id="8-bit">
```py
torch._dynamo.config.capture_dynamic_output_shape_ops = True

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
transformer_4bit.compile(fullgraph=True)
```

</hfoption>
<hfoption id="4-bit">

```py
import torch
from diffusers import AutoModel
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig

quant_config = DiffusersBitsAndBytesConfig(load_in_4bit=True)
transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
transformer_4bit.compile(fullgraph=True)
```
</hfoption>
</hfoptions>

On an RTX 4090 with compilation, 4-bit Flux generation completed in 25.809 seconds versus 32.570 seconds without.

Check out the [benchmarking script](https://gist.github.com/sayakpaul/0db9d8eeeb3d2a0e5ed7cf0d9ca19b7d) for more details.

## Resources

* [End-to-end notebook showing Flux.1 Dev inference in a free-tier Colab](https://gist.github.com/sayakpaul/c76bd845b48759e11687ac550b99d8b4)
* [Training](https://github.com/huggingface/diffusers/blob/8c661ea586bf11cb2440da740dd3c4cf84679b85/examples/dreambooth/README_hidream.md#using-quantization)

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quantization/bitsandbytes.md" />

### Quanto
https://huggingface.co/docs/diffusers/main/quantization/quanto.md

# Quanto

[Quanto](https://github.com/huggingface/optimum-quanto) is a PyTorch quantization backend for [Optimum](https://huggingface.co/docs/optimum/en/index). It has been designed with versatility and simplicity in mind:

- All features are available in eager mode (works with non-traceable models)
- Supports quantization aware training
- Quantized models are compatible with `torch.compile`
- Quantized models are device agnostic (e.g. CUDA, XPU, MPS, CPU)

In order to use the Quanto backend, you will first need to install `optimum-quanto>=0.2.6` and `accelerate`:

```shell
pip install optimum-quanto accelerate
```

Now you can quantize a model by passing the `QuantoConfig` object to the `from_pretrained()` method. Although the Quanto library does allow quantizing `nn.Conv2d` and `nn.LayerNorm` modules, currently, Diffusers only supports quantizing the weights in the `nn.Linear` layers of a model. The following snippet demonstrates how to apply `float8` quantization with Quanto.   

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="float8")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")
```

## Skipping Quantization on specific modules

It is possible to skip applying quantization on certain modules using the `modules_to_not_convert` argument in the `QuantoConfig`. Please ensure that the modules passed in to this argument match the keys of the modules in the `state_dict`.

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="float8", modules_to_not_convert=["proj_out"])
transformer = FluxTransformer2DModel.from_pretrained(
      model_id,
      subfolder="transformer",
      quantization_config=quantization_config,
      torch_dtype=torch.bfloat16,
)
```

## Using `from_single_file` with the Quanto Backend

`QuantoConfig` is compatible with `~FromOriginalModelMixin.from_single_file`. 

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

ckpt_path = "https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/flux1-dev.safetensors"
quantization_config = QuantoConfig(weights_dtype="float8")
transformer = FluxTransformer2DModel.from_single_file(ckpt_path, quantization_config=quantization_config, torch_dtype=torch.bfloat16)
```

## Saving Quantized models

Diffusers supports serializing Quanto models using the `~ModelMixin.save_pretrained` method.

The serialization and loading requirements are different for models quantized directly with the Quanto library and models quantized
with Diffusers using Quanto as the backend. It is currently not possible to load models quantized directly with Quanto into Diffusers using `~ModelMixin.from_pretrained`.

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="float8")
transformer = FluxTransformer2DModel.from_pretrained(
      model_id,
      subfolder="transformer",
      quantization_config=quantization_config,
      torch_dtype=torch.bfloat16,
)
# save quantized model to reuse
transformer.save_pretrained("<your quantized model save path>")

# you can reload your quantized model with
model = FluxTransformer2DModel.from_pretrained("<your quantized model save path>")
```

## Using `torch.compile` with Quanto

Currently the Quanto backend supports `torch.compile` for the following quantization types:

- `int8` weights 

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="int8")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
transformer = torch.compile(transformer, mode="max-autotune", fullgraph=True)

pipe = FluxPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
image = pipe("A cat holding a sign that says hello").images[0]
image.save("flux-quanto-compile.png")
```

## Supported Quantization Types

### Weights

- float8
- int8
- int4
- int2
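
Any of these can be selected through `weights_dtype`; for example, an `int4` configuration (a sketch along the same lines as the examples above) looks like:

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

quantization_config = QuantoConfig(weights_dtype="int4")
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```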




<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quantization/quanto.md" />

### NVIDIA ModelOpt
https://huggingface.co/docs/diffusers/main/quantization/modelopt.md

# NVIDIA ModelOpt

[NVIDIA-ModelOpt](https://github.com/NVIDIA/TensorRT-Model-Optimizer) is a unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed.

Before you begin, make sure you have nvidia_modelopt installed.

```bash
pip install -U "nvidia_modelopt[hf]"
```

Quantize a model by passing `NVIDIAModelOptConfig` to [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) (you can also load pre-quantized models). This works for any model in any modality, as long as it supports loading with [Accelerate](https://hf.co/docs/accelerate/index) and contains `torch.nn.Linear` layers.

The example below only quantizes the weights to FP8.

```python
import torch
from diffusers import AutoModel, SanaPipeline, NVIDIAModelOptConfig

model_id = "Efficient-Large-Model/Sana_600M_1024px_diffusers"
dtype = torch.bfloat16

quantization_config = NVIDIAModelOptConfig(quant_type="FP8", quant_method="modelopt")
transformer = AutoModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=dtype,
)
pipe = SanaPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=dtype,
)
pipe.to("cuda")

print(f"Pipeline memory usage: {torch.cuda.max_memory_reserved() / 1024**3:.3f} GB")

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")
```

> **Note:**
>
> The quantization methods in NVIDIA-ModelOpt are designed to reduce the memory footprint of model weights using various QAT (Quantization-Aware Training) and PTQ (Post-Training Quantization) techniques while maintaining model performance. However, the actual performance gain during inference depends on the deployment framework (e.g., TRT-LLM, TensorRT) and the specific hardware configuration.  
> 
> More details can be found [here](https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/examples).

## NVIDIAModelOptConfig

The `NVIDIAModelOptConfig` class accepts the following parameters:
- `quant_type`: A string value mentioning one of the quantization types below.
- `modules_to_not_convert`: A list of module full/partial module names for which quantization should not be performed. For example, to not perform any quantization of the [SD3Transformer2DModel](/docs/diffusers/main/en/api/models/sd3_transformer2d#diffusers.SD3Transformer2DModel)'s pos_embed projection blocks, one would specify: `modules_to_not_convert=["pos_embed.proj.weight"]`.
- `disable_conv_quantization`: A boolean value which when set to `True` disables quantization for all convolutional layers in the model. This is useful as channel and block quantization generally don't work well with convolutional layers (used with INT4, NF4, NVFP4). If you want to disable quantization for specific convolutional layers, use `modules_to_not_convert` instead.
- `algorithm`: The algorithm to use for determining scale, defaults to `"max"`. You can check modelopt documentation for more algorithms and details.
- `forward_loop`: The forward loop function to use for calibrating activation during quantization. If not provided, it relies on static scale values computed using the weights only.
- `kwargs`: A dict of keyword arguments to pass to the underlying quantization method which will be invoked based on `quant_type`.
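
For example, a sketch that combines several of these options (reusing the `pos_embed.proj.weight` example mentioned above) might look like:

```python
import torch
from diffusers import AutoModel, NVIDIAModelOptConfig

# Sketch: FP8 weight quantization while skipping the pos_embed projection weights.
quantization_config = NVIDIAModelOptConfig(
    quant_type="FP8",
    quant_method="modelopt",
    modules_to_not_convert=["pos_embed.proj.weight"],
)
transformer = AutoModel.from_pretrained(
    "Efficient-Large-Model/Sana_600M_1024px_diffusers",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```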

## Supported quantization types

ModelOpt supports weight-only, channel and block quantization int8, fp8, int4, nf4, and nvfp4. The quantization methods are designed to reduce the memory footprint of the model weights while maintaining the performance of the model during inference.

Weight-only quantization stores the model weights in a specific low-bit data type but performs computation with a higher-precision data type, like `bfloat16`. This lowers the memory requirements from model weights but retains the memory peaks for activation computation.

The quantization methods supported are as follows:

| **Quantization Type** | **Supported Schemes** | **Required Kwargs** | **Additional Notes** |
|-----------------------|-----------------------|---------------------|----------------------|
| **INT8** | `int8 weight only`, `int8 channel quantization`, `int8 block quantization` | `quant_type`, `quant_type + channel_quantize`, `quant_type + channel_quantize + block_quantize` | |
| **FP8** | `fp8 weight only`, `fp8 channel quantization`, `fp8 block quantization` | `quant_type`, `quant_type + channel_quantize`, `quant_type + channel_quantize + block_quantize` | |
| **INT4** | `int4 weight only`, `int4 block quantization` | `quant_type`, `quant_type + channel_quantize + block_quantize` | `channel_quantize = -1 is only supported for now`|
| **NF4** | `nf4 weight only`, `nf4 double block quantization` | `quant_type`, `quant_type + channel_quantize + block_quantize + scale_channel_quantize` + `scale_block_quantize` | `channel_quantize = -1 and scale_channel_quantize = -1 are only supported for now` |
| **NVFP4** | `nvfp4 weight only`, `nvfp4 block quantization` | `quant_type`, `quant_type + channel_quantize + block_quantize` | `channel_quantize = -1 is only supported for now`|


Refer to the [official modelopt documentation](https://nvidia.github.io/TensorRT-Model-Optimizer/) for a better understanding of the available quantization methods and the exhaustive list of configuration options available.

## Serializing and Deserializing quantized models

To serialize a quantized model in a given dtype, first load the model with the desired quantization dtype and then save it using the [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained) method.

```python
import torch
from diffusers import AutoModel, NVIDIAModelOptConfig
from modelopt.torch.opt import enable_huggingface_checkpointing

enable_huggingface_checkpointing()

model_id = "Efficient-Large-Model/Sana_600M_1024px_diffusers"
quant_config_fp8 = {"quant_type": "FP8", "quant_method": "modelopt"}
quant_config_fp8 = NVIDIAModelOptConfig(**quant_config_fp8)
model = AutoModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quant_config_fp8,
    torch_dtype=torch.bfloat16,
)
model.save_pretrained('path/to/sana_fp8', safe_serialization=False)
```

To load a serialized quantized model, use the [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) method.

```python
import torch
from diffusers import AutoModel, NVIDIAModelOptConfig, SanaPipeline
from modelopt.torch.opt import enable_huggingface_checkpointing

enable_huggingface_checkpointing()

quantization_config = NVIDIAModelOptConfig(quant_type="FP8", quant_method="modelopt")
transformer = AutoModel.from_pretrained(
    "path/to/sana_fp8",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_600M_1024px_diffusers",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quantization/modelopt.md" />

### GGUF
https://huggingface.co/docs/diffusers/main/quantization/gguf.md

# GGUF

The GGUF file format is typically used to store models for inference with [GGML](https://github.com/ggerganov/ggml) and supports a variety of block-wise quantization options. Diffusers supports loading checkpoints prequantized and saved in the GGUF format via `from_single_file` loading with Model classes. Loading GGUF checkpoints via Pipelines is currently not supported.

The following example will load the [FLUX.1 DEV](https://huggingface.co/black-forest-labs/FLUX.1-dev) transformer model using the GGUF Q2_K quantization variant.

Before starting, please install gguf in your environment:

```shell
pip install -U gguf
```

Since GGUF is a single file format, use `~FromSingleFileMixin.from_single_file` to load the model and pass in the [GGUFQuantizationConfig](/docs/diffusers/main/en/api/quantization#diffusers.GGUFQuantizationConfig).

When using GGUF checkpoints, the quantized weights remain in a low-memory `dtype` (typically `torch.uint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.

The functions used for dynamic dequantization are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the PyTorch ports of the original [`numpy`](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py) implementation by [compilade](https://github.com/compilade).

```python
import torch

from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_path = (
    "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
)
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, generator=torch.manual_seed(0)).images[0]
image.save("flux-gguf.png")
```

## Using Optimized CUDA Kernels with GGUF

Optimized CUDA kernels can accelerate GGUF quantized model inference by approximately 10%. This functionality requires a compatible GPU with `torch.cuda.get_device_capability` greater than 7 and the `kernels` library:

```shell
pip install -U kernels
```

Once installed, set `DIFFUSERS_GGUF_CUDA_KERNELS=true`  to use optimized kernels when available. Note that CUDA kernels may introduce minor numerical differences compared to the original GGUF implementation, potentially causing subtle visual variations in generated images. To disable CUDA kernel usage, set the environment variable `DIFFUSERS_GGUF_CUDA_KERNELS=false`.
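
For example, you can opt in from Python before loading the model (a sketch; the environment variable can equally be exported in your shell):

```python
import os

# Enable the optimized CUDA kernels for GGUF dequantization; set this before loading the model.
os.environ["DIFFUSERS_GGUF_CUDA_KERNELS"] = "true"
```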

## Supported Quantization Types

- BF16
- Q4_0
- Q4_1
- Q5_0
- Q5_1
- Q8_0
- Q2_K
- Q3_K
- Q4_K
- Q5_K
- Q6_K

## Convert to GGUF

Use the Space below to convert a Diffusers checkpoint into the GGUF format for inference and run the conversion:

<iframe
	src="https://diffusers-internal-dev-diffusers-to-gguf.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>


```py
import torch

from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_path = (
    "https://huggingface.co/sayakpaul/different-lora-from-civitai/blob/main/flux_dev_diffusers-q4_0.gguf"
)
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    config="black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, generator=torch.manual_seed(0)).images[0]
image.save("flux-gguf.png")
```

When using Diffusers-format GGUF checkpoints, you must provide the model `config` path. If the
model config resides in a `subfolder`, that needs to be specified, too.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quantization/gguf.md" />

### torchao
https://huggingface.co/docs/diffusers/main/quantization/torchao.md

# torchao

[torchao](https://github.com/pytorch/ao) provides high-performance dtypes and optimizations based on quantization and sparsity for inference and training PyTorch models. It is supported for any model in any modality, as long as it supports loading with [Accelerate](https://hf.co/docs/accelerate/index) and contains `torch.nn.Linear` layers.

Make sure PyTorch 2.5+ and torchao are installed with the command below.

```bash
uv pip install -U torch torchao
```

Each quantization dtype is available as a separate instance of a [AOBaseConfig](https://docs.pytorch.org/ao/main/api_ref_quantization.html#inference-apis-for-quantize) class. This provides more flexible configuration options by exposing more available arguments.

Pass the `AOBaseConfig` of a quantization dtype, like [Int4WeightOnlyConfig](https://docs.pytorch.org/ao/main/generated/torchao.quantization.Int4WeightOnlyConfig) to [TorchAoConfig](/docs/diffusers/main/en/api/quantization#diffusers.TorchAoConfig) in [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained).

```py
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConfig
from torchao.quantization import Int8WeightOnlyConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={"transformer": TorchAoConfig(Int8WeightOnlyConfig(group_size=128)))}
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)
```

For simple use cases, you could also provide a string identifier in `TorchAoConfig` as shown below.

```py
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={"transformer": TorchAoConfig("int8wo")}
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)
```

## torch.compile

torchao supports [torch.compile](../optimization/fp16#torchcompile) which can speed up inference with one line of code.

```python
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={"transformer": TorchAoConfig(Int4WeightOnlyConfig(group_size=128)))}
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)

pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
```

Refer to this [table](https://github.com/huggingface/diffusers/pull/10009#issue-2688781450) for inference speed and memory usage benchmarks with Flux and CogVideoX. More benchmarks on various hardware are also available in the torchao [repository](https://github.com/pytorch/ao/tree/main/torchao/quantization#benchmarks).

> [!TIP]
> The FP8 post-training quantization schemes in torchao are effective for GPUs with compute capability of at least 8.9 (RTX-4090, Hopper, etc.). FP8 often provides the best speed, memory, and quality trade-off when generating images and videos. We recommend combining FP8 and torch.compile if your GPU is compatible.
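As a minimal sketch of that recommendation, the snippet below combines an FP8 dtype with `torch.compile`. It assumes the `float8dq` string shorthand listed in the table later in this guide; treat the exact dtype choice as illustrative rather than prescriptive.

```py
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConfig

# "float8dq" is one of the FP8 shorthands listed in the supported quantization types table
pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={"transformer": TorchAoConfig("float8dq")}
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)
# compile the quantized transformer for additional speedups
pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
```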

## autoquant

torchao provides [autoquant](https://docs.pytorch.org/ao/stable/generated/torchao.quantization.autoquant.html#torchao.quantization.autoquant), an automatic quantization API. Autoquantization chooses the best quantization strategy by comparing the performance of each strategy on chosen input types and shapes. This is only supported in Diffusers for individual models at the moment.

```py
import torch
from diffusers import DiffusionPipeline
from torchao.quantization import autoquant

# Load the pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)

# replace the transformer with its autoquantized version so the pipeline uses it
pipeline.transformer = autoquant(pipeline.transformer)
```

## Supported quantization types

torchao supports weight-only quantization as well as combined weight and dynamic-activation quantization for int8, float3-float8, and uint1-uint7.

Weight-only quantization stores the model weights in a specific low-bit data type but performs computation with a higher-precision data type, like `bfloat16`. This lowers the memory requirements from model weights but retains the memory peaks for activation computation.

Dynamic activation quantization stores the model weights in a low-bit dtype, while also quantizing the activations on-the-fly to save additional memory. This lowers the memory requirements from model weights, while also lowering the memory overhead from activation computations. However, this may come at a quality tradeoff at times, so it is recommended to test different models thoroughly.

The quantization methods supported are as follows:

| **Category** | **Full Function Names** | **Shorthands** |
|--------------|-------------------------|----------------|
| **Integer quantization** | `int4_weight_only`, `int8_dynamic_activation_int4_weight`, `int8_weight_only`, `int8_dynamic_activation_int8_weight` | `int4wo`, `int4dq`, `int8wo`, `int8dq` |
| **Floating point 8-bit quantization** | `float8_weight_only`, `float8_dynamic_activation_float8_weight`, `float8_static_activation_float8_weight` | `float8wo`, `float8wo_e5m2`, `float8wo_e4m3`, `float8dq`, `float8dq_e4m3`, `float8dq_e4m3_tensor`, `float8dq_e4m3_row` |
| **Floating point X-bit quantization** | `fpx_weight_only` | `fpX_eAwB` where `X` is the number of bits (1-7), `A` is exponent bits, and `B` is mantissa bits. Constraint: `X == A + B + 1` |
| **Unsigned Integer quantization** | `uintx_weight_only` | `uint1wo`, `uint2wo`, `uint3wo`, `uint4wo`, `uint5wo`, `uint6wo`, `uint7wo` |

Some quantization methods are aliases (for example, `int8wo` is the commonly used shorthand for `int8_weight_only`). This allows using the quantization methods described in the torchao docs as-is, while also making it convenient to remember their shorthand notations.
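For example, assuming the alias mapping above, a full method name and its shorthand load the same configuration. The `fp6_e3m2` comment below is a hypothetical instance of the `fpX_eAwB` pattern (6 = 3 + 2 + 1) and is only meant to illustrate the naming scheme.

```py
import torch
from diffusers import AutoModel, TorchAoConfig

# "int8_weight_only" and its shorthand "int8wo" refer to the same quantization method
quantization_config = TorchAoConfig("int8_weight_only")  # equivalent to TorchAoConfig("int8wo")
# dynamic-activation variant: TorchAoConfig("int8dq")
# X-bit floating point pattern, e.g. TorchAoConfig("fp6_e3m2")  # 6 bits = 3 exponent + 2 mantissa + 1 sign

transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```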

Refer to the [official torchao documentation](https://docs.pytorch.org/ao/stable/index.html) for a better understanding of the available quantization methods and the exhaustive list of configuration options available.

## Serializing and Deserializing quantized models

To serialize a quantized model in a given dtype, first load the model with the desired quantization dtype and then save it using the [save_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.save_pretrained) method.

```python
import torch
from diffusers import AutoModel, TorchAoConfig

quantization_config = TorchAoConfig("int8wo")
transformer = AutoModel.from_pretrained(
    "black-forest-labs/Flux.1-Dev",
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("/path/to/flux_int8wo", safe_serialization=False)
```

To load a serialized quantized model, use the [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) method.

```python
import torch
from diffusers import FluxPipeline, AutoModel

transformer = AutoModel.from_pretrained("/path/to/flux_int8wo", torch_dtype=torch.bfloat16, use_safetensors=False)
pipe = FluxPipeline.from_pretrained("black-forest-labs/Flux.1-Dev", transformer=transformer, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("output.png")
```

If you are using `torch<=2.6.0`, some quantization methods, such as `uint4wo`, cannot be loaded directly and may result in an `UnpicklingError` when trying to load the models, but work as expected when saving them. In order to work around this, one can load the state dict manually into the model. Note, however, that this requires using `weights_only=False` in `torch.load`, so it should be run only if the weights were obtained from a trustable source.

```python
import torch
from accelerate import init_empty_weights
from diffusers import FluxPipeline, AutoModel, TorchAoConfig

# Serialize the model
transformer = AutoModel.from_pretrained(
    "black-forest-labs/Flux.1-Dev",
    subfolder="transformer",
    quantization_config=TorchAoConfig("uint4wo"),
    torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("/path/to/flux_uint4wo", safe_serialization=False, max_shard_size="50GB")
# ...

# Load the model
state_dict = torch.load("/path/to/flux_uint4wo/diffusion_pytorch_model.bin", weights_only=False, map_location="cpu")
with init_empty_weights():
    transformer = AutoModel.from_config("/path/to/flux_uint4wo/config.json")
transformer.load_state_dict(state_dict, strict=True, assign=True)
```

> [!TIP]
> The [AutoModel](/docs/diffusers/main/en/api/models/auto_model#diffusers.AutoModel) API is supported for PyTorch >= 2.6 as shown in the examples above.

## Resources

- [TorchAO Quantization API](https://docs.pytorch.org/ao/stable/index.html)
- [Diffusers-TorchAO examples](https://github.com/sayakpaul/diffusers-torchao)


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/quantization/torchao.md" />

### DreamBooth
https://huggingface.co/docs/diffusers/main/using-diffusers/dreambooth.md

# DreamBooth

[DreamBooth](https://huggingface.co/papers/2208.12242) is a method for generating personalized images of a specific instance. It works by fine-tuning the model on 3-5 images of the subject (for example, a cat) that is associated with a unique identifier (`sks cat`). This allows you to use `sks cat` in your prompt to trigger the model to generate images of your cat in different settings, lighting, poses, and styles.

DreamBooth checkpoints are typically a few GBs in size because they contain the full model weights.

Load the DreamBooth checkpoint with [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) and include the unique identifier in the prompt to activate its generation.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "sd-dreambooth-library/herge-style",
    torch_dtype=torch.float16
).to("cuda")
prompt = "A cute sks herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration"
pipeline(prompt).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_dreambooth.png" />
</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/dreambooth.md" />

### IP-Adapter
https://huggingface.co/docs/diffusers/main/using-diffusers/ip_adapter.md

# IP-Adapter

[IP-Adapter](https://huggingface.co/papers/2308.06721) is a lightweight adapter designed to integrate image-based guidance with text-to-image diffusion models. The adapter uses an image encoder to extract image features that are passed to newly added cross-attention layers in the UNet, and only these new layers are fine-tuned. The original UNet and the existing cross-attention layers corresponding to text features are frozen. Decoupling the cross-attention for image and text features enables more fine-grained and controllable generation.

IP-Adapter files are typically ~100MB because they only contain the image embeddings. This means you need to load a model first, and then load the IP-Adapter with [load_ip_adapter()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter).

> [!TIP]
> IP-Adapters are available for many models such as [Flux](../api/pipelines/flux#ip-adapter) and [Stable Diffusion 3](../api/pipelines/stable_diffusion/stable_diffusion_3). The examples in this guide use Stable Diffusion and Stable Diffusion XL.

Use the [set_ip_adapter_scale()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.set_ip_adapter_scale) method to scale the influence of the IP-Adapter during generation. A value of `1.0` means the model is only conditioned on the image prompt, and `0.5` typically produces balanced results between the text and image prompt.

```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name="ip-adapter_sdxl.bin"
)
pipeline.set_ip_adapter_scale(0.8)
```

Pass an image to `ip_adapter_image` along with a text prompt to generate an image.

```py
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")
pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png" width="400" alt="IP-Adapter image"/>
    <figcaption style="text-align: center;">IP-Adapter image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner_2.png" width="400" alt="generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

Take a look at the examples below to learn how to use IP-Adapter for other tasks.

<hfoptions id="usage">
<hfoption id="image-to-image">

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name="ip-adapter_sdxl.bin"
)
pipeline.set_ip_adapter_scale(0.8)

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png")
ip_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_gummy.png")
pipeline(
    prompt="best quality, high quality",
    image=image,
    ip_adapter_image=ip_image,
    strength=0.5,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png" width="300" alt="input image"/>
    <figcaption style="text-align: center;">input image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_gummy.png" width="300" alt="IP-Adapter image"/>
    <figcaption style="text-align: center;">IP-Adapter image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_3.png" width="300" alt="generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

</hfoption>
<hfoption id="inpainting">

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name="ip-adapter_sdxl.bin"
)
pipeline.set_ip_adapter_scale(0.6)

mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_mask.png")
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png")
ip_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_gummy.png")
pipeline(
    prompt="a cute gummy bear waving",
    image=image,
    mask_image=mask_image,
    ip_adapter_image=ip_image,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_bear_1.png" width="300" alt="input image"/>
    <figcaption style="text-align: center;">input image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_gummy.png" width="300" alt="IP-Adapter image"/>
    <figcaption style="text-align: center;">IP-Adapter image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_inpaint.png" width="300" alt="generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

</hfoption>
<hfoption id="video">

The [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) method is useful for reducing memory, and it should be enabled **after** the IP-Adapter is loaded. Otherwise, the IP-Adapter's image encoder is also offloaded to the CPU, which causes an error.

```py
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif
from diffusers.utils import load_image

adapter = MotionAdapter.from_pretrained(
  "guoyww/animatediff-motion-adapter-v1-5-2",
  torch_dtype=torch.float16
)
pipeline = AnimateDiffPipeline.from_pretrained(
  "emilianJR/epiCRealism",
  motion_adapter=adapter,
  torch_dtype=torch.float16
)
scheduler = DDIMScheduler.from_pretrained(
    "emilianJR/epiCRealism",
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)
pipeline.scheduler = scheduler
pipeline.enable_vae_slicing()
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.enable_model_cpu_offload()

ip_adapter_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_inpaint.png")
pipeline(
    prompt="A cute gummy bear waving",
    negative_prompt="bad quality, worse quality, low resolution",
    ip_adapter_image=ip_adapter_image,
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=50,
).frames[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_inpaint.png" width="400" alt="IP-Adapter image"/>
    <figcaption style="text-align: center;">IP-Adapter image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gummy_bear.gif" width="400" alt="generated video"/>
    <figcaption style="text-align: center;">generated video</figcaption>
  </figure>
</div>

</hfoption>
</hfoptions>

## Model variants

There are two variants of IP-Adapter, Plus and FaceID. The Plus variant uses patch embeddings and the ViT-H image encoder. The FaceID variant uses face embeddings generated from InsightFace.

<hfoptions id="ipadapter-variants">
<hfoption id="IP-Adapter Plus">

```py
import torch
from transformers import CLIPVisionModelWithProjection, AutoPipelineForText2Image

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name="ip-adapter-plus_sdxl_vit-h.safetensors"
)
```

</hfoption>
<hfoption id="IP-Adapter FaceID">

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter(
  "h94/IP-Adapter-FaceID",
  subfolder=None,
  weight_name="ip-adapter-faceid_sdxl.bin",
  image_encoder_folder=None
)
```

To use an IP-Adapter FaceID Plus model, also load the CLIP image encoder with [CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection).

```py
import torch
from diffusers import AutoPipelineForText2Image
from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
    torch_dtype=torch.float16,
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter(
  "h94/IP-Adapter-FaceID",
  subfolder=None,
  weight_name="ip-adapter-faceid-plus_sd15.bin"
)
```

</hfoption>
</hfoptions>

## Image embeddings

The `prepare_ip_adapter_image_embeds` method generates image embeddings you can reuse if you're running the pipeline multiple times with the same images. Loading and encoding multiple images each time you use the pipeline is inefficient. It is more efficient to precompute the image embeddings ahead of time, save them to disk, and load them when you need them.

```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.float16
).to("cuda")
# load the IP-Adapter so its image encoder is available to compute embeddings
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name="ip-adapter_sdxl.bin"
)

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")
image_embeds = pipeline.prepare_ip_adapter_image_embeds(
    ip_adapter_image=image,
    ip_adapter_image_embeds=None,
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

torch.save(image_embeds, "image_embeds.ipadpt")
```

Reload the image embeddings by passing them to the `ip_adapter_image_embeds` parameter. Set `image_encoder_folder` to `None` because you don't need the image encoder anymore to generate the image embeddings.

> [!TIP]
> You can also load image embeddings from other sources such as ComfyUI.

```py
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  image_encoder_folder=None,
  weight_name="ip-adapter_sdxl.bin"
)
pipeline.set_ip_adapter_scale(0.8)
image_embeds = torch.load("image_embeds.ipadpt")
generator = torch.Generator(device="cpu").manual_seed(0)
pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image_embeds=image_embeds,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images[0]
```

## Masking

Binary masking enables assigning an IP-Adapter image to a specific area of the output image, making it useful for composing multiple IP-Adapter images. Each IP-Adapter image requires a binary mask.

Load the [IPAdapterMaskProcessor](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.image_processor.IPAdapterMaskProcessor) to preprocess the image masks. For the best results, provide the output `height` and `width` to ensure masks with different aspect ratios are appropriately sized. If the input masks already match the aspect ratio of the generated image, you don't need to set the `height` and `width`.

```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.image_processor import IPAdapterMaskProcessor
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.float16
).to("cuda")

mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")

processor = IPAdapterMaskProcessor()
masks = processor.preprocess([mask1, mask2], height=1024, width=1024)
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask1.png" width="200" alt="mask 1"/>
    <figcaption style="text-align: center;">mask 1</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask2.png" width="200" alt="mask 2"/>
    <figcaption style="text-align: center;">mask 2</figcaption>
  </figure>
</div>

Provide both the IP-Adapter images and their scales as a list. Pass the preprocessed masks to `cross_attention_kwargs` in the pipeline.

```py
face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png")

pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"]
)
pipeline.set_ip_adapter_scale([[0.7, 0.7]])

ip_images = [[face_image1, face_image2]]
masks = [masks.reshape(1, masks.shape[0], masks.shape[2], masks.shape[3])]

pipeline(
  prompt="2 girls",
  ip_adapter_image=ip_images,
  negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
  cross_attention_kwargs={"ip_adapter_masks": masks}
).images[0]
```

<div style="display: flex; flex-direction: column; gap: 10px;">
  <div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
    <figure>
      <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl1.png" width="400" alt="IP-Adapter image 1"/>
      <figcaption style="text-align: center;">IP-Adapter image 1</figcaption>
    </figure>
    <figure>
      <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl2.png" width="400" alt="IP-Adapter image 2"/>
      <figcaption style="text-align: center;">IP-Adapter image 2</figcaption>
    </figure>
  </div>
  <div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
    <figure>
      <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_attention_mask_result_seed_0.png" width="400" alt="Generated image with mask"/>
      <figcaption style="text-align: center;">generated with mask</figcaption>
    </figure>
    <figure>
      <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_no_attention_mask_result_seed_0.png" width="400" alt="Generated image without mask"/>
      <figcaption style="text-align: center;">generated without mask</figcaption>
    </figure>
  </div>
</div>

## Applications

The section below covers some popular applications of IP-Adapter.

### Face models

Face generation and preserving its details can be challenging. To help generate more accurate faces, there are checkpoints specifically conditioned on images of cropped faces. You can find the face models in the [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter) repository or the [h94/IP-Adapter-FaceID](https://huggingface.co/h94/IP-Adapter-FaceID) repository. The FaceID checkpoints use the FaceID embeddings from [InsightFace](https://github.com/deepinsight/insightface) instead of CLIP image embeddings.

We recommend using the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) or [EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) for face models.

<hfoptions id="usage">
<hfoption id="h94/IP-Adapter">

```py
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image

pipeline = StableDiffusionPipeline.from_pretrained(
  "stable-diffusion-v1-5/stable-diffusion-v1-5",
  torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="models", 
  weight_name="ip-adapter-full-face_sd15.bin"
)

pipeline.set_ip_adapter_scale(0.5)
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png")

pipeline(
    prompt="A photo of Einstein as a chef, wearing an apron, cooking in a French restaurant",
    ip_adapter_image=image,
    negative_prompt="lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png" width="400" alt="IP-Adapter image"/>
    <figcaption style="text-align: center;">IP-Adapter image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein.png" width="400" alt="generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

</hfoption>
<hfoption id="h94/IP-Adapter-FaceID">

For FaceID models, extract the face embeddings and pass them as a list of tensors to `ip_adapter_image_embeds`.

```py
# pip install insightface
import cv2
import numpy as np
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image
from insightface.app import FaceAnalysis

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter(
  "h94/IP-Adapter-FaceID",
  subfolder=None,
  weight_name="ip-adapter-faceid_sd15.bin",
  image_encoder_folder=None
)
pipeline.set_ip_adapter_scale(0.6)

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")

ref_images_embeds = []
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
image = torch.from_numpy(faces[0].normed_embedding)
ref_images_embeds.append(image.unsqueeze(0))
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda")

pipeline(
    prompt="A photo of a girl",
    ip_adapter_image_embeds=[id_embeds],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
).images[0]
```

The IP-Adapter FaceID Plus and Plus v2 models require CLIP image embeddings. Prepare the face embeddings and then extract and pass the CLIP embeddings to the hidden image projection layers.

```py
# `ip_adapter_images` is a list of face images and `num_images` is the number of images per prompt
clip_embeds = pipeline.prepare_ip_adapter_image_embeds(
  [ip_adapter_images], None, torch.device("cuda"), num_images, True)[0]

pipeline.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
# set to True if using IP-Adapter FaceID Plus v2
pipeline.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False
```

</hfoption>
</hfoptions>

### Multiple IP-Adapters

Combine multiple IP-Adapters to generate images in more diverse styles. For example, you can use IP-Adapter Face to generate consistent faces and characters and IP-Adapter Plus to generate those faces in specific styles.

Load an image encoder with [CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPVisionModelWithProjection).

```py
import torch
from diffusers import AutoPipelineForText2Image, DDIMScheduler
from transformers import CLIPVisionModelWithProjection
from diffusers.utils import load_image

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)
```

Load a base model, scheduler and the following IP-Adapters.

- [ip-adapter-plus_sdxl_vit-h](https://huggingface.co/h94/IP-Adapter#ip-adapter-for-sdxl-10) uses patch embeddings and a ViT-H image encoder
- [ip-adapter-plus-face_sdxl_vit-h](https://huggingface.co/h94/IP-Adapter#ip-adapter-for-sdxl-10) uses patch embeddings and a ViT-H image encoder but it is conditioned on images of cropped faces

```py
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    image_encoder=image_encoder,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"]
)
pipeline.set_ip_adapter_scale([0.7, 0.3])
# enable_model_cpu_offload to reduce memory usage
pipeline.enable_model_cpu_offload()
```

Load an image and a folder containing images of a certain style to apply.

```py
face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png")
style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy"
style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png" width="400" alt="Face image"/>
    <figcaption style="text-align: center;">face image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_style_grid.png" width="400" alt="Style images"/>
    <figcaption style="text-align: center;">style images</figcaption>
  </figure>
</div>

Pass style and face images as a list to `ip_adapter_image`.

```py
generator = torch.Generator(device="cpu").manual_seed(0)

pipeline(
    prompt="wonderwoman",
    ip_adapter_image=[style_images, face_image],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
).images[0]
```

<div style="display: flex; justify-content: center;">
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_multi_out.png" width="400" alt="Generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

### Instant generation

[Latent Consistency Models (LCM)](../api/pipelines/latent_consistency_models) can generate images in 4 steps or less, unlike other diffusion models which require many more steps, making generation feel "instantaneous". IP-Adapters are compatible with LCM models to instantly generate images.

Load the IP-Adapter weights and load the LoRA weights with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights).

```py
import torch
from diffusers import DiffusionPipeline, LCMScheduler
from diffusers.utils import load_image

pipeline = DiffusionPipeline.from_pretrained(
  "sd-dreambooth-library/herge-style",
  torch_dtype=torch.float16
)

pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="models",
  weight_name="ip-adapter_sd15.bin"
)
pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
# enable_model_cpu_offload to reduce memory usage
pipeline.enable_model_cpu_offload()
```

Try using a lower IP-Adapter scale to condition generation more on the style you want to apply, and remember to use the special token in your prompt to trigger its generation.

```py
pipeline.set_ip_adapter_scale(0.4)

prompt = "herge_style woman in armor, best quality, high quality"

ip_adapter_image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png")
pipeline(
    prompt=prompt,
    ip_adapter_image=ip_adapter_image,
    num_inference_steps=4,
    guidance_scale=1,
).images[0]
```

<div style="display: flex; justify-content: center;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_herge.png" width="400" alt="Generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

### Structural control

For structural control, combine IP-Adapter with [ControlNet](../api/pipelines/controlnet) conditioned on depth maps, edge maps, pose estimations, and more.

The example below loads a [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel) checkpoint conditioned on depth maps and combines it with an IP-Adapter.

```py
import torch
from diffusers.utils import load_image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
  "lllyasviel/control_v11f1p_sd15_depth",
  torch_dtype=torch.float16
)

pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="models",
  weight_name="ip-adapter_sd15.bin"
)
```

Pass the depth map and IP-Adapter image to the pipeline.

```py
ip_adapter_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png")
depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png")

pipeline(
  prompt="best quality, high quality",
  image=depth_map,
  ip_adapter_image=ip_adapter_image,
  negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png" width="300" alt="IP-Adapter image"/>
    <figcaption style="text-align: center;">IP-Adapter image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png" width="300" alt="Depth map"/>
    <figcaption style="text-align: center;">depth map</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ipa-controlnet-out.png" width="300" alt="Generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

### Style and layout control

For style and layout control, combine IP-Adapter with [InstantStyle](https://huggingface.co/papers/2404.02733). InstantStyle separates *style* (color, texture, overall feel) and *content* from each other. It only applies the style in style-specific blocks of the model to prevent it from distorting other areas of an image. This generates images with stronger and more consistent styles and better control over the layout.

The IP-Adapter is only activated for specific parts of the model. Use the [set_ip_adapter_scale()](/docs/diffusers/main/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.set_ip_adapter_scale) method to scale the influence of the IP-Adapter in different layers. The example below activates the IP-Adapter in the second layer of the model's down `block_2` and up `block_0`. Down `block_2` is where the IP-Adapter injects layout information and up `block_0` is where style is injected.

```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter(
  "h94/IP-Adapter",
  subfolder="sdxl_models",
  weight_name="ip-adapter_sdxl.bin"
)

scale = {
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)
```

Load the style image and generate an image.

```py
style_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg")

pipeline(
    prompt="a cat, masterpiece, best quality, high quality",
    ip_adapter_image=style_image,
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    guidance_scale=5,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" width="400" alt="Style image"/>
    <figcaption style="text-align: center;">style image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png" width="400" alt="Generated image"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

You can also insert the IP-Adapter in all the model layers, but this tends to generate images that focus more on the image prompt and may reduce the diversity of generated images. To avoid this, only activate the IP-Adapter in up `block_0`, the style layer.

> [!TIP]
> You don't need to specify all the layers in the `scale` dictionary. Layers not included are set to 0, which means the IP-Adapter is disabled.

```py
scale = {
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)

pipeline(
    prompt="a cat, masterpiece, best quality, high quality",
    ip_adapter_image=style_image,
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    guidance_scale=5,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_only.png" width="400" alt="Generated image (style only)"/>
    <figcaption style="text-align: center;">style-layer generated image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_ip_adapter.png" width="400" alt="Generated image (IP-Adapter only)"/>
    <figcaption style="text-align: center;">all layers generated image</figcaption>
  </figure>
</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/ip_adapter.md" />

### Pipeline callbacks
https://huggingface.co/docs/diffusers/main/using-diffusers/callback.md

# Pipeline callbacks

A callback is a function that modifies [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) behavior and is executed at the end of each denoising step. The changes are propagated to subsequent steps in the denoising process. Callbacks are useful for adjusting pipeline attributes or tensor variables to support new features without rewriting the underlying pipeline code.

Diffusers provides several callbacks in the pipeline [overview](../api/pipelines/overview#callbacks).

To enable a callback, configure when it is executed during denoising with one of the following arguments.

- `cutoff_step_ratio` specifies when a callback is activated as a percentage of the total denoising steps.
- `cutoff_step_index` specifies the exact step number at which a callback is activated.

The example below uses `cutoff_step_ratio=0.4`, which means the callback is activated once denoising reaches 40% of the total inference steps. [SDXLCFGCutoffCallback](/docs/diffusers/main/en/api/pipelines/overview#diffusers.callbacks.SDXLCFGCutoffCallback) disables classifier-free guidance (CFG) after a certain number of steps, which can help save compute without significantly affecting performance.

Define a callback with either of the `cutoff` arguments and pass it to the `callback_on_step_end` parameter in the pipeline.

```py
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionXLPipeline
from diffusers.callbacks import SDXLCFGCutoffCallback

callback = SDXLCFGCutoffCallback(cutoff_step_ratio=0.4)
# if using cutoff_step_index
# callback = SDXLCFGCutoffCallback(cutoff_step_ratio=None, cutoff_step_index=10)

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, use_karras_sigmas=True)

prompt = "a sports car at the road, best quality, high quality, high detail, 8k resolution"
output = pipeline(
    prompt=prompt,
    negative_prompt="",
    guidance_scale=6.5,
    num_inference_steps=25,
    generator=generator,
    callback_on_step_end=callback,
)
```

If you want to add a new official callback, feel free to open a [feature request](https://github.com/huggingface/diffusers/issues/new/choose) or [submit a PR](https://huggingface.co/docs/diffusers/main/en/conceptual/contribution#how-to-open-a-pr). Otherwise, you can also create your own callback as shown below.

## Early stopping

Early stopping is useful if you aren't happy with the intermediate results during generation. This callback sets a hardcoded stop point after which the pipeline terminates by setting the `_interrupt` attribute to `True`.

```py
from diffusers import StableDiffusionPipeline

def interrupt_callback(pipeline, i, t, callback_kwargs):
    stop_idx = 10
    if i == stop_idx:
        pipeline._interrupt = True

    return callback_kwargs

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"
)
num_inference_steps = 50

pipeline(
    "A photo of a cat",
    num_inference_steps=num_inference_steps,
    callback_on_step_end=interrupt_callback,
)
```

## Display intermediate images

Visualizing the intermediate images is useful for progress monitoring and assessing the quality of the generated content. This callback decodes the latent tensors at each step and converts them to images.

[Convert](https://huggingface.co/blog/TimothyAlexisVass/explaining-the-sdxl-latent-space) the Stable Diffusion XL latents (4 channels) to RGB tensors (3 channels).

```py
import torch
from PIL import Image

def latents_to_rgb(latents):
    weights = (
        (60, -60, 25, -70),
        (60,  -5, 15, -50),
        (60,  10, -5, -35),
    )

    weights_tensor = torch.t(torch.tensor(weights, dtype=latents.dtype).to(latents.device))
    biases_tensor = torch.tensor((150, 140, 130), dtype=latents.dtype).to(latents.device)
    rgb_tensor = torch.einsum("...lxy,lr -> ...rxy", latents, weights_tensor) + biases_tensor.unsqueeze(-1).unsqueeze(-1)
    image_array = rgb_tensor.clamp(0, 255).byte().cpu().numpy().transpose(1, 2, 0)

    return Image.fromarray(image_array)
```

Extract the latents and convert the first image in the batch to RGB. Save the image as a PNG file with the step number.

```py
def decode_tensors(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]

    image = latents_to_rgb(latents[0])
    image.save(f"{step}.png")

    return callback_kwargs
```

Use the `callback_on_step_end_tensor_inputs` parameter to specify what input type to modify, which in this case are the latents.

```py
import torch
from PIL import Image
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)

image = pipeline(
    prompt="A croissant shaped like a cute bear.",
    negative_prompt="Deformed, ugly, bad anatomy",
    callback_on_step_end=decode_tensors,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/callback.md" />

### Schedulers
https://huggingface.co/docs/diffusers/main/using-diffusers/schedulers.md

# Schedulers

A scheduler is an algorithm that provides instructions to the denoising process such as how much noise to remove at a certain step. It takes the model prediction from step *t* and applies an update for how to compute the next sample at step *t-1*. Different schedulers produce different results; some are faster while others are more accurate.

Diffusers supports many schedulers and allows you to modify their timestep schedules, timestep spacing, and more, to generate high-quality images in fewer steps.

This guide will show you how to load and customize schedulers.

## Loading schedulers

Schedulers don't have any trainable parameters and are defined in a configuration file. Access the `.scheduler` attribute of a pipeline to view the configuration.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, device_map="cuda"
)
pipeline.scheduler
```

Load a different scheduler with [from_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.from_pretrained) and specify the `subfolder` argument to load the configuration file from the correct subfolder of the pipeline repository. Pass the new scheduler to the existing pipeline.

```py
from diffusers import DPMSolverMultistepScheduler

dpm = DPMSolverMultistepScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler"
)
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    scheduler=dpm,
    torch_dtype=torch.float16,
    device_map="cuda"
)
pipeline.scheduler
```

## Timestep schedules

The timestep or noise schedule decides how noise is distributed over the denoising process. The schedule can be linear or more concentrated toward the beginning or end. It is a precomputed sequence of noise levels generated from the scheduler's default configuration, but it can be customized to use other schedules.

> [!TIP]
> The `timesteps` argument is only supported for a select list of schedulers and pipelines. Feel free to open a feature request if you want to extend these parameters to a scheduler and pipeline that does not currently support it!

The example below uses the [Align Your Steps (AYS)](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/) schedule which can generate a high-quality image in 10 steps, significantly speeding up generation and reducing computation time.

Import the schedule and pass it to the `timesteps` argument in the pipeline.

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.schedulers import AysSchedules

sampling_schedule = AysSchedules["StableDiffusionXLTimesteps"]
print(sampling_schedule)
"[999, 845, 730, 587, 443, 310, 193, 116, 53, 13]"

pipeline = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
  pipeline.scheduler.config, algorithm_type="sde-dpmsolver++"
)

prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up"
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    timesteps=sampling_schedule,
).images[0]
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ays.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">AYS timestep schedule 10 steps</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/10.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Linearly-spaced timestep schedule 10 steps</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/25.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Linearly-spaced timestep schedule 25 steps</figcaption>
  </div>
</div>

### Rescaling schedules

Denoising should begin with pure noise, where the signal-to-noise ratio (SNR) is zero. However, some models don't actually start from pure noise, which makes it difficult to generate images at brightness extremes.

> [!TIP]
> Train your own model with `v_prediction` by adding the `--prediction_type="v_prediction"` flag to your training script. You can also [search](https://huggingface.co/search/full-text?q=v_prediction&type=model) for existing models trained with `v_prediction`.

To fix this, a model must be trained with `v_prediction`. If a model is trained with `v_prediction`, then enable the following arguments in the scheduler.

- Set `rescale_betas_zero_snr=True` to rescale the noise schedule to the very last timestep with exactly zero SNR
- Set `timestep_spacing="trailing"` to force sampling from the last timestep with pure noise

```py
from diffusers import DiffusionPipeline, DDIMScheduler

pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", device_map="cuda")

pipeline.scheduler = DDIMScheduler.from_config(
    pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
```

Set `guidance_rescale` in the pipeline to avoid overexposed images. A lower value increases brightness, but some details may appear washed out.

```py
prompt = """
cinematic photo of a snowy mountain at night with the northern lights aurora borealis
overhead, 35mm photograph, film, professional, 4k, highly detailed
"""
image = pipeline(prompt, guidance_rescale=0.7).images[0]
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/no-zero-snr.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">default Stable Diffusion v2-1 image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/zero-snr.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">image with zero SNR and trailing timestep spacing enabled</figcaption>
  </div>
</div>

## Timestep spacing

Timestep spacing refers to the specific steps *t* to sample from the schedule. Diffusers provides three spacing types as shown below.

| spacing strategy | spacing calculation | example timesteps |
|---|---|---|
| `leading` | evenly spaced steps | `[900, 800, 700, ..., 100, 0]` |
| `linspace` | include first and last steps and evenly divide remaining intermediate steps | `[1000, 888.89, 777.78, ..., 111.11, 0]` |
| `trailing` | include last step and evenly divide remaining intermediate steps beginning from the end | `[999, 899, 799, 699, 599, 499, 399, 299, 199, 99]` |
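
The snippet below is a simplified sketch of how the example timesteps in the table above can be computed for 10 inference steps out of 1000 training timesteps; the actual scheduler implementations differ in details such as offsets and rounding.

```py
import numpy as np

num_train_timesteps, num_inference_steps = 1000, 10
step = num_train_timesteps // num_inference_steps

leading = np.arange(0, num_train_timesteps, step)[::-1]              # [900, 800, ..., 100, 0]
linspace = np.linspace(num_train_timesteps, 0, num_inference_steps)  # [1000, 888.89, ..., 111.11, 0]
trailing = np.arange(num_train_timesteps, 0, -step) - 1              # [999, 899, ..., 199, 99]
```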

Pass the spacing strategy to the `timestep_spacing` argument in the scheduler.

> [!TIP]
> The `trailing` strategy typically produces higher quality images with more details with fewer steps, but the difference in quality is not as obvious for more standard step values.

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
  pipeline.scheduler.config, timestep_spacing="trailing"
)

prompt = "A cinematic shot of a cute little black cat sitting on a pumpkin at night"
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    num_inference_steps=5,
).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/trailing_spacing.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">trailing spacing after 5 steps</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/leading_spacing.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">leading spacing after 5 steps</figcaption>
  </div>
</div>

## Sigmas

Sigmas are a measure of how noisy a sample is at a certain step, as defined by the schedule. When using custom `sigmas`, the `timesteps` are calculated from these values instead of the default scheduler configuration.

> [!TIP]
> The `sigmas` argument is only supported for a select list of schedulers and pipelines. Feel free to open a feature request if you want to extend these parameters to a scheduler and pipeline that does not currently support it!

Pass the custom sigmas to the `sigmas` argument in the pipeline. The example below uses the [sigmas](https://github.com/huggingface/diffusers/blob/6529ee67ec02fcf58d2fd9242164ea002b351d75/src/diffusers/schedulers/scheduling_utils.py#L55) from the 10-step AYS schedule.

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
  pipeline.scheduler.config, algorithm_type="sde-dpmsolver++"
)

sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.0]
prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up"
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    sigmas=sigmas,
).images[0]
```

### Karras sigmas

[Karras sigmas](https://huggingface.co/papers/2206.00364) resamples the noise schedule for more efficient sampling by clustering sigmas more densely in the middle of the sequence where structure reconstruction is critical, while using fewer sigmas at the beginning and end where noise changes have less impact. This can increase the level of details in a generated image.

Set `use_karras_sigmas=True` in the scheduler to enable it.

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
  pipeline.scheduler.config,
  algorithm_type="sde-dpmsolver++",
  use_karras_sigmas=True,
)

prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up"
image = pipeline(
    prompt=prompt,
    negative_prompt="",
).images[0]
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/karras_sigmas_true.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Karras sigmas enabled</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/karras_sigmas_false.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Karras sigmas disabled</figcaption>
  </div>
</div>

Refer to the scheduler API [overview](../api/schedulers/overview) for a list of schedulers that support Karras sigmas. It should only be used for models trained with Karras sigmas.

## Choosing a scheduler

It's important to try different schedulers to find the best one for your use case. Here are a few recommendations to help you get started.

- DPM++ 2M SDE Karras is generally a good all-purpose option (see the configuration sketch after this list).
- [TCDScheduler](/docs/diffusers/main/en/api/schedulers/tcd#diffusers.TCDScheduler) works well for distilled models.
- [FlowMatchEulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler) and [FlowMatchHeunDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/flow_match_heun_discrete#diffusers.FlowMatchHeunDiscreteScheduler) for FlowMatch models.
- [EulerDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) or [EulerAncestralDiscreteScheduler](/docs/diffusers/main/en/api/schedulers/euler_ancestral#diffusers.EulerAncestralDiscreteScheduler) for generating anime style images.
- DPM++ 2M paired with [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler) on SDXL for generating realistic images.
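
As an example of how these names map to code, "DPM++ 2M SDE Karras" corresponds to `DPMSolverMultistepScheduler` configured with the SDE algorithm and Karras sigmas. The sketch below is only an illustration and reuses the checkpoint from earlier in this guide.

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)
# "DPM++ 2M SDE Karras" = multistep DPM-Solver++ with the SDE variant and Karras sigmas
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)
```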

## Resources

- Read the [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper for more details about rescaling the noise schedule to enforce zero SNR.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/schedulers.md" />

### Prompting
https://huggingface.co/docs/diffusers/main/using-diffusers/weighted_prompts.md

# Prompting

A prompt describes what a model should generate. Good prompts are detailed, specific, and structured, and they produce better images and videos.

This guide shows you how to write effective prompts and introduces techniques that make them stronger.

## Writing good prompts

Every effective prompt needs three core elements.

1. <span class="underline decoration-sky-500 decoration-2 underline-offset-4">Subject</span> - what you want to generate. Start your prompt here.
2. <span class="underline decoration-pink-500 decoration-2 underline-offset-4">Style</span> - the medium or aesthetic. How should it look?
3. <span class="underline decoration-green-500 decoration-2 underline-offset-4">Context</span> - details about actions, setting, and mood.

Use these elements as a structured narrative, not a keyword list. Modern models understand language better than keyword matching. Start simple, then add details.

Context is especially important for creating better prompts. Try adding lighting, artistic details, and mood.

<div class="flex gap-4">
  <div class="flex-1 text-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ok-prompt.png" class="w-full h-auto object-cover rounded-lg">
    <figcaption class="mt-2 text-sm text-gray-500">A <span class="underline decoration-sky-500 decoration-2 underline-offset-1">cute cat</span> <span class="underline decoration-pink-500 decoration-2 underline-offset-1">lounges on a leaf in a pool during a peaceful summer afternoon</span>, in <span class="underline decoration-green-500 decoration-2 underline-offset-1">lofi art style, illustration</span>.</figcaption>
  </div>
  <div class="flex-1 text-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/better-prompt.png" class="w-full h-auto object-cover rounded-lg"/>
    <figcaption class="mt-2 text-sm text-gray-500">A cute cat lounges on a floating leaf in a sparkling pool during a peaceful summer afternoon. Clear reflections ripple across the water, with sunlight casting soft, smooth highlights. The illustration is detailed and polished, with elegant lines and harmonious colors, evoking a relaxing, serene, and whimsical lofi mood, anime-inspired and visually comforting.</figcaption>
  </div>
</div>

Be specific and add context. Use photography terms like lens type, focal length, camera angles, and depth of field.

> [!TIP]
> Try a [prompt enhancer](https://huggingface.co/models?sort=downloads&search=prompt+enhancer) to help improve your prompt structure.

## Prompt weighting

Prompt weighting makes some words stronger and others weaker. It scales the text embeddings so you can control how much influence each concept has.

Diffusers handles this through the `prompt_embeds` and `pooled_prompt_embeds` arguments, which take scaled text embedding vectors. Use the [sd_embed](https://github.com/xhinker/sd_embed) library to generate these embeddings. It also supports longer prompts.

> [!NOTE]
> The sd_embed library only supports Stable Diffusion, Stable Diffusion XL, Stable Diffusion 3, Stable Cascade, and Flux. Prompt weighting doesn't necessarily help for newer models like Flux, which already have very good prompt adherence.

```py
!uv pip install git+https://github.com/xhinker/sd_embed.git@main
```

Format weighted text with numerical multipliers or parentheses. More parentheses mean stronger weighting.

| format | multiplier |
|---|---|
| `(cat)` | increase by 1.1x |
| `((cat))` | increase by 1.21x |
| `(cat:1.5)` | increase by 1.5x |
| `(cat:0.5)` | decrease to 0.5x |

Create a weighted prompt and pass it to [get_weighted_text_embeddings_sdxl](https://github.com/xhinker/sd_embed/blob/4a47f71150a22942fa606fb741a1c971d95ba56f/src/sd_embed/embedding_funcs.py#L405) to generate embeddings.

> [!TIP]
> You can also generate negative embeddings and pass them to `negative_prompt_embeds` and `negative_pooled_prompt_embeds`, as shown in the sketch after the result image below.

```py
import torch
from diffusers import DiffusionPipeline
from sd_embed.embedding_funcs import get_weighted_text_embeddings_sdxl

pipeline = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = """
A (cute cat:1.4) lounges on a (floating leaf:1.2) in a (sparkling pool:1.1) during a peaceful summer afternoon.
Gentle ripples reflect pastel skies, while (sunlight:1.1) casts soft highlights. The illustration is smooth and polished
with elegant, sketchy lines and subtle gradients, evoking a ((whimsical, nostalgic, dreamy lofi atmosphere:2.0)), 
(anime-inspired:1.6), calming, comforting, and visually serene.
"""

prompt_embeds, _, pooled_prompt_embeds, *_ = get_weighted_text_embeddings_sdxl(pipeline, prompt=prompt)
```

Pass the embeddings to `prompt_embeds` and `pooled_prompt_embeds` to generate your image.

```py
image = pipeline(prompt_embeds=prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds).images[0]
```

<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/prompt-embed-sdxl.png"/>
</div>
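
As the tip above mentions, the same sd_embed call also returns negative embeddings when you give it a negative prompt. Here's a minimal sketch that reuses the pipeline and prompt from above; the `neg_prompt` keyword follows sd_embed's examples, so treat it as an assumption if your installed version differs.

```py
negative_prompt = "(low quality, blurry:1.3), watermark"

(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = get_weighted_text_embeddings_sdxl(pipeline, prompt=prompt, neg_prompt=negative_prompt)

image = pipeline(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
).images[0]
```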

Prompt weighting works with [Textual inversion](./textual_inversion_inference) and [DreamBooth](./dreambooth) adapters too.
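
For example, you could load a DreamBooth-style LoRA and then up-weight its trigger word. The sketch below reuses the SDXL pipeline and sd_embed helper from above; the LoRA repository id and the `sks` trigger word are placeholders, not real checkpoints.

```py
# hypothetical DreamBooth LoRA and trigger word, shown only to illustrate the workflow
pipeline.load_lora_weights("your-username/your-dreambooth-lora")

prompt = "A (sks cat:1.4) lounges on a floating leaf in a sparkling pool, lofi illustration"
prompt_embeds, _, pooled_prompt_embeds, *_ = get_weighted_text_embeddings_sdxl(pipeline, prompt=prompt)

image = pipeline(prompt_embeds=prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds).images[0]
```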

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/weighted_prompts.md" />

### Inpainting
https://huggingface.co/docs/diffusers/main/using-diffusers/inpaint.md

# Inpainting


Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt.

With 🤗 Diffusers, here is how you can do inpainting:

1. Load an inpainting checkpoint with the [AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting) class. This'll automatically detect the appropriate pipeline class to load based on the checkpoint:

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

> [!TIP]
> You'll notice throughout the guide, we use [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) and [enable_xformers_memory_efficient_attention()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_xformers_memory_efficient_attention), to save memory and increase inference speed. If you're using PyTorch 2.0, it's not necessary to call [enable_xformers_memory_efficient_attention()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_xformers_memory_efficient_attention) on your pipeline because it'll already be using PyTorch 2.0's native [scaled-dot product attention](../optimization/fp16#scaled-dot-product-attention).

2. Load the base and mask images:

```py
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")
```

3. Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images:

```py
prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k"
negative_prompt = "bad anatomy, deformed, ugly, disfigured"
image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">base image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">mask image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-cat.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Create a mask image

Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you'll need to create a mask image for it. Use the Space below to easily create a mask image.

Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you're done, click **Run** to generate and download the mask image.

<iframe
  src="https://stevhliu-inpaint-mask-maker.hf.space"
  frameborder="0"
  width="850"
  height="450"
></iframe>
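
If you prefer to create the mask in code, the sketch below builds one with Pillow; the rectangle coordinates are placeholders you'd adapt to the region you want to inpaint.

```py
from PIL import Image, ImageDraw
from diffusers.utils import load_image

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")

# start from an all-black mask (keep everything) and paint the area to inpaint in white
mask_image = Image.new("L", init_image.size, 0)
draw = ImageDraw.Draw(mask_image)
draw.rectangle((200, 80, 450, 350), fill=255)  # placeholder coordinates
mask_image.save("mask.png")
```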

### Mask blur

The `VaeImageProcessor.blur()` method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the `blur_factor` parameter. Increasing the `blur_factor` increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. A low or zero `blur_factor` preserves the sharper edges of the mask.

To use this, create a blurred mask with the image processor.

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
from PIL import Image

pipeline = AutoPipelineForInpainting.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda')

mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png")
blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33)
blurred_mask
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">mask with no blur</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/mask_blurred.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">mask with blur applied</figcaption>
  </div>
</div>
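
Then pass the blurred mask as the `mask_image` when calling the pipeline. The sketch below reuses the pipeline and blurred mask from above; the seashore base image comes from the `padding_mask_crop` example later in this guide, and the prompt is only an illustration.

```py
init_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png")

image = pipeline(
    prompt="a small boat on the shore",
    image=init_image,
    mask_image=blurred_mask,
).images[0]
image
```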

## Popular models

[Stable Diffusion Inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting), [Stable Diffusion XL (SDXL) Inpainting](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1), and [Kandinsky 2.2 Inpainting](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder-inpaint) are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images.

### Stable Diffusion Inpainting

Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images for inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you'll need to pass a prompt and the base and mask images to the pipeline:

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

### Stable Diffusion XL (SDXL) Inpainting

SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the [SDXL](sdxl) guide for a more comprehensive overview of how to use SDXL and configure its parameters.

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

### Kandinsky 2.2 Inpainting

The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the [AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting) class which uses the [KandinskyV22InpaintCombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22InpaintCombinedPipeline) under the hood.

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">base image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-sdv1.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Stable Diffusion Inpainting</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-sdxl.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Stable Diffusion XL Inpainting</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-kandinsky.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Kandinsky 2.2 Inpainting</figcaption>
  </div>
</div>

## Non-inpaint specific checkpoints


So far, this guide has used inpaint specific checkpoints such as [stable-diffusion-v1-5/stable-diffusion-inpainting](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-inpainting). But you can also use regular checkpoints like [stable-diffusion-v1-5/stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5). Let's compare the results of the two checkpoints.

The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You'll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural.

<hfoptions id="regular-specific">
<hfoption id="stable-diffusion-v1-5/stable-diffusion-v1-5">

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

</hfoption>
<hfoption id="runwayml/stable-diffusion-inpainting">

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

</hfoption>
</hfoptions>

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-inpaint-specific.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">stable-diffusion-v1-5/stable-diffusion-v1-5</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-specific.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">runwayml/stable-diffusion-inpainting</figcaption>
  </div>
</div>

However, for more basic tasks like erasing an object from an image (like the rocks in the road, for example), a regular checkpoint yields pretty good results. There isn't as noticeable a difference between the regular and inpaint checkpoints.

<hfoptions id="inpaint">
<hfoption id="stable-diffusion-v1-5/stable-diffusion-v1-5">

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png")

image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

</hfoption>
<hfoption id="runwayml/stable-diffusion-inpainting">

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png")

image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

</hfoption>
</hfoptions>

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/regular-inpaint-basic.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">stable-diffusion-v1-5/stable-diffusion-v1-5</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/specific-inpaint-basic.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">runwayml/stable-diffusion-inpainting</figcaption>
  </div>
</div>

The trade-off of using a non-inpaint specific checkpoint is that the overall image quality may be lower, but it generally tends to preserve the mask area (which is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, which includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area.

If preserving the unmasked area is important for your task, you can use the `VaeImageProcessor.apply_overlay` method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas.

```py
import PIL
import numpy as np
import torch

from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

device = "cuda"
pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
    variant="fp16"
)
pipeline = pipeline.to(device)

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
repainted_image.save("repainted_image.png")

unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image)
unmasked_unchanged_image.save("force_unmasked_unchanged.png")
make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2)
```

## Configure pipeline parameters

Image features - like quality and "creativity" - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let's take a look at the most important parameters and see how changing them affects the output.

### Strength

`strength` is a measure of how much noise is added to the base image, which influences how similar the output is to the base image.

* 📈 a high `strength` value means more noise is added to an image and the denoising process takes longer, but you'll get higher quality images that are more different from the base image
* 📉 a low `strength` value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-strength-0.6.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 0.6</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-strength-0.8.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 0.8</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-strength-1.0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 1.0</figcaption>
  </div>
</div>

### Guidance scale

`guidance_scale` affects how aligned the text prompt and generated image are.

* 📈 a high `guidance_scale` value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt
* 📉 a low `guidance_scale` value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt

You can use `strength` and `guidance_scale` together for more control over how expressive the model is. For example, combining high `strength` and `guidance_scale` values gives the model the most creative freedom.

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-guidance-2.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 2.5</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-guidance-7.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 7.5</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-guidance-12.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 12.5</figcaption>
  </div>
</div>

### Negative prompt

A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don't want.

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
negative_prompt = "bad architecture, unstable, poor details, blurry"
image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

<div class="flex justify-center">
  <figure>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-negative.png" />
    <figcaption class="text-center">negative_prompt = "bad architecture, unstable, poor details, blurry"</figcaption>
  </figure>
</div>

### Padding mask crop

A method for increasing the inpainting image quality is to use the [`padding_mask_crop`](https://huggingface.co/docs/diffusers/v0.25.0/en/api/pipelines/stable_diffusion/inpaint#diffusers.StableDiffusionInpaintPipeline.__call__.padding_mask_crop) parameter. When enabled, this option crops the masked area with some user-specified padding and it'll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like [StableDiffusionUpscalePipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/upscale#diffusers.StableDiffusionUpscalePipeline).

Add the `padding_mask_crop` parameter to the pipeline call and set it to the desired padding value.

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
from PIL import Image

generator = torch.Generator(device='cuda').manual_seed(0)
pipeline = AutoPipelineForInpainting.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda')

base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png")
mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png")

image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/baseline_inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">default inpaint image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/padding_mask_crop_inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">inpaint image with `padding_mask_crop` enabled</figcaption>
  </div>
</div>

## Chained inpainting pipelines

[AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting) can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you're using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components.

### Text-to-image-to-inpaint

Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don't have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image.

Start with the text-to-image pipeline to create a castle:

```py
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0]
```

Load the mask image of the output from above:

```py
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png")
```

And let's inpaint the masked area with a waterfall:

```py
pipeline = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

prompt = "digital painting of a fantasy waterfall, cloudy"
image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0]
make_image_grid([text2image, mask_image, image], rows=1, cols=3)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-text-chain.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">text-to-image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-text-chain-out.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">inpaint</figcaption>
  </div>
</div>

### Inpaint-to-image-to-image

You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality.

Begin by inpainting an image:

```py
import torch
from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]

# resize image to 1024x1024 for SDXL
image_inpainting = image_inpainting.resize((1024, 1024))
```

Now let's pass the image to another inpainting pipeline with SDXL's refiner model to enhance the image details and quality:

```py
pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0]
```

> [!TIP]
> It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the [Text-to-image-to-inpaint](#text-to-image-to-inpaint) section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won't work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use [AutoencoderKL](/docs/diffusers/main/en/api/models/autoencoderkl#diffusers.AutoencoderKL).

Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the [from_pipe()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image.from_pipe) method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again.

```py
pipeline = AutoPipelineForImage2Image.from_pipe(pipeline)
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

image = pipeline(prompt=prompt, image=image).images[0]
make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-to-image-chain.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">inpaint</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-to-image-final.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">image-to-image</figcaption>
  </div>
</div>

Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask, leaving the rest of the image unchanged. You can think of inpainting as a more precise tool for making specific changes, while image-to-image has a broader scope for making more sweeping changes.

## Control image generation

Getting an image to look exactly the way you want is challenging because the denoising process is random. While you can control certain aspects of generation by configuring parameters like `negative_prompt`, there are better and more efficient methods for controlling image generation.

### Prompt weighting

Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The [Compel](https://github.com/damian0815/compel) library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the [Prompt weighting](../using-diffusers/weighted_prompts) guide.

Once you've generated the embeddings, pass them to the `prompt_embeds` (and `negative_prompt_embeds` if you're using a negative prompt) parameter in the [AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting). The embeddings replace the `prompt` parameter:

```py
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16,
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds, # generated from Compel
    image=init_image,
    mask_image=mask_image
).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```
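
For reference, here's a minimal sketch of producing those embeddings with Compel, reusing the `pipeline` loaded above. The `++` weighting syntax and the padding helper follow Compel's documentation; double-check them against the version you have installed.

```py
from compel import Compel

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

prompt_embeds = compel("concept art digital painting of an elven (castle)++, inspired by lord of the rings, highly detailed, 8k")
negative_prompt_embeds = compel("bad architecture, unstable, poor details, blurry")

# pad both conditioning tensors to the same sequence length before passing them to the pipeline
[prompt_embeds, negative_prompt_embeds] = compel.pad_conditioning_tensors_to_same_length(
    [prompt_embeds, negative_prompt_embeds]
)
```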

### ControlNet

ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it.

For example, let's condition an image with a ControlNet pretrained on inpaint images:

```py
import torch
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image, make_image_grid

# load ControlNet
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16")

# pass ControlNet to the pipeline
pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

# prepare control image
def make_inpaint_condition(init_image, mask_image):
    init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0
    mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0

    assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size"
    init_image[mask_image > 0.5] = -1.0  # set as masked pixel
    init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2)
    init_image = torch.from_numpy(init_image)
    return init_image

control_image = make_inpaint_condition(init_image, mask_image)
```

Now generate an image from the base, mask and control images. You'll notice features of the base image are strongly preserved in the generated image.

```py
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0]
make_image_grid([init_image, mask_image, Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2)
```

You can take this a step further and chain it with an image-to-image pipeline to apply a new [style](https://huggingface.co/nitrosocke/elden-ring-diffusion):

```py
from diffusers import AutoPipelineForImage2Image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16,
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

prompt = "elden ring style castle" # include the token "elden ring style" in the prompt
negative_prompt = "bad architecture, deformed, disfigured, poor details"

image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0]
make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">ControlNet inpaint</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint-img2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">image-to-image</figcaption>
  </div>
</div>

## Optimize

It can be difficult and slow to run diffusion models if you're resource constrained, but it doesn't have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you're using PyTorch 2.0, [scaled-dot product attention](../optimization/fp16#scaled-dot-product-attention) is automatically enabled and you don't need to do anything else. For non-PyTorch 2.0 users, you can install and use [xFormers](../optimization/xformers)'s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference.

You can also offload the model to the CPU to save even more memory:

```diff
+ pipeline.enable_xformers_memory_efficient_attention()
+ pipeline.enable_model_cpu_offload()
```

To speed up your inference code even more, use [torch.compile](../optimization/fp16#torchcompile). Wrap `torch.compile` around the most intensive component in the pipeline, which is typically the UNet:

```py
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

Learn more in the [Reduce memory usage](../optimization/memory) and [Accelerate inference](../optimization/fp16) guides.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/inpaint.md" />

### Kandinsky
https://huggingface.co/docs/diffusers/main/using-diffusers/kandinsky.md

# Kandinsky


The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet.

[Kandinsky 2.1](../api/pipelines/kandinsky) changes the architecture to include an image prior model ([`CLIP`](https://huggingface.co/docs/transformers/model_doc/clip)) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a [Modulating Quantized Vectors (MoVQ)](https://huggingface.co/papers/2209.09002) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images.

[Kandinsky 2.2](../api/pipelines/kandinsky_v22) improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes.

[Kandinsky 3](../api/pipelines/kandinsky3) simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses [Flan-UL2](https://huggingface.co/google/flan-ul2) to encode text, a UNet with [BigGan-deep](https://hf.co/papers/1809.11096) blocks, and [Sber-MoVQGAN](https://github.com/ai-forever/MoVQGAN) to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet.

This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more.

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```

> [!WARNING]
> Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn't accept `prompt` as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts `image_embeds` during decoding.
>
> <br>
>
> Kandinsky 3 has a more concise architecture and it doesn't require a prior model. This means its usage is identical to other diffusion models like [Stable Diffusion XL](sdxl).

## Text-to-image

To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates `negative_image_embeds` that correspond to the negative prompt `""`. For better results, you can pass an actual `negative_prompt` to the prior pipeline, but this'll increase the effective batch size of the prior pipeline by 2x.

<hfoptions id="text-to-image">
<hfoption id="Kandinsky 2.1">

```py
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
import torch

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda")
pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda")

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better
image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple()
```

Now pass all the prompts and embeddings to the [KandinskyPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky#diffusers.KandinskyPipeline) to generate an image:

```py
image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/cheeseburger.png"/>
</div>

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
import torch

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16).to("cuda")
pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16).to("cuda")

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better
image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple()
```

Pass the `image_embeds` and `negative_image_embeds` to the [KandinskyV22Pipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22Pipeline) to generate an image:

```py
image = pipeline(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-text-to-image.png"/>
</div>

</hfoption>
<hfoption id="Kandinsky 3">

Kandinsky 3 doesn't require a prior model so you can directly load the [Kandinsky3Pipeline](/docs/diffusers/main/en/api/pipelines/kandinsky3#diffusers.Kandinsky3Pipeline) and pass a prompt to generate an image:

```py
from diffusers import Kandinsky3Pipeline
import torch

pipeline = Kandinsky3Pipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
image = pipeline(prompt).images[0]
image
```

</hfoption>
</hfoptions>

🤗 Diffusers also provides an end-to-end API with the [KandinskyCombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky#diffusers.KandinskyCombinedPipeline) and [KandinskyV22CombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22CombinedPipeline), meaning you don't have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the `prior_guidance_scale` and `prior_num_inference_steps` parameters if you want.

Use the [AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image) to automatically call the combined pipelines under the hood:

<hfoptions id="text-to-image">
<hfoption id="Kandinsky 2.1">

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0]
image
```

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0]
image
```

</hfoption>
</hfoptions>

## Image-to-image

For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline:

<hfoptions id="image-to-image">
<hfoption id="Kandinsky 2.1">

```py
import torch
from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
import torch
from diffusers import KandinskyV22Img2ImgPipeline, KandinskyPriorPipeline

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyV22Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```

</hfoption>
<hfoption id="Kandinsky 3">

Kandinsky 3 doesn't require a prior model so you can directly load the image-to-image pipeline:

```py
from diffusers import Kandinsky3Img2ImgPipeline
from diffusers.utils import load_image
import torch

pipeline = Kandinsky3Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()
```

</hfoption>
</hfoptions>

Download an image to condition on:

```py
from diffusers.utils import load_image

# download image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image = original_image.resize((768, 512))
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"/>
</div>

Generate the `image_embeds` and `negative_image_embeds` with the prior pipeline:

```py
prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple()
```

Now pass the original image and all the prompts and embeddings to the pipeline to generate an image:

<hfoptions id="image-to-image">
<hfoption id="Kandinsky 2.1">

```py
from diffusers.utils import make_image_grid

image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/img2img_fantasyland.png"/>
</div>

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
from diffusers.utils import make_image_grid

image = pipeline(image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-image-to-image.png"/>
</div>

</hfoption>
<hfoption id="Kandinsky 3">

```py
image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, strength=0.75, num_inference_steps=25).images[0]
image
```

</hfoption>
</hfoptions>

🤗 Diffusers also provides an end-to-end API with the [KandinskyImg2ImgCombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky#diffusers.KandinskyImg2ImgCombinedPipeline) and [KandinskyV22Img2ImgCombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22Img2ImgCombinedPipeline), meaning you don't have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the `prior_guidance_scale` and `prior_num_inference_steps` parameters if you want.

Use the [AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image) to automatically call the combined pipelines under the hood:

<hfoptions id="image-to-image">
<hfoption id="Kandinsky 2.1">

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True)
pipeline.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)

original_image.thumbnail((768, 768))

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipeline.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)

original_image.thumbnail((768, 768))

image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0]
make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```

</hfoption>
</hfoptions>

## Inpainting

> [!WARNING]
> ⚠️ The Kandinsky models now use ⬜️ **white pixels** to represent the masked area instead of black pixels. If you are using [KandinskyInpaintPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky#diffusers.KandinskyInpaintPipeline) in production, you need to invert the mask so it uses white pixels:
>
> ```py
> # For PIL input
> import PIL.ImageOps
> mask = PIL.ImageOps.invert(mask)
>
> # For PyTorch and NumPy input
> mask = 1 - mask
> ```

For inpainting, you'll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. Load the prior pipeline:

<hfoptions id="inpaint">
<hfoption id="Kandinsky 2.1">

```py
from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline
from diffusers.utils import load_image, make_image_grid
import torch
import numpy as np
from PIL import Image

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
from diffusers import KandinskyV22InpaintPipeline, KandinskyV22PriorPipeline
from diffusers.utils import load_image, make_image_grid
import torch
import numpy as np
from PIL import Image

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = KandinskyV22InpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```

</hfoption>
</hfoptions>

Load an initial image and create a mask:

```py
init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
# mask area above cat's head
mask[:250, 250:-250] = 1
```

Generate the embeddings with the prior pipeline:

```py
prompt = "a hat"
prior_output = prior_pipeline(prompt)
```

Now pass the initial image, the mask, the prompt, and the embeddings to the pipeline to generate an image:

<hfoptions id="inpaint">
<hfoption id="Kandinsky 2.1">

```py
output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/inpaint_cat_hat.png"/>
</div>

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
output_image = pipeline(image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinskyv22-inpaint.png"/>
</div>

</hfoption>
</hfoptions>

You can also use the end-to-end [KandinskyInpaintCombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky#diffusers.KandinskyInpaintCombinedPipeline) and [KandinskyV22InpaintCombinedPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22InpaintCombinedPipeline) to call the prior and decoder pipelines together under the hood. Use the [AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting) for this:

<hfoptions id="inpaint">
<hfoption id="Kandinsky 2.1">

```py
import torch
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
# mask area above cat's head
mask[:250, 250:-250] = 1
prompt = "a hat"

output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
import torch
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
# mask area above cat's head
mask[:250, 250:-250] = 1
prompt = "a hat"

output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0]
mask = Image.fromarray((mask*255).astype('uint8'), 'L')
make_image_grid([init_image, mask, output_image], rows=1, cols=3)
```

</hfoption>
</hfoptions>

## Interpolation

Interpolation allows you to explore the latent space between the image and text embeddings, which is a cool way to see some of the prior model's intermediate outputs. Load the prior pipeline and two images you'd like to interpolate:

<hfoptions id="interpolate">
<hfoption id="Kandinsky 2.1">

```py
from diffusers import KandinskyPriorPipeline, KandinskyPipeline
from diffusers.utils import load_image, make_image_grid
import torch

prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg")
make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2)
```

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline
from diffusers.utils import load_image, make_image_grid
import torch

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg")
make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2)
```

</hfoption>
</hfoptions>

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">a cat</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Van Gogh's Starry Night painting</figcaption>
  </div>
</div>

Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation!

```py
images_texts = ["a cat", img_1, img_2]
weights = [0.3, 0.3, 0.4]
```

Call the `interpolate` function to generate the embeddings, and then pass them to the pipeline to generate the image:

<hfoptions id="interpolate">
<hfoption id="Kandinsky 2.1">

```py
# prompt can be left empty
prompt = ""
prior_out = prior_pipeline.interpolate(images_texts, weights)

pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")

image = pipeline(prompt, **prior_out, height=768, width=768).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinsky-docs/starry_cat.png"/>
</div>

</hfoption>
<hfoption id="Kandinsky 2.2">

```py
# prompt can be left empty
prompt = ""
prior_out = prior_pipeline.interpolate(images_texts, weights)

pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda")

image = pipeline(prompt, **prior_out, height=768, width=768).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/kandinskyv22-interpolate.png"/>
</div>

</hfoption>
</hfoptions>

## ControlNet

> [!WARNING]
> ⚠️ ControlNet is only supported for Kandinsky 2.2!

ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image.

Let's load an image and extract its depth map:

```py
from diffusers.utils import load_image

img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))
img
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"/>
</div>

Then you can use the `depth-estimation` [Pipeline](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.Pipeline) from 🤗 Transformers to process the image and retrieve the depth map:

```py
import torch
import numpy as np

from transformers import pipeline

def make_hint(image, depth_estimator):
    image = depth_estimator(image)["depth"]
    image = np.array(image)
    image = image[:, :, None]
    image = np.concatenate([image, image, image], axis=2)
    detected_map = torch.from_numpy(image).float() / 255.0
    hint = detected_map.permute(2, 0, 1)
    return hint

depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
```

### Text-to-image [[controlnet-text-to-image]]

Load the prior pipeline and the [KandinskyV22ControlnetPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22ControlnetPipeline):

```py
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

pipeline = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")
```

Generate the image embeddings from a prompt and negative prompt:

```py
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"

generator = torch.Generator(device="cuda").manual_seed(43)

image_emb, zero_image_emb = prior_pipeline(
    prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator
).to_tuple()
```

Finally, pass the image embeddings and the depth image to the [KandinskyV22ControlnetPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22ControlnetPipeline) to generate an image:

```py
image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0]
image
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat_text2img.png"/>
</div>

### Image-to-image [[controlnet-image-to-image]]

For image-to-image with ControlNet, you'll need to use:

- [KandinskyV22PriorEmb2EmbPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22PriorEmb2EmbPipeline) to generate the image embeddings from a text prompt and an image
- [KandinskyV22ControlnetImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22ControlnetImg2ImgPipeline) to generate an image from the initial image and the image embeddings

Process and extract a depth map of an initial image of a cat with the `depth-estimation` [Pipeline](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.Pipeline) from 🤗 Transformers:

```py
import torch
import numpy as np

from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image, make_image_grid
from transformers import pipeline

img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png"
).resize((768, 768))

def make_hint(image, depth_estimator):
    image = depth_estimator(image)["depth"]
    image = np.array(image)
    image = image[:, :, None]
    image = np.concatenate([image, image, image], axis=2)
    detected_map = torch.from_numpy(image).float() / 255.0
    hint = detected_map.permute(2, 0, 1)
    return hint

depth_estimator = pipeline("depth-estimation")
hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
```

Load the prior pipeline and the [KandinskyV22ControlnetImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22ControlnetImg2ImgPipeline):

```py
prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")
```

Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings:

```py
prompt = "A robot, 4k photo"
negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"

generator = torch.Generator(device="cuda").manual_seed(43)

img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator)
negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator)
```

Now you can run the [KandinskyV22ControlnetImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/kandinsky_v22#diffusers.KandinskyV22ControlnetImg2ImgPipeline) to generate an image from the initial image and the image embeddings:

```py
image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0]
make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2)
```

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/robot_cat.png"/>
</div>

## Optimizations

Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky's performance during inference.

1. Enable [xFormers](../optimization/xformers) if you're using PyTorch < 2.0:

```diff
  from diffusers import DiffusionPipeline
  import torch

  pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
+ pipe.enable_xformers_memory_efficient_attention()
```

2. Enable `torch.compile` if you're using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA):

```diff
  pipe.unet.to(memory_format=torch.channels_last)
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

This is the same as explicitly setting the attention processor to use [AttnAddedKVProcessor2_0](/docs/diffusers/main/en/api/attnprocessor#diffusers.models.attention_processor.AttnAddedKVProcessor2_0):

```py
from diffusers.models.attention_processor import AttnAddedKVProcessor2_0

pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0())
```

3. Offload the model to the CPU with [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) to avoid out-of-memory errors:

```diff
  from diffusers import DiffusionPipeline
  import torch

  pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
+ pipe.enable_model_cpu_offload()
```

4. By default, the text-to-image pipeline uses the [DDIMScheduler](/docs/diffusers/main/en/api/schedulers/ddim#diffusers.DDIMScheduler) but you can replace it with another scheduler like [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler) to see how that affects the tradeoff between inference speed and image quality:

```py
import torch
from diffusers import DDPMScheduler, DiffusionPipeline

scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler")
pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda")
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/kandinsky.md" />

### ConsisID
https://huggingface.co/docs/diffusers/main/using-diffusers/consisid.md

# ConsisID

[ConsisID](https://github.com/PKU-YuanGroup/ConsisID) is an identity-preserving text-to-video generation model that keeps the face consistent in the generated video by frequency decomposition. The main features of ConsisID are:

- Frequency decomposition: The characteristics of the DiT architecture are analyzed from the frequency domain perspective, and based on these characteristics, a reasonable control information injection method is designed.
- Consistency training strategy: A coarse-to-fine training strategy, dynamic masking loss, and dynamic cross-face loss further enhance the model's generalization ability and identity preservation performance.
- Inference without finetuning: Previous methods required case-by-case finetuning of the input ID before inference, leading to significant time and computational costs. In contrast, ConsisID is tuning-free.

This guide will walk you through using ConsisID for various use cases.

## Load Model Checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) method.

```python
# !pip install consisid_eva_clip insightface facexlib
import torch
from diffusers import ConsisIDPipeline
from diffusers.pipelines.consisid.consisid_utils import prepare_face_models, process_face_embeddings_infer
from huggingface_hub import snapshot_download

# Download ckpts
snapshot_download(repo_id="BestWishYsh/ConsisID-preview", local_dir="BestWishYsh/ConsisID-preview")

# Load face helper model to preprocess input face image
face_helper_1, face_helper_2, face_clip_model, face_main_model, eva_transform_mean, eva_transform_std = prepare_face_models("BestWishYsh/ConsisID-preview", device="cuda", dtype=torch.bfloat16)

# Load consisid base model
pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)
pipe.to("cuda")
```

## Identity-Preserving Text-to-Video

For identity-preserving text-to-video, pass a text prompt and an image containing a clear face (preferably half-body or full-body). By default, ConsisID generates a 720x480 video, which gives the best results.

```python
from diffusers.utils import export_to_video

prompt = "The video captures a boy walking along a city street, filmed in black and white on a classic 35mm camera. His expression is thoughtful, his brow slightly furrowed as if he's lost in contemplation. The film grain adds a textured, timeless quality to the image, evoking a sense of nostalgia. Around him, the cityscape is filled with vintage buildings, cobblestone sidewalks, and softly blurred figures passing by, their outlines faint and indistinct. Streetlights cast a gentle glow, while shadows play across the boy's path, adding depth to the scene. The lighting highlights the boy's subtle smile, hinting at a fleeting moment of curiosity. The overall cinematic atmosphere, complete with classic film still aesthetics and dramatic contrasts, gives the scene an evocative and introspective feel."
image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_input.png?download=true"

id_cond, id_vit_hidden, image, face_kps = process_face_embeddings_infer(face_helper_1, face_clip_model, face_helper_2, eva_transform_mean, eva_transform_std, face_main_model, "cuda", torch.bfloat16, image, is_align_face=True)

video = pipe(image=image, prompt=prompt, num_inference_steps=50, guidance_scale=6.0, use_dynamic_cfg=False, id_vit_hidden=id_vit_hidden, id_cond=id_cond, kps_cond=face_kps, generator=torch.Generator("cuda").manual_seed(42))
export_to_video(video.frames[0], "output.mp4", fps=8)
```
<table>
  <tr>
    <th style="text-align: center;">Face Image</th>
    <th style="text-align: center;">Video</th>
    <th style="text-align: center;">Description</th>
  </tr>
  <tr>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_image_0.png?download=true" style="height: auto; width: 600px;"></td>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_output_0.gif?download=true" style="height: auto; width: 2000px;"></td>
    <td>The video, in a beautifully crafted animated style, features a confident woman riding a horse through a lush forest clearing. Her expression is focused yet serene as she adjusts her wide-brimmed hat with a practiced hand. She wears a flowy bohemian dress, which moves gracefully with the rhythm of the horse, the fabric flowing fluidly in the animated motion. The dappled sunlight filters through the trees, casting soft, painterly patterns on the forest floor. Her posture is poised, showing both control and elegance as she guides the horse with ease. The animation's gentle, fluid style adds a dreamlike quality to the scene, with the woman’s calm demeanor and the peaceful surroundings evoking a sense of freedom and harmony.</td>
  </tr>
  <tr>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_image_1.png?download=true" style="height: auto; width: 600px;"></td>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_output_1.gif?download=true" style="height: auto; width: 2000px;"></td>
    <td>The video, in a captivating animated style, shows a woman standing in the center of a snowy forest, her eyes narrowed in concentration as she extends her hand forward. She is dressed in a deep blue cloak, her breath visible in the cold air, which is rendered with soft, ethereal strokes. A faint smile plays on her lips as she summons a wisp of ice magic, watching with focus as the surrounding trees and ground begin to shimmer and freeze, covered in delicate ice crystals. The animation’s fluid motion brings the magic to life, with the frost spreading outward in intricate, sparkling patterns. The environment is painted with soft, watercolor-like hues, enhancing the magical, dreamlike atmosphere. The overall mood is serene yet powerful, with the quiet winter air amplifying the delicate beauty of the frozen scene.</td>
  </tr>
  <tr>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_image_2.png?download=true" style="height: auto; width: 600px;"></td>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_output_2.gif?download=true" style="height: auto; width: 2000px;"></td>
    <td>The animation features a whimsical portrait of a balloon seller standing in a gentle breeze, captured with soft, hazy brushstrokes that evoke the feel of a serene spring day. His face is framed by a gentle smile, his eyes squinting slightly against the sun, while a few wisps of hair flutter in the wind. He is dressed in a light, pastel-colored shirt, and the balloons around him sway with the wind, adding a sense of playfulness to the scene. The background blurs softly, with hints of a vibrant market or park, enhancing the light-hearted, yet tender mood of the moment.</td>
  </tr>
  <tr>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_image_3.png?download=true" style="height: auto; width: 600px;"></td>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_output_3.gif?download=true" style="height: auto; width: 2000px;"></td>
    <td>The video captures a boy walking along a city street, filmed in black and white on a classic 35mm camera. His expression is thoughtful, his brow slightly furrowed as if he's lost in contemplation. The film grain adds a textured, timeless quality to the image, evoking a sense of nostalgia. Around him, the cityscape is filled with vintage buildings, cobblestone sidewalks, and softly blurred figures passing by, their outlines faint and indistinct. Streetlights cast a gentle glow, while shadows play across the boy's path, adding depth to the scene. The lighting highlights the boy's subtle smile, hinting at a fleeting moment of curiosity. The overall cinematic atmosphere, complete with classic film still aesthetics and dramatic contrasts, gives the scene an evocative and introspective feel.</td>
  </tr>
  <tr>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_image_4.png?download=true" style="height: auto; width: 600px;"></td>
    <td><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/consisid/consisid_output_4.gif?download=true" style="height: auto; width: 2000px;"></td>
    <td>The video features a baby wearing a bright superhero cape, standing confidently with arms raised in a powerful pose. The baby has a determined look on their face, with eyes wide and lips pursed in concentration, as if ready to take on a challenge. The setting appears playful, with colorful toys scattered around and a soft rug underfoot, while sunlight streams through a nearby window, highlighting the fluttering cape and adding to the impression of heroism. The overall atmosphere is lighthearted and fun, with the baby's expressions capturing a mix of innocence and an adorable attempt at bravery, as if truly ready to save the day.</td>
  </tr>
</table>

## Resources

Learn more about ConsisID with the following resources.
- A [video](https://www.youtube.com/watch?v=PhlgC-bI5SQ) demonstrating ConsisID's main features.
- The research paper, [Identity-Preserving Text-to-Video Generation by Frequency Decomposition](https://hf.co/papers/2411.17440) for more details.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/consisid.md" />

### Create a server
https://huggingface.co/docs/diffusers/main/using-diffusers/create_a_server.md

# Create a server

Diffusers' pipelines can be used as an inference engine for a server. The server supports concurrent and multithreaded requests, so images can be generated for multiple users at the same time.

This guide will show you how to use the [StableDiffusion3Pipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3#diffusers.StableDiffusion3Pipeline) in a server, but feel free to use any pipeline you want.


Start by navigating to the `examples/server` folder and installing all of the dependencies.

```
pip install .
pip install -r requirements.txt
```

Launch the server with the following command.

```
python server.py
```

The server is accessible at http://localhost:8000. You can send a request to the model with the following curl command.
```
curl -X POST -H "Content-Type: application/json" --data '{"model": "something", "prompt": "a kitten in front of a fireplace"}' http://localhost:8000/v1/images/generations
```

If you need to upgrade some dependencies, you can use either [pip-tools](https://github.com/jazzband/pip-tools) or [uv](https://github.com/astral-sh/uv). For example, upgrade the dependencies with `uv` using the following command.

```
uv pip compile requirements.in -o requirements.txt
```


The server is built with [FastAPI](https://fastapi.tiangolo.com/async/). The endpoint for `v1/images/generations` is shown below.
```py
@app.post("/v1/images/generations")
async def generate_image(image_input: TextToImageInput):
    try:
        loop = asyncio.get_event_loop()
        scheduler = shared_pipeline.pipeline.scheduler.from_config(shared_pipeline.pipeline.scheduler.config)
        pipeline = StableDiffusion3Pipeline.from_pipe(shared_pipeline.pipeline, scheduler=scheduler)
        generator = torch.Generator(device="cuda")
        generator.manual_seed(random.randint(0, 10000000))
        output = await loop.run_in_executor(None, lambda: pipeline(image_input.prompt, generator = generator))
        logger.info(f"output: {output}")
        image_url = save_image(output.images[0])
        return {"data": [{"url": image_url}]}
    except Exception as e:
        if isinstance(e, HTTPException):
            raise e
        elif hasattr(e, 'message'):
            raise HTTPException(status_code=500, detail=e.message + traceback.format_exc())
        raise HTTPException(status_code=500, detail=str(e) + traceback.format_exc())
```
The `generate_image` function is defined as asynchronous with the [async](https://fastapi.tiangolo.com/async/) keyword so that FastAPI knows that whatever is happening in this function won't necessarily return a result right away. When the function reaches a point where it needs to await some other [Task](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task), the main thread goes back to answering other HTTP requests. This is shown in the code below with the [await](https://fastapi.tiangolo.com/async/#async-and-await) keyword.
```py
output = await loop.run_in_executor(None, lambda: pipeline(image_input.prompt, generator = generator))
```
At this point, the execution of the pipeline function is placed onto a [new thread](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor), and the main thread performs other things until a result is returned from the `pipeline`.

Another important aspect of this implementation is creating a `pipeline` from `shared_pipeline`. The goal behind this is to avoid loading the underlying model more than once onto the GPU while still allowing for each new request that is running on a separate thread to have its own generator and scheduler. The scheduler, in particular, is not thread-safe, and it will cause errors like: `IndexError: index 21 is out of bounds for dimension 0 with size 21` if you try to use the same scheduler across multiple threads.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/create_a_server.md" />

### Controlled generation
https://huggingface.co/docs/diffusers/main/using-diffusers/controlling_generation.md

# Controlled generation

Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed.

Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject's pose.

Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic.

We will document some of the techniques `diffusers` supports to control generation of diffusion models. Much of it is cutting-edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don't hesitate to open a discussion on the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or a [GitHub issue](https://github.com/huggingface/diffusers/issues).

We provide a high-level explanation of how generation can be controlled as well as a snippet of the technical details. For more in-depth explanations, the original papers linked from the pipelines are always the best resource.

Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion.

Unless otherwise mentioned, these are techniques that work with existing models and don't require their own weights.

1. [InstructPix2Pix](#instruct-pix2pix)
2. [Pix2Pix Zero](#pix2pix-zero)
3. [Attend and Excite](#attend-and-excite)
4. [Semantic Guidance](#semantic-guidance-sega)
5. [Self-attention Guidance](#self-attention-guidance-sag)
6. [Depth2Image](#depth2image)
7. [MultiDiffusion Panorama](#multidiffusion-panorama)
8. [DreamBooth](#dreambooth)
9. [Textual Inversion](#textual-inversion)
10. [ControlNet](#controlnet)
11. [Prompt Weighting](#prompt-weighting)
12. [Custom Diffusion](#custom-diffusion)
13. [Model Editing](#model-editing)
14. [DiffEdit](#diffedit)
15. [T2I-Adapter](#t2i-adapter)
16. [FABRIC](#fabric)

For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training.

|                     **Method**                      | **Inference only** | **Requires training /<br> fine-tuning** |                                          **Comments**                                           |
| :-------------------------------------------------: | :----------------: | :-------------------------------------: | :---------------------------------------------------------------------------------------------: |
|        [InstructPix2Pix](#instruct-pix2pix)        |         ✅         |                   ❌                    | Can additionally be<br>fine-tuned for better <br>performance on specific <br>edit instructions. |
|            [Pix2Pix Zero](#pix2pix-zero)            |         ✅         |                   ❌                    |                                                                                                 |
|       [Attend and Excite](#attend-and-excite)       |         ✅         |                   ❌                    |                                                                                                 |
|       [Semantic Guidance](#semantic-guidance-sega)       |         ✅         |                   ❌                    |                                                                                                 |
| [Self-attention Guidance](#self-attention-guidance-sag) |         ✅         |                   ❌                    |                                                                                                 |
|             [Depth2Image](#depth2image)             |         ✅         |                   ❌                    |                                                                                                 |
| [MultiDiffusion Panorama](#multidiffusion-panorama) |         ✅         |                   ❌                    |                                                                                                 |
|              [DreamBooth](#dreambooth)              |         ❌         |                   ✅                    |                                                                                                 |
|       [Textual Inversion](#textual-inversion)       |         ❌         |                   ✅                    |                                                                                                 |
|              [ControlNet](#controlnet)              |         ✅         |                   ❌                    |             A ControlNet can be <br>trained/fine-tuned on<br>a custom conditioning.             |
|        [Prompt Weighting](#prompt-weighting)        |         ✅         |                   ❌                    |                                                                                                 |
|        [Custom Diffusion](#custom-diffusion)        |         ❌         |                   ✅                    |                                                                                                 |
|           [Model Editing](#model-editing)           |         ✅         |                   ❌                    |                                                                                                 |
|                [DiffEdit](#diffedit)                |         ✅         |                   ❌                    |                                                                                                 |
|             [T2I-Adapter](#t2i-adapter)             |         ✅         |                   ❌                    |                                                                                                 |
|                [Fabric](#fabric)                    |         ✅         |                   ❌                    |                                                                                                 |

## InstructPix2Pix

[Paper](https://huggingface.co/papers/2211.09800)

[InstructPix2Pix](../api/pipelines/pix2pix) is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image.
InstructPix2Pix has been explicitly trained to work well with [InstructGPT](https://openai.com/blog/instruction-following/)-like prompts.
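
A minimal sketch of calling the pipeline (the checkpoint, input image, and edit instruction below are assumptions for illustration):

```py
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# checkpoint released by the InstructPix2Pix authors
pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
# the prompt is an edit instruction rather than a full description of the output
edited_image = pipeline(
    "make the cat wear a wizard hat",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # how closely the output should follow the input image
).images[0]
```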

## Attend and Excite

[Paper](https://huggingface.co/papers/2301.13826)

[Attend and Excite](../api/pipelines/attend_and_excite) allows subjects in the prompt to be faithfully represented in the final image.

A set of token indices is given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to reach a minimum attention threshold in at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens.

Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual [StableDiffusionPipeline](../api/pipelines/stable_diffusion/text2img).
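
A minimal sketch of passing the token indices to the pipeline (the checkpoint, prompt, and indices below are assumptions for illustration):

```py
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipeline = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
# indices of the subject tokens ("cat" and "frog") in the tokenized prompt
token_indices = [2, 5]
image = pipeline(prompt, token_indices=token_indices, guidance_scale=7.5, num_inference_steps=50).images[0]
```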

## Semantic Guidance (SEGA)

[Paper](https://huggingface.co/papers/2301.12247)

[SEGA](../api/pipelines/semantic_stable_diffusion) allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait.

Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively.

Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization.

## Self-attention Guidance (SAG)

[Paper](https://huggingface.co/papers/2210.00939)

[Self-attention Guidance](../api/pipelines/self_attention_guidance) improves the general quality of images.

SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps.
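
A minimal sketch (the checkpoint and `sag_scale` value below are assumptions for illustration):

```py
import torch
from diffusers import StableDiffusionSAGPipeline

pipeline = StableDiffusionSAGPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# sag_scale controls how strongly the self-attention guidance is applied
image = pipeline("a photo of an astronaut riding a horse", sag_scale=0.75).images[0]
```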

## Depth2Image

[Project](https://huggingface.co/stabilityai/stable-diffusion-2-depth)

[Depth2Image](../api/pipelines/stable_diffusion/depth2img) is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation.

It conditions on a monocular depth estimate of the original image.
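
A minimal sketch (the input image and prompts below are assumptions for illustration); the pipeline estimates a depth map internally if you don't pass one:

```py
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
# strength trades off faithfulness to the input image against the prompt
image = pipeline(prompt="a tiger", image=init_image, negative_prompt="bad, deformed", strength=0.7).images[0]
```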

## MultiDiffusion Panorama

[Paper](https://huggingface.co/papers/2302.08113)

[MultiDiffusion Panorama](../api/pipelines/panorama) defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.
MultiDiffusion Panorama allows generating high-quality images at arbitrary aspect ratios (e.g., panoramas).
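
A minimal sketch (the checkpoint, scheduler choice, and output width below are assumptions for illustration):

```py
import torch
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipeline = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# a wide output produces the panoramic aspect ratio
image = pipeline("a photo of the dolomites", width=2048).images[0]
```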

## Fine-tuning your own models

In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data.

## DreamBooth

[Project](https://dreambooth.github.io/)

[DreamBooth](../training/dreambooth) fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles.

## Textual Inversion

[Paper](https://huggingface.co/papers/2208.01618)

[Textual Inversion](../training/text_inversion) fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style.

## ControlNet

[Paper](https://huggingface.co/papers/2302.05543)

[ControlNet](../api/pipelines/controlnet) is an auxiliary network which adds an extra condition.
There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles,
depth maps, and semantic segmentations.

## Prompt Weighting

[Prompt weighting](../using-diffusers/weighted_prompts) is a simple technique that puts more attention weight on certain parts of the text
input.
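
One way to do this is with the [Compel](https://github.com/damian0815/compel) library; a minimal sketch (the checkpoint and weighting syntax below are assumptions for illustration):

```py
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
compel_proc = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# "++" upweights "red" so the ball is more likely to actually be red
prompt_embeds = compel_proc("a cat playing with a red++ ball")
image = pipeline(prompt_embeds=prompt_embeds).images[0]
```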

## Custom Diffusion

[Paper](https://huggingface.co/papers/2212.04488)

[Custom Diffusion](../training/custom_diffusion) only fine-tunes the cross-attention maps of a pre-trained
text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports
multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to
teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the
concept(s) of interest.

## DiffEdit

[Paper](https://huggingface.co/papers/2210.11427)

[DiffEdit](../api/pipelines/diffedit) allows for semantic editing of input images along with
input prompts while preserving the original input images as much as possible.

## T2I-Adapter

[Paper](https://huggingface.co/papers/2302.08453)

[T2I-Adapter](../api/pipelines/stable_diffusion/adapter) is an auxiliary network which adds an extra condition.
There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch,
depth maps, and semantic segmentations.

## Fabric

[Paper](https://huggingface.co/papers/2307.10159)

[Fabric](https://github.com/huggingface/diffusers/tree/442017ccc877279bcf24fbe92f92d3d0def191b6/examples/community#stable-diffusion-fabric-pipeline) is a training-free
approach applicable to a wide range of popular diffusion models, which exploits
the self-attention layer present in the most widely used architectures to condition
the diffusion process on a set of feedback images.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/controlling_generation.md" />

### Understanding pipelines, models and schedulers
https://huggingface.co/docs/diffusers/main/using-diffusers/write_own_pipeline.md

# Understanding pipelines, models and schedulers


🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems.

In this tutorial, you'll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline.

## Deconstruct a basic pipeline

A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image:

```py
>>> from diffusers import DDPMPipeline

>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")
>>> image = ddpm(num_inference_steps=25).images[0]
>>> image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ddpm-cat.png" alt="Image of cat created from DDPMPipeline"/>
</div>

That was super easy, but how did the pipeline do that? Let's break down the pipeline and take a look at what's happening under the hood.

In the example above, the pipeline contains a [UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel) model and a [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler). The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the *noise residual* and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps.

To recreate the pipeline with the model and scheduler separately, let's write our own denoising process.

1. Load the model and scheduler:

```py
>>> from diffusers import DDPMScheduler, UNet2DModel

>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")
```

2. Set the number of timesteps to run the denoising process for:

```py
>>> scheduler.set_timesteps(50)
```

3. Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you'll iterate over this tensor to denoise an image:

```py
>>> scheduler.timesteps
tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720,
    700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440,
    420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160,
    140, 120, 100,  80,  60,  40,  20,   0])
```

4. Create some random noise with the same shape as the desired output:

```py
>>> import torch

>>> sample_size = model.config.sample_size
>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")
```

5. Now write a loop to iterate over the timesteps. At each timestep, the model does a [UNet2DModel.forward()](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel.forward) pass and returns the noise residual. The scheduler's [step()](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler.step) method takes the noise residual, timestep, and input and predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it'll repeat until it reaches the end of the `timesteps` array.

```py
>>> input = noise

>>> for t in scheduler.timesteps:
...     with torch.no_grad():
...         noisy_residual = model(input, t).sample
...     previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
...     input = previous_noisy_sample
```

This is the entire denoising process, and you can use this same pattern to write any diffusion system.

6. The last step is to convert the denoised output into an image:

```py
>>> from PIL import Image
>>> import numpy as np

>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze()
>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy()
>>> image = Image.fromarray(image)
>>> image
```

In the next section, you'll put your skills to the test and break down the more complex Stable Diffusion pipeline. The steps are more or less the same: you'll initialize the necessary components and set the number of timesteps to create a `timesteps` array. The denoising loop iterates over this array, and at each timestep the model outputs a noise residual which the scheduler uses to predict a less noisy image at the previous timestep. This process repeats until you reach the end of the `timesteps` array.

Let's try it out!

## Deconstruct the Stable Diffusion pipeline

Stable Diffusion is a text-to-image *latent diffusion* model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder converts the compressed representation back into an image. For text-to-image models, you'll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler.

As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models.

> [!TIP]
> 💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models work.

Now that you know what you need for the Stable Diffusion pipeline, load all these components with the [from_pretrained()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.from_pretrained) method. You can find them in the pretrained [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) checkpoint, and each component is stored in a separate subfolder:

```py
>>> from PIL import Image
>>> import torch
>>> from transformers import CLIPTextModel, CLIPTokenizer
>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler

>>> vae = AutoencoderKL.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="vae", use_safetensors=True)
>>> tokenizer = CLIPTokenizer.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="tokenizer")
>>> text_encoder = CLIPTextModel.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="text_encoder", use_safetensors=True
... )
>>> unet = UNet2DConditionModel.from_pretrained(
...     "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet", use_safetensors=True
... )
```

Instead of the default [PNDMScheduler](/docs/diffusers/main/en/api/schedulers/pndm#diffusers.PNDMScheduler), exchange it for the [UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler) to see how easy it is to plug a different scheduler in:

```py
>>> from diffusers import UniPCMultistepScheduler

>>> scheduler = UniPCMultistepScheduler.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="scheduler")
```

To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights:

```py
>>> torch_device = "cuda"
>>> vae.to(torch_device)
>>> text_encoder.to(torch_device)
>>> unet.to(torch_device)
```
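
If you're curious how the latent space compares to pixel space, you can optionally pass a random 512x512 "image" tensor through the `vae` encoder; for the Stable Diffusion VAE, the latents should come out 8x smaller in each spatial dimension with 4 channels:

```py
>>> with torch.no_grad():
...     latent_dist = vae.encode(torch.randn(1, 3, 512, 512, device=torch_device)).latent_dist
>>> latent_dist.sample().shape
torch.Size([1, 4, 64, 64])
```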

### Create text embeddings

The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt.

> [!TIP]
> 💡 The `guidance_scale` parameter determines how much weight should be given to the prompt when generating an image.

Feel free to choose any prompt you like if you want to generate something else!

```py
>>> prompt = ["a photograph of an astronaut riding a horse"]
>>> height = 512  # default height of Stable Diffusion
>>> width = 512  # default width of Stable Diffusion
>>> num_inference_steps = 25  # Number of denoising steps
>>> guidance_scale = 7.5  # Scale for classifier-free guidance
>>> generator = torch.manual_seed(0)  # Seed generator to create the initial latent noise
>>> batch_size = len(prompt)
```

Tokenize the text and generate the embeddings from the prompt:

```py
>>> text_input = tokenizer(
...     prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt"
... )

>>> with torch.no_grad():
...     text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]
```

You'll also need to generate the *unconditional text embeddings* which are the embeddings for the padding token. These need to have the same shape (`batch_size` and `seq_length`) as the conditional `text_embeddings`:

```py
>>> max_length = text_input.input_ids.shape[-1]
>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt")
>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0]
```

Let's concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes:

```py
>>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
```

### Create random noise

Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it'll be gradually denoised. At this point, the `latents` are smaller than the final image size, but that's okay because the model will transform them into the final 512x512 image dimensions later.

> [!TIP]
> 💡 The height and width are divided by 8 because the `vae` model has 3 down-sampling layers. You can check by running the following:
>
> ```py
> 2 ** (len(vae.config.block_out_channels) - 1) == 8
> ```

```py
>>> latents = torch.randn(
...     (batch_size, unet.config.in_channels, height // 8, width // 8),
...     generator=generator,
...     device=torch_device,
... )
```

### Denoise the image

Start by scaling the input with the initial noise distribution, *sigma*, the noise scale value, which is required for improved schedulers like [UniPCMultistepScheduler](/docs/diffusers/main/en/api/schedulers/unipc#diffusers.UniPCMultistepScheduler):

```py
>>> latents = latents * scheduler.init_noise_sigma
```

The last step is to create the denoising loop that'll progressively transform the pure noise in `latents` to an image described by your prompt. Remember, the denoising loop needs to do three things:

1. Set the scheduler's timesteps to use during denoising.
2. Iterate over the timesteps.
3. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample.

```py
>>> from tqdm.auto import tqdm

>>> scheduler.set_timesteps(num_inference_steps)

>>> for t in tqdm(scheduler.timesteps):
...     # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes.
...     latent_model_input = torch.cat([latents] * 2)

...     latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t)

...     # predict the noise residual
...     with torch.no_grad():
...         noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample

...     # perform guidance
...     noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
...     noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

...     # compute the previous noisy sample x_t -> x_t-1
...     latents = scheduler.step(noise_pred, t, latents).prev_sample
```

### Decode the image

The final step is to use the `vae` to decode the latent representation into an image and get the decoded output with `sample`:

```py
# scale and decode the image latents with vae
latents = 1 / 0.18215 * latents
with torch.no_grad():
    image = vae.decode(latents).sample
```

Lastly, convert the image to a `PIL.Image` to see your generated image!

```py
>>> image = (image / 2 + 0.5).clamp(0, 1).squeeze()
>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy()
>>> image = Image.fromarray(image)
>>> image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/blog/assets/98_stable_diffusion/stable_diffusion_k_lms.png"/>
</div>

## Next steps

From basic to complex pipelines, you've seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler's timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample.
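
In code, that pattern boils down to a minimal sketch like the one below (the `denoise` helper is just for illustration, assuming a `model` and `scheduler` loaded as in the DDPM example above):

```py
>>> import torch

>>> def denoise(model, scheduler, sample, num_inference_steps=50):
...     # the generic pattern: set the timesteps, then alternate between the
...     # model (noise prediction) and the scheduler (denoising update)
...     scheduler.set_timesteps(num_inference_steps)
...     for t in scheduler.timesteps:
...         with torch.no_grad():
...             noise_pred = model(sample, t).sample
...         sample = scheduler.step(noise_pred, t, sample).prev_sample
...     return sample
```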

This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers.

For your next steps, feel free to:

* Learn how to [build and contribute a pipeline](../conceptual/contribution) to 🧨 Diffusers. We can't wait to see what you'll come up with!
* Explore [existing pipelines](../api/pipelines/overview) in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/write_own_pipeline.md" />

### Perturbed-Attention Guidance
https://huggingface.co/docs/diffusers/main/using-diffusers/pag.md

# Perturbed-Attention Guidance

[Perturbed-Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) is a new diffusion sampling guidance that improves sample quality across both unconditional and conditional settings, achieving this without requiring further training or the integration of external modules. PAG is designed to progressively enhance the structure of synthesized samples throughout the denoising process by considering the self-attention mechanisms' ability to capture structural information. It involves generating intermediate samples with degraded structure by substituting selected self-attention maps in diffusion U-Net with an identity matrix, and guiding the denoising process away from these degraded samples.

This guide will show you how to use PAG for various tasks and use cases.


## General tasks

You can apply PAG to the [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline) for tasks such as text-to-image, image-to-image, and inpainting. To enable PAG for a specific task, load the pipeline using the [AutoPipeline](../api/pipelines/auto_pipeline) API with the `enable_pag=True` flag and the `pag_applied_layers` argument.

> [!TIP]
> 🤗 Diffusers currently only supports using PAG with selected SDXL pipelines and [PixArtSigmaPAGPipeline](/docs/diffusers/main/en/api/pipelines/pag#diffusers.PixArtSigmaPAGPipeline). But feel free to open a [feature request](https://github.com/huggingface/diffusers/issues/new/choose) if you want to add PAG support to a new pipeline!

<hfoptions id="tasks">
<hfoption id="Text-to-image">

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```

> [!TIP]
> The `pag_applied_layers` argument allows you to specify which layers PAG is applied to. Additionally, you can use `set_pag_applied_layers` method to update these layers after the pipeline has been created. Check out the [pag_applied_layers](#pag_applied_layers) section to learn more about applying PAG to other layers.

If you already have a pipeline created and loaded, you can enable PAG on it using the `from_pipe` API with the `enable_pag` flag. Internally, a PAG pipeline is created based on the pipeline and task you specified. In the example below, since we used `AutoPipelineForText2Image` and passed a `StableDiffusionXLPipeline`, a `StableDiffusionXLPAGPipeline` is created accordingly. Note that this does not require additional memory, and you will have both `StableDiffusionXLPipeline` and `StableDiffusionXLPAGPipeline` loaded and ready to use. You can read more about the `from_pipe` API and how to reuse pipelines in Diffusers [here](https://huggingface.co/docs/diffusers/using-diffusers/loading#reuse-a-pipeline).

```py
pipeline_sdxl = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pipe(pipeline_sdxl, enable_pag=True)
```

To generate an image, you will also need to pass a `pag_scale`. As `pag_scale` increases, images gain more semantically coherent structures and exhibit fewer artifacts. However, an overly large guidance scale can lead to smoother textures and slight saturation, similar to CFG. `pag_scale=3.0` is used in the official demo and works well in most use cases, but feel free to experiment and select the appropriate value according to your needs! PAG is disabled when `pag_scale=0`.

```py
prompt = "an insect robot preparing a delicious meal, anime style"

for pag_scale in [0.0, 3.0]:
    generator = torch.Generator(device="cpu").manual_seed(0)
    images = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=7.0,
        generator=generator,
        pag_scale=pag_scale,
    ).images
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_cfg_7.0_mid.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_mid.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="Image-to-image">

You can use PAG with image-to-image pipelines.

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```

If you already have an image-to-image pipeline and would like to enable PAG on it, you can run this:

```py
pipeline_t2i = AutoPipelineForImage2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_t2i, enable_pag=True)
```

It is also very easy to directly switch from a text-to-image pipeline to a PAG-enabled image-to-image pipeline:

```py
pipeline_t2i = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_t2i, enable_pag=True)
```

If you have a PAG-enabled text-to-image pipeline, you can directly switch to an image-to-image pipeline with PAG still enabled:

```py
pipeline_pag = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", enable_pag=True, torch_dtype=torch.float16)
pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_pag)
```

Now let's generate an image!

```py
pag_scale = 4.0
guidance_scale = 7.0

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
init_image = load_image(url)
prompt = "a dog catching a frisbee in the jungle"

generator = torch.Generator(device="cpu").manual_seed(0)
image = pipeline(
    prompt,
    image=init_image,
    strength=0.8,
    guidance_scale=guidance_scale,
    pag_scale=pag_scale,
    generator=generator).images[0]
```

</hfoption>
<hfoption id="Inpainting">

```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```

You can enable PAG on an existing inpainting pipeline like this

```py
pipeline_inpaint = AutoPipelineForInpainting.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipeline = AutoPipelineForInpainting.from_pipe(pipeline_inpaint, enable_pag=True)
```

This still works when your pipeline has a different task:

```py
pipeline_t2i = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipeline = AutoPipelineForInpainting.from_pipe(pipeline_t2i, enable_pag=True)
```

Let's generate an image!

```py
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")

prompt = "A majestic tiger sitting on a bench"

pag_scale = 3.0
guidance_scale = 7.5

generator = torch.Generator(device="cpu").manual_seed(1)
images = pipeline(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    strength=0.8,
    num_inference_steps=50,
    guidance_scale=guidance_scale,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```
</hfoption>
</hfoptions>

## PAG with ControlNet

To use PAG with ControlNet, first create a `controlnet`. Then, pass the `controlnet` and other PAG arguments to the `from_pretrained` method of the AutoPipeline for the specified task.

```py
from diffusers import AutoPipelineForText2Image, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    enable_pag=True,
    pag_applied_layers="mid",
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```

> [!TIP]
> If you already have a controlnet pipeline and want to enable PAG, you can use the `from_pipe` API: `AutoPipelineForText2Image.from_pipe(pipeline_controlnet, enable_pag=True)`

You can use the pipeline in the same way you normally use ControlNet pipelines, with the added option to specify a `pag_scale` parameter. Note that PAG works well for unconditional generation. In this example, we will generate an image without a prompt.

```py
from diffusers.utils import load_image
canny_image = load_image(
    "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_control_input.png"
)

controlnet_conditioning_scale = 0.5  # example conditioning strength; adjust for your use case

for pag_scale in [0.0, 3.0]:
    generator = torch.Generator(device="cpu").manual_seed(1)
    images = pipeline(
        prompt="",
        controlnet_conditioning_scale=controlnet_conditioning_scale,
        image=canny_image,
        num_inference_steps=50,
        guidance_scale=0,
        generator=generator,
        pag_scale=pag_scale,
    ).images
    images[0]
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
  </div>
</div>

## PAG with IP-Adapter

[IP-Adapter](https://hf.co/papers/2308.06721) is a popular model that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. You can enable PAG on a pipeline with IP-Adapter loaded.

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
from transformers import CLIPVisionModelWithProjection
import torch

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    enable_pag=True,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.bin")

pag_scale = 5.0
ip_adapter_scale = 0.8

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")

pipeline.set_ip_adapter_scale(ip_adapter_scale)
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=25,
    guidance_scale=3.0,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]

```

PAG reduces artifacts and improves the overall composition.

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_ipa_0.8.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_5.0_ipa_0.8.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
  </div>
</div>


## Configure parameters

### pag_applied_layers

The `pag_applied_layers` argument allows you to specify which layers PAG is applied to. By default, it applies only to the mid blocks. Changing this setting will significantly impact the output. You can use the `set_pag_applied_layers` method to adjust the PAG layers after the pipeline is created, helping you find the optimal layers for your model.

As an example, here are the images generated with `pag_layers = ["down.block_2"]` and `pag_layers = ["down.block_2", "up.block_1.attentions_0"]`:

```py
prompt = "an insect robot preparing a delicious meal, anime style"
pipeline.set_pag_applied_layers(pag_layers)
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt=prompt,
    num_inference_steps=25,
    guidance_scale=guidance_scale,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_down2_up1a0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">down.block_2 + up.block1.attentions_0</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_down2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">down.block_2</figcaption>
  </div>
</div>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/pag.md" />

### DiffEdit
https://huggingface.co/docs/diffusers/main/using-diffusers/diffedit.md

# DiffEdit


Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps:

1. the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text
2. the input image is encoded into latent space with DDIM
3. the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image

This guide will show you how to use DiffEdit to edit images without manually creating a mask.

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```

The [StableDiffusionDiffEditPipeline](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline) requires an image mask and a set of partially inverted latents. The image mask is generated from the [generate_mask()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.generate_mask) function, and includes two parameters, `source_prompt` and `target_prompt`. These parameters determine what to edit in the image. For example, if you want to change a bowl of *fruits* to a bowl of *pears*, then:

```py
source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears"
```

The partially inverted latents are generated from the [invert()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.invert) function, and it is generally a good idea to include a `prompt` or *caption* describing the image to help guide the inverse latent sampling process. The caption can often be your `source_prompt`, but feel free to experiment with other text descriptions!

Let's load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage:

```py
import torch
from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
    safety_checker=None,
    use_safetensors=True,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()
```

Load the image to edit:

```py
from diffusers.utils import load_image, make_image_grid

img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
raw_image
```

Use the [generate_mask()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.generate_mask) function to generate the image mask. You'll need to pass it the `source_prompt` and `target_prompt` to specify what to edit in the image:

```py
from PIL import Image

source_prompt = "a bowl of fruits"
target_prompt = "a basket of pears"
mask_image = pipeline.generate_mask(
    image=raw_image,
    source_prompt=source_prompt,
    target_prompt=target_prompt,
)
Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
```

Next, create the inverted latents and pass it a caption describing the image:

```py
inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents
```

Finally, pass the image mask and inverted latents to the pipeline. The `target_prompt` becomes the `prompt` now, and the `source_prompt` is used as the `negative_prompt`:

```py
output_image = pipeline(
    prompt=target_prompt,
    mask_image=mask_image,
    image_latents=inv_latents,
    negative_prompt=source_prompt,
).images[0]
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/blob/main/assets/target.png?raw=true"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">edited image</figcaption>
  </div>
</div>

## Generate source and target embeddings

The source and target embeddings can be automatically generated with the [Flan-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5) model instead of creating them manually.

Load the Flan-T5 model and tokenizer from the 🤗 Transformers library:

```py
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)
```

Provide some initial text to prompt the model to generate the source and target prompts.

```py
source_concept = "bowl"
target_concept = "basket"

source_text = f"Provide a caption for images containing a {source_concept}. "
"The captions should be in English and should be no longer than 150 characters."

target_text = f"Provide a caption for images containing a {target_concept}. "
"The captions should be in English and should be no longer than 150 characters."
```

Next, create a utility function to generate the prompts:

```py
@torch.no_grad()
def generate_prompts(input_prompt):
    input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")

    outputs = model.generate(
        input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

source_prompts = generate_prompts(source_text)
target_prompts = generate_prompts(target_text)
print(source_prompts)
print(target_prompts)
```

> [!TIP]
> Check out the [generation strategy](https://huggingface.co/docs/transformers/main/en/generation_strategies) guide if you're interested in learning more about strategies for generating different quality text.

Load the text encoder model used by the [StableDiffusionDiffEditPipeline](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline) to encode the text. You'll use the text encoder to compute the text embeddings:

```py
import torch
from diffusers import StableDiffusionDiffEditPipeline

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()

@torch.no_grad()
def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"):
    embeddings = []
    for sent in sentences:
        text_inputs = tokenizer(
            sent,
            padding="max_length",
            max_length=tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )
        text_input_ids = text_inputs.input_ids
        prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
        embeddings.append(prompt_embeds)
    return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)

source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder)
target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder)
```

Finally, pass the embeddings to the [generate_mask()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.generate_mask) and [invert()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.invert) functions, and to the pipeline to generate the image:

```diff
  from diffusers import DDIMInverseScheduler, DDIMScheduler
  from diffusers.utils import load_image, make_image_grid
  from PIL import Image

  pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
  pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)

  img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
  raw_image = load_image(img_url).resize((768, 768))

  mask_image = pipeline.generate_mask(
      image=raw_image,
-     source_prompt=source_prompt,
-     target_prompt=target_prompt,
+     source_prompt_embeds=source_embeds,
+     target_prompt_embeds=target_embeds,
  )

  inv_latents = pipeline.invert(
-     prompt=source_prompt,
+     prompt_embeds=source_embeds,
      image=raw_image,
  ).latents

  output_image = pipeline(
      mask_image=mask_image,
      image_latents=inv_latents,
-     prompt=target_prompt,
-     negative_prompt=source_prompt,
+     prompt_embeds=target_embeds,
+     negative_prompt_embeds=source_embeds,
  ).images[0]
  mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L")
  make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```

## Generate a caption for inversion

While you can use the `source_prompt` as a caption to help generate the partially inverted latents, you can also use the [BLIP](https://huggingface.co/docs/transformers/model_doc/blip) model to automatically generate a caption.

Load the BLIP model and processor from the 🤗 Transformers library:

```py
import torch
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True)
```

Create a utility function to generate a caption from the input image:

```py
@torch.no_grad()
def generate_caption(images, caption_generator, caption_processor):
    text = "a photograph of"

    inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype)
    caption_generator.to("cuda")
    outputs = caption_generator.generate(**inputs, max_new_tokens=128)

    # offload caption generator
    caption_generator.to("cpu")

    caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0]
    return caption
```

Load an input image and generate a caption for it using the `generate_caption` function:

```py
from diffusers.utils import load_image

img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
caption = generate_caption(raw_image, model, processor)
```

<div class="flex justify-center">
    <figure>
        <img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"/>
        <figcaption class="text-center">generated caption: "a photograph of a bowl of fruit on a table"</figcaption>
    </figure>
</div>

Now you can drop the caption into the [invert()](/docs/diffusers/main/en/api/pipelines/diffedit#diffusers.StableDiffusionDiffEditPipeline.invert) function to generate the partially inverted latents!
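
For example, assuming the DiffEdit pipeline, schedulers, and `raw_image` set up as in the earlier sections, the generated caption can be used directly as the inversion prompt:

```py
# use the automatically generated caption to guide the inverse latent sampling
inv_latents = pipeline.invert(prompt=caption, image=raw_image).latents
```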


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/diffedit.md" />

### Marigold Computer Vision
https://huggingface.co/docs/diffusers/main/using-diffusers/marigold_usage.md

# Marigold Computer Vision

**Marigold** is a diffusion-based [method](https://huggingface.co/papers/2312.02145) and a collection of [pipelines](../api/pipelines/marigold) designed for 
dense computer vision tasks, including **monocular depth prediction**, **surface normals estimation**, and **intrinsic 
image decomposition**.

This guide will walk you through using Marigold to generate fast and high-quality predictions for images and videos.

Each pipeline is tailored for a specific computer vision task, processing an input RGB image and generating a 
corresponding prediction.
Currently, the following computer vision tasks are implemented:

| Pipeline                                                                                                                                          | Recommended Model Checkpoints                                                                                                                                                                           |                              Spaces (Interactive Apps)                               | Predicted Modalities                                                                                                                                                               |
|---------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [MarigoldDepthPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_depth.py)           | [prs-eth/marigold-depth-v1-1](https://huggingface.co/prs-eth/marigold-depth-v1-1)                                                                                                                       |          [Depth Estimation](https://huggingface.co/spaces/prs-eth/marigold)          | [Depth](https://en.wikipedia.org/wiki/Depth_map), [Disparity](https://en.wikipedia.org/wiki/Binocular_disparity)                                                                   |
| [MarigoldNormalsPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_normals.py)       | [prs-eth/marigold-normals-v1-1](https://huggingface.co/prs-eth/marigold-normals-v1-1)                                                                                                                   | [Surface Normals Estimation](https://huggingface.co/spaces/prs-eth/marigold-normals) | [Surface normals](https://en.wikipedia.org/wiki/Normal_mapping)                                                                                                                    |
| [MarigoldIntrinsicsPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/marigold/pipeline_marigold_intrinsics.py) | [prs-eth/marigold-iid-appearance-v1-1](https://huggingface.co/prs-eth/marigold-iid-appearance-v1-1),<br>[prs-eth/marigold-iid-lighting-v1-1](https://huggingface.co/prs-eth/marigold-iid-lighting-v1-1) | [Intrinsic Image Decomposition](https://huggingface.co/spaces/prs-eth/marigold-iid)  | [Albedo](https://en.wikipedia.org/wiki/Albedo), [Materials](https://www.n.aiq3d.com/wiki/roughnessmetalnessao-map), [Lighting](https://en.wikipedia.org/wiki/Diffuse_reflection)   |

All original checkpoints are available under the [PRS-ETH](https://huggingface.co/prs-eth/) organization on Hugging Face.
They are designed for use with diffusers pipelines and the [original codebase](https://github.com/prs-eth/marigold), which can also be used to train 
new model checkpoints. 
The following is a summary of the recommended checkpoints, all of which produce reliable results with 1 to 4 steps. 

| Checkpoint                                                                                          | Modality     | Comment                                                                                                                                                           |
|-----------------------------------------------------------------------------------------------------|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [prs-eth/marigold-depth-v1-1](https://huggingface.co/prs-eth/marigold-depth-v1-1)                   | Depth        | Affine-invariant depth prediction assigns each pixel a value between 0 (near plane) and 1 (far plane), with both planes determined by the model during inference. |
| [prs-eth/marigold-normals-v1-1](https://huggingface.co/prs-eth/marigold-normals-v1-1)               | Normals      | The surface normals predictions are unit-length 3D vectors in the screen-space camera frame, with values in the range from -1 to 1.                                     |
| [prs-eth/marigold-iid-appearance-v1-1](https://huggingface.co/prs-eth/marigold-iid-appearance-v1-1) | Intrinsics   | InteriorVerse decomposition is comprised of Albedo and two BRDF material properties: Roughness and Metallicity.                                                   | 
| [prs-eth/marigold-iid-lighting-v1-1](https://huggingface.co/prs-eth/marigold-iid-lighting-v1-1)     | Intrinsics   | HyperSim decomposition of an image \\(I\\) is comprised of Albedo \\(A\\), Diffuse shading \\(S\\), and Non-diffuse residual \\(R\\): \\(I = A*S+R\\).            | 

The examples below are mostly given for depth prediction, but they can be universally applied to other supported 
modalities.
We showcase the predictions using the same input image of Albert Einstein generated by Midjourney.
This makes it easier to compare visualizations of the predictions across various modalities and checkpoints.

<div class="flex gap-4" style="justify-content: center; width: 100%;">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://marigoldmonodepth.github.io/images/einstein.jpg"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Example input image for all Marigold pipelines
    </figcaption>
  </div>
</div>

## Depth Prediction

To get a depth prediction, load the `prs-eth/marigold-depth-v1-1` checkpoint into [MarigoldDepthPipeline](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.MarigoldDepthPipeline), 
put the image through the pipeline, and save the predictions:

```python
import diffusers
import torch

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

depth = pipe(image)

vis = pipe.image_processor.visualize_depth(depth.prediction)
vis[0].save("einstein_depth.png")

depth_16bit = pipe.image_processor.export_depth_to_16bit_png(depth.prediction)
depth_16bit[0].save("einstein_depth_16bit.png")
```

The [visualize_depth()](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_depth) function applies one of 
[matplotlib's colormaps](https://matplotlib.org/stable/users/explain/colors/colormaps.html) (`Spectral` by default) to map the predicted pixel values from a single-channel `[0, 1]` 
depth range into an RGB image.
With the `Spectral` colormap, near pixels are painted red, and far pixels are blue.
The 16-bit PNG file stores the single channel values mapped linearly from the `[0, 1]` range into `[0, 65535]`.
Below are the raw and the visualized predictions. The darker and closer areas (mustache) are easier to distinguish in 
the visualization.
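
If you want to reproduce the 16-bit mapping yourself, a minimal sketch (with hypothetical variable names, assuming `depth.prediction` from the snippet above is a NumPy-compatible array with values in `[0, 1]`) looks like this:

```python
import numpy as np

# linear mapping from the [0, 1] depth range into the [0, 65535] range used by a 16-bit PNG
depth_map = np.asarray(depth.prediction[0]).squeeze()
depth_uint16 = (depth_map * 65535.0).round().astype(np.uint16)
```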

<div class="flex gap-4">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_depth_16bit.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Predicted depth (16-bit PNG)
    </figcaption>
  </div>
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_depth.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Predicted depth visualization (Spectral)
    </figcaption>
  </div>
</div>

## Surface Normals Estimation

Load the `prs-eth/marigold-normals-v1-1` checkpoint into [MarigoldNormalsPipeline](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.MarigoldNormalsPipeline), put the image through the 
pipeline, and save the predictions:

```python
import diffusers
import torch

pipe = diffusers.MarigoldNormalsPipeline.from_pretrained(
    "prs-eth/marigold-normals-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

normals = pipe(image)

vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("einstein_normals.png")
```

The [visualize_normals()](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_normals) function maps the three-dimensional 
prediction with pixel values in the range `[-1, 1]` into an RGB image.
The visualization function supports flipping surface normals axes to make the visualization compatible with other 
choices of the frame of reference.
Conceptually, each pixel is painted according to the surface normal vector in the frame of reference, where `X` axis 
points right, `Y` axis points up, and `Z` axis points at the viewer.
Below is the visualized prediction:

<div class="flex gap-4" style="justify-content: center; width: 100%;">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_normals.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Predicted surface normals visualization
    </figcaption>
  </div>
</div>

In this example, there is almost certainly a point on the tip of the nose where the surface normal points 
straight at the viewer, meaning that its coordinates are `[0, 0, 1]`.
This vector maps to the RGB value `[128, 128, 255]`, which corresponds to a violet-blue color.
Similarly, a surface normal on the cheek in the right part of the image has a large `X` component, which increases the 
red hue.
Points on the shoulders pointing up with a large `Y` component promote a green hue.
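
As a rough illustration of this color coding (the helper below is not part of the Marigold API), a unit normal `n` with components in `[-1, 1]` is mapped to RGB as `(n + 1) / 2 * 255`:

```python
import numpy as np

# hypothetical helper: map a unit surface normal from [-1, 1] to an 8-bit RGB value
def normal_to_rgb(n):
    return np.round((np.asarray(n, dtype=np.float32) + 1.0) / 2.0 * 255.0).astype(np.uint8)

print(normal_to_rgb([0, 0, 1]))  # [128 128 255] -> the violet-blue of the nose tip
```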

## Intrinsic Image Decomposition

Marigold provides two models for Intrinsic Image Decomposition (IID): "Appearance" and "Lighting". 
Each model produces Albedo maps, derived from InteriorVerse and Hypersim annotations, respectively.

- The "Appearance" model also estimates Material properties: Roughness and Metallicity.
- The "Lighting" model generates Diffuse Shading and Non-diffuse Residual.

Here is the sample code saving predictions made by the "Appearance" model:

```python
import diffusers
import torch

pipe = diffusers.MarigoldIntrinsicsPipeline.from_pretrained(
    "prs-eth/marigold-iid-appearance-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

intrinsics = pipe(image)

vis = pipe.image_processor.visualize_intrinsics(intrinsics.prediction, pipe.target_properties)
vis[0]["albedo"].save("einstein_albedo.png")
vis[0]["roughness"].save("einstein_roughness.png")
vis[0]["metallicity"].save("einstein_metallicity.png")
```

Another example demonstrating the predictions made by the "Lighting" model:

```python
import diffusers
import torch

pipe = diffusers.MarigoldIntrinsicsPipeline.from_pretrained(
    "prs-eth/marigold-iid-lighting-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

intrinsics = pipe(image)

vis = pipe.image_processor.visualize_intrinsics(intrinsics.prediction, pipe.target_properties)
vis[0]["albedo"].save("einstein_albedo.png")
vis[0]["shading"].save("einstein_shading.png")
vis[0]["residual"].save("einstein_residual.png")
```

Both models share the same pipeline while supporting different decomposition types.
The exact decomposition parameterization (e.g., sRGB vs. linear space) is stored in the 
`pipe.target_properties` dictionary, which is passed into the 
[visualize_intrinsics()](/docs/diffusers/main/en/api/pipelines/marigold#diffusers.pipelines.marigold.MarigoldImageProcessor.visualize_intrinsics) function.
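
To check which decomposition targets and parameterization a loaded checkpoint uses, you can simply inspect this dictionary (continuing from either snippet above):

```python
# print the decomposition targets and parameterization of the loaded checkpoint
print(pipe.target_properties)
```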

Below are some examples showcasing the predicted decomposition outputs. 
All modalities can be inspected in the 
[Intrinsic Image Decomposition](https://huggingface.co/spaces/prs-eth/marigold-iid) Space.

<div class="flex gap-4">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/8c7986eaaab5eb9604eb88336311f46a7b0ff5ab/marigold/marigold_einstein_albedo.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Predicted albedo ("Appearance" model)
    </figcaption>
  </div>
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/8c7986eaaab5eb9604eb88336311f46a7b0ff5ab/marigold/marigold_einstein_diffuse.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Predicted diffuse shading ("Lighting" model)
    </figcaption>
  </div>
</div>

## Speeding up inference

The above quick start snippets are already optimized for quality and speed, loading the checkpoint, utilizing the 
`fp16` variant of weights and computation, and performing the default number (4) of denoising diffusion steps.
The first step to accelerate inference, at the expense of prediction quality, is to reduce the denoising diffusion 
steps to the minimum:

```diff
  import diffusers
  import torch

  pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
      "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
  ).to("cuda")

  image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")
  
- depth = pipe(image)
+ depth = pipe(image, num_inference_steps=1)
```

With this change, the `pipe` call completes in 280ms on an RTX 3090 GPU.
Internally, the input image is first encoded using the Stable Diffusion VAE encoder, followed by a single denoising 
step performed by the U-Net. 
Finally, the prediction latent is decoded with the VAE decoder into pixel space.
In this setup, two out of three module calls are dedicated to converting between the pixel and latent spaces of the LDM.
Since Marigold's latent space is compatible with Stable Diffusion 2.0, inference can be accelerated by more than 3x, 
reducing the call time to 85ms on an RTX 3090, by using a [lightweight replacement of the SD VAE](../api/models/autoencoder_tiny). 
Note that using a lightweight VAE may slightly reduce the visual quality of the predictions.

```diff
  import diffusers
  import torch

  pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
      "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
  ).to("cuda")

+ pipe.vae = diffusers.AutoencoderTiny.from_pretrained(
+     "madebyollin/taesd", torch_dtype=torch.float16
+ ).cuda()

  image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

  depth = pipe(image, num_inference_steps=1)
```

So far, we have optimized the number of diffusion steps and model components. Self-attention operations account for a 
significant portion of computations. 
Speeding them up can be achieved by using a more efficient attention processor:

```diff
  import diffusers
  import torch
+ from diffusers.models.attention_processor import AttnProcessor2_0

  pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
      "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
  ).to("cuda")

+ pipe.vae.set_attn_processor(AttnProcessor2_0()) 
+ pipe.unet.set_attn_processor(AttnProcessor2_0())

  image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

  depth = pipe(image, num_inference_steps=1)
```

Finally, as suggested in [Optimizations](../optimization/fp16#torchcompile), enabling `torch.compile` can further enhance performance depending on 
the target hardware.
However, compilation incurs a significant overhead during the first pipeline invocation, making it beneficial only when 
the same pipeline instance is called repeatedly, such as within a loop.

```diff
  import diffusers
  import torch
  from diffusers.models.attention_processor import AttnProcessor2_0

  pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
      "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
  ).to("cuda")

  pipe.vae.set_attn_processor(AttnProcessor2_0()) 
  pipe.unet.set_attn_processor(AttnProcessor2_0())

+ pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True)
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

  image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

  depth = pipe(image, num_inference_steps=1)
```

## Maximizing Precision and Ensembling

Marigold pipelines have a built-in ensembling mechanism combining multiple predictions from different random latents.
This is a brute-force way of improving the precision of predictions, capitalizing on the generative nature of diffusion.
The ensembling path is activated automatically when the `ensemble_size` argument is set to a value greater than or equal to `3`.
When aiming for maximum precision, it makes sense to adjust `num_inference_steps` together with `ensemble_size`.
The recommended values vary across checkpoints but primarily depend on the scheduler type.
The effect of ensembling is particularly visible with surface normals:

```diff
  import diffusers

  pipe = diffusers.MarigoldNormalsPipeline.from_pretrained("prs-eth/marigold-normals-v1-1").to("cuda")

  image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

- normals = pipe(image)
+ normals = pipe(image, num_inference_steps=10, ensemble_size=5)

  vis = pipe.image_processor.visualize_normals(normals.prediction)
  vis[0].save("einstein_normals.png")
```

<div class="flex gap-4">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_lcm_normals.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Surface normals, no ensembling
    </figcaption>
  </div>
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_normals.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Surface normals, with ensembling
    </figcaption>
  </div>
</div>

As can be seen, areas with fine-grained structures, such as hair, receive more conservative and, on average, more accurate predictions.
Such a result is more suitable for precision-sensitive downstream tasks, such as 3D reconstruction.

## Frame-by-frame Video Processing with Temporal Consistency

Due to Marigold's generative nature, each prediction is unique and defined by the random noise sampled for the latent 
initialization.
This becomes an obvious drawback compared to traditional end-to-end dense regression networks, as exemplified in the 
following videos:

<div class="flex gap-4">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama.gif"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">Input video</figcaption>
  </div>
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama_depth_independent.gif"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">Marigold Depth applied to input video frames independently</figcaption>
  </div>
</div>

To address this issue, it is possible to pass the `latents` argument to the pipelines, which defines the starting point of diffusion.
Empirically, we found that a convex combination of the very same starting-point noise latent and the latent corresponding to the previous frame's prediction gives sufficiently smooth results, as implemented in the snippet below:

```python
import imageio
import diffusers
import torch
from diffusers.models.attention_processor import AttnProcessor2_0
from PIL import Image
from tqdm import tqdm

device = "cuda"
path_in = "https://huggingface.co/spaces/prs-eth/marigold-lcm/resolve/c7adb5427947d2680944f898cd91d386bf0d4924/files/video/obama.mp4"
path_out = "obama_depth.gif"

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
).to(device)
pipe.vae = diffusers.AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to(device)
pipe.unet.set_attn_processor(AttnProcessor2_0())
pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
pipe.set_progress_bar_config(disable=True)

with imageio.get_reader(path_in) as reader:
    size = reader.get_meta_data()['size']
    last_frame_latent = None
    latent_common = torch.randn(
        (1, 4, 768 * size[1] // (8 * max(size)), 768 * size[0] // (8 * max(size)))
    ).to(device=device, dtype=torch.float16)

    out = []
    for frame_id, frame in tqdm(enumerate(reader), desc="Processing Video"):
        frame = Image.fromarray(frame)
        latents = latent_common
        if last_frame_latent is not None:
            latents = 0.9 * latents + 0.1 * last_frame_latent

        depth = pipe(
            frame,
            num_inference_steps=1,
            match_input_resolution=False, 
            latents=latents, 
            output_latent=True,
        )
        last_frame_latent = depth.latent
        out.append(pipe.image_processor.visualize_depth(depth.prediction)[0])

    diffusers.utils.export_to_gif(out, path_out, fps=reader.get_meta_data()['fps'])
```

Here, the diffusion process starts from the given computed latent.
The pipeline is called with `output_latent=True` so that `depth.latent` is available and can be mixed into the next frame's latent initialization.
The result is much more stable now:

<div class="flex gap-4">
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama_depth_independent.gif"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">Marigold Depth applied to input video frames independently</figcaption>
  </div>
  <div style="flex: 1 1 50%; max-width: 50%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_obama_depth_consistent.gif"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">Marigold Depth with forced latents initialization</figcaption>
  </div>
</div>

## Marigold for ControlNet

A very common application for depth prediction with diffusion models comes in conjunction with ControlNet.
Depth crispness plays a crucial role in obtaining high-quality results from ControlNet.
As seen in comparisons with other methods above, Marigold excels at that task.
The snippet below demonstrates how to load an image, compute depth, and pass it into ControlNet in a compatible format:

```python
import torch
import diffusers

device = "cuda"
generator = torch.Generator(device=device).manual_seed(2024)
image = diffusers.utils.load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_depth_source.png"
)

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-1", torch_dtype=torch.float16, variant="fp16"
).to(device)

depth_image = pipe(image, generator=generator).prediction
depth_image = pipe.image_processor.visualize_depth(depth_image, color_map="binary")
depth_image[0].save("motorcycle_controlnet_depth.png")

controlnet = diffusers.ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to(device)
pipe = diffusers.StableDiffusionXLControlNetPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16", controlnet=controlnet
).to(device)
pipe.scheduler = diffusers.DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

controlnet_out = pipe(
    prompt="high quality photo of a sports bike, city",
    negative_prompt="",
    guidance_scale=6.5,
    num_inference_steps=25,
    image=depth_image,
    controlnet_conditioning_scale=0.7,
    control_guidance_end=0.7,
    generator=generator,
).images
controlnet_out[0].save("motorcycle_controlnet_out.png")
```

<div class="flex gap-4">
  <div style="flex: 1 1 33%; max-width: 33%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_depth_source.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Input image
    </figcaption>
  </div>
  <div style="flex: 1 1 33%; max-width: 33%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/motorcycle_controlnet_depth.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Depth in the format compatible with ControlNet
    </figcaption>
  </div>
  <div style="flex: 1 1 33%; max-width: 33%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/motorcycle_controlnet_out.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      ControlNet generation, conditioned on depth and prompt: "high quality photo of a sports bike, city"
    </figcaption>
  </div>
</div>

## Quantitative Evaluation

To evaluate Marigold quantitatively in standard leaderboards and benchmarks (such as NYU, KITTI, and other datasets), 
follow the evaluation protocol outlined in the paper: load the full-precision (fp32) model and use appropriate values 
for `num_inference_steps` and `ensemble_size`.
Optionally seed randomness to ensure reproducibility. 
Maximizing `batch_size` will deliver maximum device utilization.

```python
import diffusers
import torch

device = "cuda"
seed = 2024

generator = torch.Generator(device=device).manual_seed(seed)
pipe = diffusers.MarigoldDepthPipeline.from_pretrained("prs-eth/marigold-depth-v1-1").to(device)

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

depth = pipe(
    image, 
    num_inference_steps=4,  # set according to the evaluation protocol from the paper
    ensemble_size=10,       # set according to the evaluation protocol from the paper
    generator=generator,
)

# evaluate metrics
```

## Using Predictive Uncertainty

The ensembling mechanism built into Marigold pipelines combines multiple predictions obtained from different random 
latents.
As a side effect, it can be used to quantify epistemic (model) uncertainty; simply set `ensemble_size` to a value greater than or equal to 3 and pass `output_uncertainty=True`.
The resulting uncertainty will be available in the `uncertainty` field of the output.
It can be visualized as follows:

```python
import diffusers
import torch

pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-v1-1", variant="fp16", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("https://marigoldmonodepth.github.io/images/einstein.jpg")

depth = pipe(
    image,
    ensemble_size=10,  # any number >= 3
    output_uncertainty=True,
)

uncertainty = pipe.image_processor.visualize_uncertainty(depth.uncertainty)
uncertainty[0].save("einstein_depth_uncertainty.png")
```

<div class="flex gap-4">
  <div style="flex: 1 1 33%; max-width: 33%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_depth_uncertainty.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Depth uncertainty
    </figcaption>
  </div>
  <div style="flex: 1 1 33%; max-width: 33%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/marigold/marigold_einstein_normals_uncertainty.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Surface normals uncertainty
    </figcaption>
  </div>
  <div style="flex: 1 1 33%; max-width: 33%;">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/4f83035d84a24e5ec44fdda129b1d51eba12ce04/marigold/marigold_einstein_albedo_uncertainty.png"/>
    <figcaption class="mt-1 text-center text-sm text-gray-500">
      Albedo uncertainty
    </figcaption>
  </div>
</div>

The interpretation of uncertainty is straightforward: higher values (white) correspond to pixels where the model struggles to make consistent predictions.
- The depth model exhibits the most uncertainty around discontinuities, where object depth changes abruptly.
- The surface normals model is least confident in fine-grained structures like hair and in dark regions such as the 
collar area.
- Albedo uncertainty is represented as an RGB image, as it captures uncertainty independently for each color channel, 
unlike depth and surface normals. It is also higher in shaded regions and at discontinuities.

## Conclusion

We hope Marigold proves valuable for your downstream tasks, whether as part of a broader generative workflow or for 
perception-based applications like 3D reconstruction.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/marigold_usage.md" />

### Model formats
https://huggingface.co/docs/diffusers/main/using-diffusers/other-formats.md

# Model formats

Diffusion models are typically stored in the Diffusers format or single-file format. Model files can be stored in various file types such as safetensors, dduf, or ckpt.

> [!TIP]
> Format refers to whether the weights are stored in a directory structure and file refers to the file type.

This guide will show you how to load pipelines and models from these formats and files.

## Diffusers format

The Diffusers format stores each model (UNet, transformer, text encoder) in a separate subfolder. There are several benefits to storing models separately.

- Faster overall pipeline initialization because you can load the individual model you need or load them all in parallel.
- Reduced memory usage because you don't need to load all the pipeline components if you only need one model. [Reuse](./loading#reusing-models-in-multiple-pipelines) a model that is shared between multiple pipelines.
- Lower storage requirements because common models shared between multiple pipelines are only downloaded once.
- Flexibility to use new or improved models in a pipeline.
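
Because each model lives in its own subfolder, a single component can be loaded on its own. A minimal sketch (the repository id is only an example of a Diffusers-format checkpoint):

```py
import torch
from diffusers import UNet2DConditionModel

# Load only the UNet subfolder of a Diffusers-format repository,
# without downloading the text encoders or VAE.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", torch_dtype=torch.float16
)
```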

## Single file format

A single-file format stores *all* the model (UNet, transformer, text encoder) weights in a single file. Benefits of single-file formats include the following.

- Greater compatibility with [ComfyUI](https://github.com/comfyanonymous/ComfyUI) or [Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
- Easier to download and share a single file.

Use [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) to load a single file.

```py
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16,
    device_map="cuda"
)
```

The [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) method also supports passing new models or schedulers.

```py
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors", torch_dtype=torch.bfloat16
)
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)
```

### Configuration options

Diffusers format models have a `config.json` file in their repositories with important attributes such as the number of layers and attention heads. The [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) method automatically determines the appropriate config to use from `config.json`. This may fail in a few rare instances, in which case you should use the `config` argument.

You should also use the `config` argument if the models in a pipeline differ from the original implementation or if a checkpoint doesn't have the necessary metadata to determine the correct config.

```py
from diffusers import StableDiffusionXLPipeline

ckpt_path = "https://huggingface.co/segmind/SSD-1B/blob/main/SSD-1B.safetensors"

pipeline = StableDiffusionXLPipeline.from_single_file(ckpt_path, config="segmind/SSD-1B")
```

Diffusers attempts to infer the pipeline components based on the signature types of the pipeline class when using `original_config` with `local_files_only=True`. It won't download the config files from a Hub repository to avoid backwards-breaking changes when you can't connect to the internet. This method isn't as reliable as providing a path to a local model with the `config` argument and may lead to errors. To avoid them, first run the pipeline with `local_files_only=False` to download the config files to the local cache.
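
As a rough sketch of that scenario (both paths are placeholders for files that already exist on disk, and `original_config` points to a local copy of the original SDXL YAML config):

```py
from diffusers import StableDiffusionXLPipeline

# Placeholder paths: a locally stored single-file checkpoint and the original YAML config.
ckpt_path = "path/to/sd_xl_base_1.0.safetensors"
original_config_path = "path/to/sd_xl_base.yaml"

# With local_files_only=True, Diffusers infers the components from the pipeline signature
# instead of fetching config files from the Hub.
pipeline = StableDiffusionXLPipeline.from_single_file(
    ckpt_path, original_config=original_config_path, local_files_only=True
)
```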

Override default configs by passing the arguments directly to [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file). The examples below demonstrate how to override the configs in a pipeline or model.

```py
from diffusers import StableDiffusionXLInstructPix2PixPipeline

ckpt_path = "https://huggingface.co/stabilityai/cosxl/blob/main/cosxl_edit.safetensors"
pipeline = StableDiffusionXLInstructPix2PixPipeline.from_single_file(
    ckpt_path, config="diffusers/sdxl-instructpix2pix-768", is_cosxl_edit=True
)
```

```py
from diffusers import UNet2DConditionModel

ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors"
model = UNet2DConditionModel.from_single_file(ckpt_path, upcast_attention=True)
```

### Local files

The [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) method attempts to configure a pipeline or model by inferring the model type from the keys in the checkpoint file. For example, any single file checkpoint based on the Stable Diffusion XL base model is configured from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).

If you're working with local files, download the config files with the [snapshot_download](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) method and the model checkpoint with [hf_hub_download](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.hf_hub_download). These files are downloaded to your [cache directory](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache), but you can download them to a specific directory with the `local_dir` argument.

```py
from huggingface_hub import hf_hub_download, snapshot_download
from diffusers import StableDiffusionXLPipeline

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors"
)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"]
)

pipeline = StableDiffusionXLPipeline.from_single_file(
    my_local_checkpoint_path, config=my_local_config_path, local_files_only=True
)
```

### Symlink

If you're working with a file system that does not support symlinking, download the checkpoint file to a local directory first with the `local_dir` parameter. Using the `local_dir` parameter automatically disables symlinks.

```py
from huggingface_hub import hf_hub_download, snapshot_download
from diffusers import StableDiffusionXLPipeline

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors"
    local_dir="my_local_checkpoints",
)
print("My local checkpoint: ", my_local_checkpoint_path)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"]
)
print("My local config: ", my_local_config_path)
```

Pass these paths to [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file).

```py
pipeline = StableDiffusionXLPipeline.from_single_file(
    my_local_checkpoint_path, config=my_local_config_path, local_files_only=True
)
```

## File types

Models can be stored in several file types. Safetensors is the most common file type, but you may encounter other file types on the Hub or from the wider diffusion community.

### safetensors

[Safetensors](https://hf.co/docs/safetensors) is a safe and fast file type for securely storing and loading tensors. It restricts the header size to limit certain types of attacks, supports lazy loading (useful for distributed setups), and generally loads faster.

Diffusers loads safetensors files by default if they are available (the Safetensors library is a required dependency of Diffusers).

Use [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) or [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) to load safetensor files.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)

pipeline = DiffusionPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16,
)
```

If you're using a checkpoint trained with a Diffusers training script, metadata, such as the LoRA configuration, is automatically saved. When the file is loaded, the metadata is parsed to correctly configure the LoRA and avoid missing or incorrect LoRA configs. Inspect the metadata of a safetensors file by clicking on the <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/safetensors/logo.png" alt="safetensors logo" style="vertical-align: middle; display: inline-block; max-height: 0.8em; max-width: 0.8em; margin: 0; padding: 0; line-height: 1;"> logo next to the file on the Hub.

Save the metadata for LoRAs that aren't trained with Diffusers with either `transformer_lora_adapter_metadata` or `unet_lora_adapter_metadata` depending on your model. For the text encoder, use the `text_encoder_lora_adapter_metadata` and `text_encoder_2_lora_adapter_metadata` arguments in [save_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.FluxLoraLoaderMixin.save_lora_weights). This is only supported for safetensors files.

```py
import torch
from diffusers import FluxPipeline

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("linoyts/yarn_art_Flux_LoRA")
pipeline.save_lora_weights(
    save_directory="path/to/save/lora",  # placeholder directory for the exported LoRA weights
    text_encoder_lora_adapter_metadata={"r": 8, "lora_alpha": 8},
    text_encoder_2_lora_adapter_metadata={"r": 8, "lora_alpha": 8}
)
```

### ckpt

Older model weights are commonly saved with Python's [pickle](https://docs.python.org/3/library/pickle.html) utility in a ckpt file.

Pickled files may be unsafe because they can be exploited to execute malicious code. It is recommended to use safetensors files or convert the weights to safetensors files.

Use [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) to load a ckpt file.

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_single_file(
    "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt"
)
```

### dduf

> [!TIP]
> DDUF is an experimental file type and the API may change. Refer to the DDUF [docs](https://huggingface.co/docs/hub/dduf) to learn more.

DDUF is a file type designed to unify different diffusion model distribution methods and weight-saving formats. It is a standardized and flexible method to package all components of a diffusion model into a single file, providing a balance between the Diffusers and single-file formats.

Use the `dduf_file` argument in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to load a DDUF file. You can also load quantized dduf files as long as they are stored in the Diffusers format.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "DDUF/FLUX.1-dev-DDUF",
    dduf_file="FLUX.1-dev.dduf",
    torch_dtype=torch.bfloat16,
    device_map="cuda"
)
```

To save a pipeline as a dduf file, use the [export_folder_as_dduf](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/serialization#huggingface_hub.export_folder_as_dduf) utility.

```py
import torch
from diffusers import DiffusionPipeline
from huggingface_hub import export_folder_as_dduf

pipeline = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

save_folder = "flux-dev"
pipeline.save_pretrained("flux-dev")
export_folder_as_dduf("flux-dev.dduf", folder_path=save_folder)
```

## Converting formats and files

Diffusers provides scripts and methods to convert format and files to enable broader support across the diffusion ecosystem.

Take a look at the [diffusers/scripts](https://github.com/huggingface/diffusers/tree/main/scripts) folder to find a conversion script. Scripts with `to_diffusers` appended at the end convert a model to the Diffusers format. Each script has a specific set of arguments for configuring the conversion, so make sure you check what arguments are available.

The example below converts a model stored in Diffusers format to a single-file format. Provide the path to the model to convert and where to save the converted model. You can optionally specify what file type and data type to save the model as.

```bash
python convert_diffusers_to_original_sdxl.py --model_path path/to/model/to/convert --checkpoint_path path/to/save/model/to --use_safetensors
```

The [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained) method also saves a model in the Diffusers format and takes care of creating subfolders for each model. It saves the weights as safetensors files by default.

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
)
pipeline.save_pretrained("path/to/save/model")  # placeholder directory for the Diffusers-format files
```

Finally, you can use a Space like [SD To Diffusers](https://hf.co/spaces/diffusers/sd-to-diffusers) or [SD-XL To Diffusers](https://hf.co/spaces/diffusers/sdxl-to-diffusers) to convert models to the Diffusers format. It'll open a PR on your model repository with the converted files. This is the easiest way to convert a model, but it may fail for more complicated models. Using a conversion script is more reliable.

## Resources

- Learn more about the design decisions and why safetensor files are preferred for saving and loading model weights in the [Safetensors audited as really safe and becoming the default](https://blog.eleuther.ai/safetensors-security-audit/) blog post.



<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/other-formats.md" />

### Sharing pipelines and models
https://huggingface.co/docs/diffusers/main/using-diffusers/push_to_hub.md

# Sharing pipelines and models

Share your pipeline or models and schedulers on the Hub with the [PushToHubMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin) class. This class:

1. creates a repository on the Hub
2. saves your model, scheduler, or pipeline files so they can be reloaded later
3. uploads the folder containing these files to the Hub

This guide will show you how to upload your files to the Hub with the [PushToHubMixin](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin) class.

Log in to your Hugging Face account with your access [token](https://huggingface.co/settings/tokens).

<hfoptions id="login">
<hfoption id="notebook">

```py
from huggingface_hub import notebook_login

notebook_login()
```

</hfoption>
<hfoption id="hf CLI">

```bash
hf auth login
```

</hfoption>
</hfoptions>

## Models

To push a model to the Hub, call [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) and specify the repository id of the model.

```py
from diffusers import ControlNetModel

controlnet = ControlNetModel(
    block_out_channels=(32, 64),
    layers_per_block=2,
    in_channels=4,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    cross_attention_dim=32,
    conditioning_embedding_out_channels=(16, 32),
)
controlnet.push_to_hub("my-controlnet-model")
```

The [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method saves the model's `config.json` file, and the weights are automatically saved as safetensors files.

Load the model again with [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model")
```

## Scheduler

To push a scheduler to the Hub, call [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) and specify the repository id of the scheduler.

```py
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
)
scheduler.push_to_hub("my-controlnet-scheduler")
```

The [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) function saves the scheduler's `scheduler_config.json` file to the specified repository.

Load the scheduler again with [from_pretrained()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.SchedulerMixin.from_pretrained).

```py
scheduler = DDIMScheduler.from_pretrained("your-namespace/my-controlnet-scheduler")
```

## Pipeline

To push a pipeline to the Hub, initialize the pipeline components with your desired parameters.

```py
from diffusers import (
    UNet2DConditionModel,
    AutoencoderKL,
    DDIMScheduler,
    StableDiffusionPipeline,
)
from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer

unet = UNet2DConditionModel(
    block_out_channels=(32, 64),
    layers_per_block=2,
    sample_size=32,
    in_channels=4,
    out_channels=4,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=32,
)

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
)

vae = AutoencoderKL(
    block_out_channels=[32, 64],
    in_channels=3,
    out_channels=3,
    down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
    up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
    latent_channels=4,
)

text_encoder_config = CLIPTextConfig(
    bos_token_id=0,
    eos_token_id=2,
    hidden_size=32,
    intermediate_size=37,
    layer_norm_eps=1e-05,
    num_attention_heads=4,
    num_hidden_layers=5,
    pad_token_id=1,
    vocab_size=1000,
)
text_encoder = CLIPTextModel(text_encoder_config)
tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
```

Pass all components to the pipeline and call [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub).

```py
components = {
    "unet": unet,
    "scheduler": scheduler,
    "vae": vae,
    "text_encoder": text_encoder,
    "tokenizer": tokenizer,
    "safety_checker": None,
    "feature_extractor": None,
}

pipeline = StableDiffusionPipeline(**components)
pipeline.push_to_hub("my-pipeline")
```

The [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) method saves each component to a subfolder in the repository. Load the pipeline again with [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline")
```

## Privacy

Set `private=True` in [push_to_hub()](/docs/diffusers/main/en/api/schedulers/overview#diffusers.utils.PushToHubMixin.push_to_hub) to keep a model, scheduler, or pipeline files private.

```py
controlnet.push_to_hub("my-controlnet-model-private", private=True)
```

Private repositories are only visible to you. Other users won't be able to clone the repository and it won't appear in search results. Even if a user has the URL to your private repository, they'll receive a `404 - Sorry, we can't find the page you are looking for`. You must be [logged in](https://huggingface.co/docs/huggingface_hub/quick-start#login) to load a model from a private repository.
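
Once logged in, loading from a private repository uses the same call as for a public one. For example, to load the private model pushed above:

```py
from diffusers import ControlNetModel

# Only works for accounts with access to the private repository.
controlnet = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model-private")
```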

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/push_to_hub.md" />

### Text-to-image
https://huggingface.co/docs/diffusers/main/using-diffusers/conditional_image_generation.md

# Text-to-image


When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k") which is also known as a *prompt*.

From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The *denoising* process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image.

> [!TIP]
> Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog post to learn more about how a latent diffusion model works.

You can generate images from a prompt in 🤗 Diffusers in two steps:

1. Load a checkpoint into the [AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image) class, which automatically detects the appropriate pipeline class to use based on the checkpoint:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
```

2. Pass a prompt to the pipeline to generate an image:

```py
image = pipeline(
	"stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
).images[0]
image
```

<div class="flex justify-center">
	<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-vader.png"/>
</div>

## Popular models

The most common text-to-image models are [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), [Stable Diffusion XL (SDXL)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), and [Kandinsky 2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder). There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let's use the same prompt for each model and compare their results.

### Stable Diffusion v1.5

[Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) is a latent diffusion model initialized from [Stable Diffusion v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0]
image
```

### Stable Diffusion XL

SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional *micro-conditionings* to generate high-quality images with centered subjects. Take a look at the more comprehensive [SDXL](sdxl) guide to learn more about how to use it. In general, you can use SDXL like:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0]
image
```

### Kandinsky 2.2

The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model.

The easiest way to use Kandinsky 2.2 is:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0]
image
```

### ControlNet

ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5). Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth [ControlNet](controlnet) guide to learn more about other conditioning inputs and how to use them.

In this example, let's condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations:

```py
from diffusers import ControlNetModel, AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

controlnet = ControlNetModel.from_pretrained(
	"lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png")
```

Pass the `controlnet` to the [AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image), and provide the prompt and pose estimation image:

```py
pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Stable Diffusion v1.5</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Stable Diffusion XL</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">Kandinsky 2.2</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-3.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">ControlNet (pose conditioning)</figcaption>
  </div>
</div>

## Configure pipeline parameters

There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image's output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters.

### Height and width

The `height` and `width` parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. For example, to create a rectangular image:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
image = pipeline(
	"Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512
).images[0]
image
```

<div class="flex justify-center">
	<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-hw.png"/>
</div>

> [!WARNING]
> Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL's default image size is 1024x1024 and using lower `height` and `width` values may result in lower quality images. Make sure you check the model's API reference first!

### Guidance scale

The `guidance_scale` parameter affects how much the prompt influences image generation. A lower value gives the model "creativity" to generate images that are more loosely related to the prompt. Higher `guidance_scale` values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipeline(
	"Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5
).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-guidance-scale-2.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 2.5</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-guidance-scale-7.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 7.5</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-guidance-scale-10.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 10.5</figcaption>
  </div>
</div>

### Negative prompt

Just like how a prompt guides generation, a *negative prompt* steers the model away from things you don't want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as "low resolution" or "bad details". You can also use a negative prompt to remove or modify the content and style of an image.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipeline(
	prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
	negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy",
).images[0]
image
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-neg-prompt-1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/text2img-neg-prompt-2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">negative_prompt = "astronaut"</figcaption>
  </div>
</div>

### Generator

A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html#generator) object enables reproducibility in a pipeline by setting a manual seed. You can use a `Generator` to generate batches of images and iteratively improve on an image generated from a seed as detailed in the [Improve image quality with deterministic generation](reusing_seeds) guide.

You can set a seed and `Generator` as shown below. Creating an image with a `Generator` should return the same result each time instead of randomly generating a new image.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
generator = torch.Generator(device="cuda").manual_seed(30)
image = pipeline(
	"Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
	generator=generator,
).images[0]
image
```

## Control image generation

There are several ways to exert more control over how an image is generated outside of configuring a pipeline's parameters, such as prompt weighting and ControlNet models.

### Prompt weighting

Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the [Compel](https://github.com/damian0815/compel) library to help you generate the weighted prompt embeddings.

> [!TIP]
> Learn how to create the prompt embeddings in the [Prompt weighting](weighted_prompts) guide. This example focuses on how to use the prompt embeddings in the pipeline.

Once you've created the embeddings, you can pass them to the `prompt_embeds` (and `negative_prompt_embeds` if you're using a negative prompt) parameter in the pipeline.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
	"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipeline(
	prompt_embeds=prompt_embeds, # generated from Compel
	negative_prompt_embeds=negative_prompt_embeds, # generated from Compel
).images[0]
```
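
The embeddings above are assumed to come from Compel. As a rough sketch (the weighting syntax follows Compel's conventions, and the pipeline components are reused from the snippet above):

```py
from compel import Compel

# Build a Compel processor from the pipeline's tokenizer and text encoder.
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# "++" upweights a concept and "--" downweights it.
prompt_embeds = compel("Astronaut in a jungle++, cold color palette, muted colors, detailed, 8k")
negative_prompt_embeds = compel("blurry--, low quality")

# Pad both tensors to the same length so they can be passed to the pipeline together.
[prompt_embeds, negative_prompt_embeds] = compel.pad_conditioning_tensors_to_same_length(
    [prompt_embeds, negative_prompt_embeds]
)
```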

### ControlNet

As you saw in the [ControlNet](#controlnet) section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it'll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a [MultiControlNet](controlnet#multicontrolnet)!

There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive [ControlNet](controlnet) guide to learn how you can use these models.
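
For instance, a depth-conditioned ControlNet follows the same pattern as the pose example earlier. A sketch under assumptions (the depth ControlNet repository id and the depth-map path are illustrative):

```py
import torch
from diffusers import ControlNetModel, AutoPipelineForText2Image
from diffusers.utils import load_image

# Assumed depth ControlNet checkpoint for Stable Diffusion v1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Any grayscale depth map; replace the placeholder with your own image path or URL.
depth_map = load_image("path/or/url/to/depth_map.png")

image = pipeline(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=depth_map
).images[0]
```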

## Optimize

Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. But this doesn't mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed, or offload the model to the CPU to save even more memory.

PyTorch 2.0 also supports a more memory-efficient attention mechanism called [*scaled dot product attention*](../optimization/fp16#scaled-dot-product-attention) that is automatically enabled if you're using PyTorch 2.0. You can combine this with [`torch.compile`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) to speed your code up even more:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda")
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

For more tips on how to optimize your code to save memory and speed up inference, read the [Accelerate inference](../optimization/fp16) and [Reduce memory usage](../optimization/memory) guides.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/conditional_image_generation.md" />

### Image-to-image
https://huggingface.co/docs/diffusers/main/using-diffusers/img2img.md

# Image-to-image


Image-to-image is similar to [text-to-image](conditional_image_generation), but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image.

With 🤗 Diffusers, this is as easy as 1-2-3:

1. Load a checkpoint into the [AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image) class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

> [!TIP]
> You'll notice throughout the guide, we use [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) and [enable_xformers_memory_efficient_attention()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_xformers_memory_efficient_attention), to save memory and increase inference speed. If you're using PyTorch 2.0, then you don't need to call [enable_xformers_memory_efficient_attention()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_xformers_memory_efficient_attention) on your pipeline because it'll already be using PyTorch 2.0's native [scaled-dot product attention](../optimization/fp16#scaled-dot-product-attention).

2. Load an image to pass to the pipeline:

```py
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
```

3. Pass a prompt and image to the pipeline to generate an image:

```py
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Popular models

The most popular image-to-image models are [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), [Stable Diffusion XL (SDXL)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), and [Kandinsky 2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder). The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let's take a quick look at how to use each of these models and compare their results.

### Stable Diffusion v1.5

Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you'll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdv1.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

### Stable Diffusion XL (SDXL)

SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model's output. Read the [SDXL](sdxl) guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, strength=0.5).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

### Kandinsky 2.2

The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images.

The simplest way to use Kandinsky 2.2 is:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-kandinsky.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Configure pipeline parameters

There are several important parameters you can configure in the pipeline that'll affect the image generation process and image quality. Let's take a closer look at what these parameters do and how changing them affects the output.

### Strength

`strength` is one of the most important parameters to consider and it'll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words:

- 📈 a higher `strength` value gives the model more "creativity" to generate an image that's different from the initial image; a `strength` value of 1.0 means the initial image is more or less ignored
- 📉 a lower `strength` value means the generated image is more similar to the initial image

The `strength` and `num_inference_steps` parameters are related because `strength` determines the number of noise steps to add. For example, if the `num_inference_steps` is 50 and `strength` is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image.
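
To make the relationship concrete, the number of denoising steps that actually run can be computed directly:

```py
num_inference_steps = 50
strength = 0.8
# number of denoising steps that actually run in image-to-image
print(int(num_inference_steps * strength))  # 40
```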

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, strength=0.8).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-strength-0.4.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 0.4</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-strength-0.6.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 0.6</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-strength-1.0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">strength = 1.0</figcaption>
  </div>
</div>

### Guidance scale

The `guidance_scale` parameter is used to control how closely aligned the generated image and text prompt are. A higher `guidance_scale` value means your generated image is more aligned with the prompt, while a lower `guidance_scale` value means your generated image has more space to deviate from the prompt.

You can combine `guidance_scale` with `strength` for even more precise control over how expressive the model is. For example, combine a high `strength` and high `guidance_scale` for maximum creativity, or use a combination of low `strength` and low `guidance_scale` to generate an image that resembles the initial image but is not as strictly bound to the prompt.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-guidance-0.1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 0.1</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-guidance-3.0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 5.0</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-guidance-7.5.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">guidance_scale = 10.0</figcaption>
  </div>
</div>

### Negative prompt

A negative prompt conditions the model to *not* include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like "poor details" or "blurry" to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"

# pass prompt and image to pipeline
image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-negative-1.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-negative-2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">negative_prompt = "jungle"</figcaption>
  </div>
</div>

## Chained image-to-image pipelines

There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines.

### Text-to-image-to-image

Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let's chain a Stable Diffusion and a Kandinsky model.

Start by generating an image with the text-to-image pipeline:

```py
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch
from diffusers.utils import make_image_grid

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]
text2image
```

Now you can pass this generated image to the image-to-image pipeline:

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0]
make_image_grid([text2image, image2image], rows=1, cols=2)
```

### Image-to-image-to-image

You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image.

Start by generating an image:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, output_type="latent").images[0]
```

> [!TIP]
> It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.
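
One way to guarantee the chained pipelines share a VAE is to reuse the first pipeline's VAE when loading the next one. A minimal sketch (components passed to `from_pretrained` override the checkpoint's own):

```py
# reuse the first pipeline's VAE so the next pipeline decodes the latents identically
pipeline_comic = AutoPipelineForImage2Image.from_pretrained(
    "ogkalu/Comic-Diffusion", vae=pipeline.vae, torch_dtype=torch.float16
)
```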

Pass the latent output from this pipeline to the next pipeline to generate an image in a [comic book art style](https://huggingface.co/ogkalu/Comic-Diffusion):

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "ogkalu/Comic-Diffusion", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# need to include the token "charliebo artstyle" in the prompt to use this checkpoint
image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0]
```

Repeat one more time to generate the final image in a [pixel art style](https://huggingface.co/kohbanye/pixel-art-style):

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kohbanye/pixel-art-style", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# need to include the token "pixelartstyle" in the prompt to use this checkpoint
image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

### Image-to-upscaler-to-super-resolution

Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image.

Start with an image-to-image pipeline:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0]
```

> [!TIP]
> It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in *latent* space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.

Chain it to an upscaler pipeline to increase the image resolution:

```py
from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, use_safetensors=True
)
upscaler.enable_model_cpu_offload()
upscaler.enable_xformers_memory_efficient_attention()

image_2 = upscaler(prompt, image=image_1).images[0]
```

Finally, chain it to a super-resolution pipeline to further enhance the resolution:

```py
from diffusers import StableDiffusionUpscalePipeline

super_res = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
super_res.enable_model_cpu_offload()
super_res.enable_xformers_memory_efficient_attention()

image_3 = super_res(prompt, image=image_2).images[0]
make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2)
```

## Control image generation

Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the `negative_prompt` to partially control image generation, there are more robust methods like prompt weighting and ControlNets.

### Prompt weighting

Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", you can choose to increase or decrease the embeddings of "astronaut" and "jungle". The [Compel](https://github.com/damian0815/compel) library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the [Prompt weighting](weighted_prompts) guide.

[AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image) has a `prompt_embeds` (and `negative_prompt_embeds` if you're using a negative prompt) parameter where you can pass the embeddings, which replace the `prompt` parameter.

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare the same initial image used in the earlier examples
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds, # generated from Compel
    image=init_image,
).images[0]
```
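
The `prompt_embeds` and `negative_prompt_embeds` above might be created with Compel along these lines (a sketch; see the [Prompt weighting](weighted_prompts) guide and the Compel documentation for exact usage):

```py
from compel import Compel

# build the embeddings from the pipeline's own tokenizer and text encoder
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)
prompt_embeds = compel("Astronaut in a jungle++, cold color palette, muted colors, detailed, 8k")
negative_prompt_embeds = compel("ugly, deformed, disfigured, poor details, bad anatomy")
```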

### ControlNet

ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it.

For example, let's condition an image with a depth map to keep the spatial information in the image.

```py
from diffusers.utils import load_image, make_image_grid

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
init_image = init_image.resize((958, 960)) # resize to depth image dimensions
depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png")
make_image_grid([init_image, depth_image], rows=1, cols=2)
```

Load a ControlNet model conditioned on depth maps and the [AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image):

```py
from diffusers import ControlNetModel, AutoPipelineForImage2Image
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

Now generate a new image conditioned on the depth map, initial image, and prompt:

```py
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0]
make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3)
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">depth image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">ControlNet image</figcaption>
  </div>
</div>

Let's apply a new [style](https://huggingface.co/nitrosocke/elden-ring-diffusion) to the image generated from the ControlNet by chaining it with an image-to-image pipeline:

```py
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16,
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt
negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"

image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0]
make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2)
```

<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-elden-ring.png">
</div>

## Optimize

Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0's [scaled-dot product attention](../optimization/fp16#scaled-dot-product-attention) or [xFormers](../optimization/xformers) (you can use one or the other, but there's no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU.

```diff
+ pipeline.enable_model_cpu_offload()
+ pipeline.enable_xformers_memory_efficient_attention()
```

With [`torch.compile`](../optimization/fp16#torchcompile), you can boost your inference speed even more by wrapping your UNet with it:

```py
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

To learn more, take a look at the [Reduce memory usage](../optimization/memory) and [Accelerate inference](../optimization/fp16) guides.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/img2img.md" />

### Reproducibility
https://huggingface.co/docs/diffusers/main/using-diffusers/reusing_seeds.md

# Reproducibility

Diffusion is a random process that generates a different output every time. For certain situations like testing and replicating results, you want to generate the same result each time, across releases and platforms within a certain tolerance range.

This guide will show you how to control sources of randomness and enable deterministic algorithms.

## Generator

Pipelines rely on [torch.randn](https://pytorch.org/docs/stable/generated/torch.randn.html), which uses a different random seed each time, to create the initial noisy tensors. To generate the same output on a CPU or GPU, use a [Generator](https://docs.pytorch.org/docs/stable/generated/torch.Generator.html) to manage how random values are generated.

> [!TIP]
> If reproducibility is important to your use case, we recommend always using a CPU `Generator`. The performance loss is often negligible and you'll generate more similar values.

<hfoptions id="generator">
<hfoption id="GPU">

The GPU uses a different random number generator than the CPU. Diffusers solves this issue with the [randn_tensor()](/docs/diffusers/main/en/api/utilities#diffusers.utils.torch_utils.randn_tensor) function, which creates the random tensor on the CPU and then moves it to the GPU. This function is used everywhere inside the pipeline, so you don't need to call it explicitly.

Use [manual_seed](https://docs.pytorch.org/docs/stable/generated/torch.manual_seed.html) as shown below to set a seed.

```py
import torch
import numpy as np
from diffusers import DDIMPipeline

ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32", device_map="cuda")
generator = torch.manual_seed(0)
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```

</hfoption>
<hfoption id="CPU">

Set `device="cpu"` in the `Generator` and use [manual_seed](https://docs.pytorch.org/docs/stable/generated/torch.manual_seed.html) to set a seed for generating random numbers.

```py
import torch
import numpy as np
from diffusers import DDIMPipeline

ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
generator = torch.Generator(device="cpu").manual_seed(0)
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
print(np.abs(image).sum())
```

</hfoption>
</hfoptions>

The `Generator` object should be passed to the pipeline instead of an integer seed. `Generator` maintains a *random state* that is consumed and modified when used. Once consumed, the same `Generator` object produces different results in subsequent calls, even across different pipelines, because its *state* has changed.

```diff
generator = torch.manual_seed(0)

for _ in range(5):
-    image = pipeline(prompt, generator=generator)
+    image = pipeline(prompt, generator=torch.manual_seed(0))
```
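
To see the state advancing in isolation, the same effect can be reproduced with plain `torch.randn` calls (no pipeline needed):

```py
import torch

g = torch.manual_seed(0)
a = torch.randn(2, generator=g)
b = torch.randn(2, generator=g)                      # different from `a`: the state advanced
c = torch.randn(2, generator=torch.manual_seed(0))   # identical to `a`: fresh state, same seed
print(torch.equal(a, c), torch.equal(a, b))          # True False
```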

## Deterministic algorithms

PyTorch supports [deterministic algorithms](https://docs.pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms) - where available - for certain operations so they produce the same results. Deterministic algorithms may be slower and decrease performance.

Use Diffusers' [enable_full_determinism](https://github.com/huggingface/diffusers/blob/142f353e1c638ff1d20bd798402b68f72c1ebbdd/src/diffusers/utils/testing_utils.py#L861) function to enable deterministic algorithms.

```py
import torch
from diffusers.utils.testing_utils import enable_full_determinism

enable_full_determinism()
```

Under the hood, `enable_full_determinism` works by (a rough manual equivalent is sketched below):

- Setting the environment variable [CUBLAS_WORKSPACE_CONFIG](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility) to `:16:8` to only use one buffer size during runtime. Non-deterministic behavior occurs when operations are used in more than one CUDA stream.
- Disabling benchmarking to find the fastest convolution operation by setting `torch.backends.cudnn.benchmark=False`. Non-deterministic behavior occurs because the benchmark may select different algorithms each time depending on hardware or benchmarking noise.
- Disabling TensorFloat32 (TF32) operations in favor of more precise and consistent full-precision operations.
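
A rough manual equivalent looks like this (a sketch; the Diffusers helper may set additional flags):

```py
import os
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"   # single cuBLAS workspace size
torch.use_deterministic_algorithms(True)          # error out on non-deterministic ops
torch.backends.cudnn.benchmark = False            # don't auto-tune convolution algorithms
torch.backends.cudnn.deterministic = True
torch.backends.cuda.matmul.allow_tf32 = False     # disable TF32 matmuls
torch.backends.cudnn.allow_tf32 = False
```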


## Resources

We strongly recommend reading PyTorch's developer notes about [Reproducibility](https://docs.pytorch.org/docs/stable/notes/randomness.html). You can try to limit randomness, but it is not *guaranteed* even with an identical seed.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/reusing_seeds.md" />

### Stable Diffusion XL Turbo
https://huggingface.co/docs/diffusers/main/using-diffusers/sdxl_turbo.md

# Stable Diffusion XL Turbo


SDXL Turbo is an adversarial time-distilled [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) (SDXL) model capable
of running inference in as little as 1 step.

This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image.

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```

## Load model checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) method:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipeline = pipeline.to("cuda")
```

You can also use the [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) method to load a model checkpoint stored in a single file format (`.ckpt` or `.safetensors`) from the Hub or locally. For this loading method, you need to set `timestep_spacing="trailing"` (feel free to experiment with the other scheduler config values to get better results):

```py
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler
import torch

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors",
    torch_dtype=torch.float16, variant="fp16")
pipeline = pipeline.to("cuda")
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config, timestep_spacing="trailing")
```

## Text-to-image

For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the `height` and `width` parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so.

Make sure to set `guidance_scale` to 0.0 to disable classifier-free guidance, as the model was trained without it. A single inference step is enough to generate high quality images, and increasing the number of steps to 2, 3 or 4 should improve image quality further.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipeline_text2image = pipeline_text2image.to("cuda")

prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."

image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sdxl-turbo-text2img.png" alt="generated image of a racoon in a robe"/>
</div>

## Image-to-image

For image-to-image generation, make sure that `num_inference_steps * strength` is larger than or equal to 1. The image-to-image pipeline runs for `int(num_inference_steps * strength)` steps, e.g. `int(2 * 0.5) = 1` step in the example below.

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

# use from_pipe to avoid consuming additional memory when loading a checkpoint
pipeline_image2image = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda")

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
init_image = init_image.resize((512, 512))

prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

image = pipeline_image2image(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sdxl-turbo-img2img.png" alt="Image-to-image generation sample using SDXL Turbo"/>
</div>

## Speed-up SDXL Turbo even more

- Compile the UNet if you are using PyTorch version 2.0 or higher. The first inference run will be very slow, but subsequent ones will be much faster.

```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

- When using the default VAE, keep it in `float32` to avoid costly `dtype` conversions before and after each generation. You only need to do this once before your first generation:

```py
pipe.upcast_vae()
```

As an alternative, you can also use a [16-bit VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) created by community member [`@madebyollin`](https://huggingface.co/madebyollin) that does not need to be upcast to `float32`.
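
Loading it might look like this (a sketch; the fixed VAE is loaded with `AutoencoderKL` and passed in as the pipeline's `vae` component):

```py
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", vae=vae, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
```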


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/sdxl_turbo.md" />

### Stable Video Diffusion
https://huggingface.co/docs/diffusers/main/using-diffusers/svd.md

# Stable Video Diffusion


[Stable Video Diffusion (SVD)](https://huggingface.co/papers/2311.15127) is a powerful image-to-video generation model that can generate 2-4 second high resolution (576x1024) videos conditioned on an input image.

This guide will show you how to use SVD to generate short videos from images.

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
!pip install -q -U diffusers transformers accelerate
```

There are two variants of this model, [SVD](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid) and [SVD-XT](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt). The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames.

You'll use the SVD-XT checkpoint for this guide.

```python
import torch

from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Load the conditioning image
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

export_to_video(frames, "generated.mp4", fps=7)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"source image of a rocket"</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/output_rocket.gif"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">"generated video from source image"</figcaption>
  </div>
</div>

## torch.compile

You can gain a 20-25% speedup at the expense of slightly increased memory by [compiling](../optimization/fp16#torchcompile) the UNet.

```diff
- pipe.enable_model_cpu_offload()
+ pipe.to("cuda")
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

## Reduce memory usage

Video generation is very memory intensive because you're essentially generating `num_frames` all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade off inference speed for a lower memory requirement:

- enable model offloading: each component of the pipeline is offloaded to the CPU once it's not needed anymore.
- enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size.
- reduce `decode_chunk_size`: the VAE decodes frames in chunks instead of decoding them all together. Setting `decode_chunk_size=1` decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering.

```diff
- pipe.enable_model_cpu_offload()
- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
+ pipe.enable_model_cpu_offload()
+ pipe.unet.enable_forward_chunking()
+ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
```

Using all these tricks together should lower the memory requirement to less than 8GB VRAM.

## Micro-conditioning

Stable Video Diffusion also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video:

- `fps`: the frames per second of the generated video.
- `motion_bucket_id`: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id increases the motion of the generated video.
- `noise_aug_strength`: the amount of noise added to the conditioning image. The higher the value, the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video.

For example, to generate a video with more motion, use the `motion_bucket_id` and `noise_aug_strength` micro-conditioning parameters:

```python
import torch

from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
  "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Load the conditioning image
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/output_rocket_with_conditions.gif)


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/svd.md" />

### T2I-Adapter
https://huggingface.co/docs/diffusers/main/using-diffusers/t2i_adapter.md

# T2I-Adapter

[T2I-Adapter](https://huggingface.co/papers/2302.08453) is an adapter that enables controllable generation like [ControlNet](./controlnet). A T2I-Adapter works by learning a *mapping* between a control signal (for example, a depth map) and a pretrained model's internal knowledge. The adapter is plugged in to the base model to provide extra guidance based on the control signal during generation.

Load a T2I-Adapter conditioned on a specific control, such as canny edge, and pass it to the pipeline in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
import torch
from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, AutoencoderKL

t2i_adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0",
    torch_dtype=torch.float16,
)
```

Generate a canny image with [opencv-python](https://github.com/opencv/opencv-python).

```py
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

original_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"
)

image = np.array(original_image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
```

Pass the canny image to the pipeline to generate an image.

```py
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=t2i_adapter,
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = """
A photorealistic overhead image of a cat reclining sideways in a flamingo pool floatie holding a margarita. 
The cat is floating leisurely in the pool and completely relaxed and happy.
"""

pipeline(
    prompt, 
    image=canny_image,
    num_inference_steps=100, 
    guidance_scale=10,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png" width="300" alt="Generated image (prompt only)"/>
    <figcaption style="text-align: center;">original image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2i-canny-cat-generated.png" width="300" alt="Generated image (ControlNet + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

## MultiAdapter

You can compose multiple controls, such as a canny image and a depth map, with the `MultiAdapter` class.

The example below composes a canny image and depth map.

Load the control images and T2I-Adapters as a list.

```py
import torch
from diffusers.utils import load_image
from diffusers import StableDiffusionXLAdapterPipeline, AutoencoderKL, MultiAdapter, T2IAdapter

canny_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png"
)
depth_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_depth_image.png"
)
controls = [canny_image, depth_image]
prompt = ["""
a relaxed rabbit sitting on a striped towel next to a pool with a tropical drink nearby, 
bright sunny day, vacation scene, 35mm photograph, film, professional, 4k, highly detailed
"""]

adapters = MultiAdapter(
    [
        T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16),
        T2IAdapter.from_pretrained("TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16),
    ]
)
```

Pass the adapters, prompt, and control images to [StableDiffusionXLAdapterPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/adapter#diffusers.StableDiffusionXLAdapterPipeline). Use the `adapter_conditioning_scale` parameter to determine how much weight to assign to each control.

```py
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    vae=vae,
    adapter=adapters,
).to("cuda")

pipeline(
    prompt,
    image=controls,
    height=1024,
    width=1024,
    adapter_conditioning_scale=[0.7, 0.7]
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Generated image (prompt only)"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_depth_image.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">depth map</figcaption>
  </figure>
  <figure> 
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/t2i-multi-rabbit.png" width="300" alt="Generated image (ControlNet + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/t2i_adapter.md" />

### Batch inference
https://huggingface.co/docs/diffusers/main/using-diffusers/batched_inference.md

# Batch inference

Batch inference processes multiple prompts at a time to increase throughput. It is more efficient because processing multiple prompts at once maximizes GPU usage versus processing a single prompt and underutilizing the GPU.

The downside is increased latency because you must wait for the entire batch to complete, and more GPU memory is required for large batches.

For text-to-image, pass a list of prompts to the pipeline and for image-to-image, pass a list of images and prompts to the pipeline. The example below demonstrates batched text-to-image inference.

```py
import torch
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)

prompts = [
    "Cinematic shot of a cozy coffee shop interior, warm pastel light streaming through a window where a cat rests. Shallow depth of field, glowing cups in soft focus, dreamy lofi-inspired mood, nostalgic tones, framed like a quiet film scene.",
    "Polaroid-style photograph of a cozy coffee shop interior, bathed in warm pastel light. A cat sits on the windowsill near steaming mugs. Soft, slightly faded tones and dreamy blur evoke nostalgia, a lofi mood, and the intimate, imperfect charm of instant film.",
    "Soft watercolor illustration of a cozy coffee shop interior, pastel washes of color filling the space. A cat rests peacefully on the windowsill as warm light glows through. Gentle brushstrokes create a dreamy, lofi-inspired atmosphere with whimsical textures and nostalgic calm.",
    "Isometric pixel-art illustration of a cozy coffee shop interior in detailed 8-bit style. Warm pastel light fills the space as a cat rests on the windowsill. Blocky furniture and tiny mugs add charm, low-res retro graphics enhance the nostalgic, lofi-inspired game aesthetic."
]

images = pipeline(
    prompt=prompts,
).images

fig, axes = plt.subplots(2, 2, figsize=(12, 12))
axes = axes.flatten()

for i, image in enumerate(images):
    axes[i].imshow(image)
    axes[i].set_title(f"Image {i+1}")
    axes[i].axis('off')

plt.tight_layout()
plt.show()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/batch-inference.png"/>
</div>
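
For image-to-image, the pattern is the same: pass a list of prompts together with a matching list of initial images. A minimal sketch, reusing the pipeline and prompts above (the initial image URL here is only an example):

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# reuse the text-to-image components without allocating extra memory
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline)

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
)
# one initial image per prompt
images = pipeline_img2img(
    prompt=prompts[:2],
    image=[init_image] * 2,
    strength=0.6,
).images
```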

To generate multiple variations of one prompt, use the `num_images_per_prompt` argument.

```py
import torch
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)

prompt="""
Isometric pixel-art illustration of a cozy coffee shop interior in detailed 8-bit style. Warm pastel light fills the
space as a cat rests on the windowsill. Blocky furniture and tiny mugs add charm, low-res retro graphics enhance the
nostalgic, lofi-inspired game aesthetic.
"""

images = pipeline(
    prompt=prompt,
    num_images_per_prompt=4
).images

fig, axes = plt.subplots(2, 2, figsize=(12, 12))
axes = axes.flatten()

for i, image in enumerate(images):
    axes[i].imshow(image)
    axes[i].set_title(f"Image {i+1}")
    axes[i].axis('off')

plt.tight_layout()
plt.show()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/batch-inference-2.png"/>
</div>

Combine both approaches to generate different variations of different prompts.

```py
images = pipeline(
    prompt=prompts,
    num_images_per_prompt=2,
).images

fig, axes = plt.subplots(2, 4, figsize=(12, 12))
axes = axes.flatten()

for i, image in enumerate(images):
    axes[i].imshow(image)
    axes[i].set_title(f"Image {i+1}")
    axes[i].axis('off')

plt.tight_layout()
plt.show()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/batch-inference-3.png"/>
</div>

## Deterministic generation

Enable reproducible batch generation by passing a list of [Generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) objects to the pipeline and tying each `Generator` to a seed so you can reuse it.

> [!TIP]
> Refer to the [Reproducibility](./reusing_seeds) docs to learn more about deterministic algorithms and the `Generator` object.

Use a list comprehension to iterate over the batch size specified in `range()` to create a unique `Generator` object for each image in the batch. Don't multiply the `Generator` by the batch size because that only creates one `Generator` object that is used sequentially for each image in the batch.

```py
generator = [torch.Generator(device="cuda").manual_seed(0) for _ in range(3)]
```

Pass the `generator` to the pipeline.

```py
import torch
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda"
)

generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)]
prompts = [
    "Cinematic shot of a cozy coffee shop interior, warm pastel light streaming through a window where a cat rests. Shallow depth of field, glowing cups in soft focus, dreamy lofi-inspired mood, nostalgic tones, framed like a quiet film scene.",
    "Polaroid-style photograph of a cozy coffee shop interior, bathed in warm pastel light. A cat sits on the windowsill near steaming mugs. Soft, slightly faded tones and dreamy blur evoke nostalgia, a lofi mood, and the intimate, imperfect charm of instant film.",
    "Soft watercolor illustration of a cozy coffee shop interior, pastel washes of color filling the space. A cat rests peacefully on the windowsill as warm light glows through. Gentle brushstrokes create a dreamy, lofi-inspired atmosphere with whimsical textures and nostalgic calm.",
    "Isometric pixel-art illustration of a cozy coffee shop interior in detailed 8-bit style. Warm pastel light fills the space as a cat rests on the windowsill. Blocky furniture and tiny mugs add charm, low-res retro graphics enhance the nostalgic, lofi-inspired game aesthetic."
]

images = pipeline(
    prompt=prompts,
    generator=generator
).images

fig, axes = plt.subplots(2, 2, figsize=(12, 12))
axes = axes.flatten()

for i, image in enumerate(images):
    axes[i].imshow(image)
    axes[i].set_title(f"Image {i+1}")
    axes[i].axis('off')

plt.tight_layout()
plt.show()
```

You can use this to select an image associated with a seed and iteratively improve on it by crafting a more detailed prompt.
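
For example, if the image generated with seed `1` looked most promising, a follow-up call might reuse that seed with a more detailed prompt (a sketch building on the example above):

```py
# reuse the seed of the most promising image with a refined prompt
generator = torch.Generator(device="cuda").manual_seed(1)
improved_prompt = prompts[1] + " Warm golden-hour light, steam rising from the mugs, visible film grain."
image = pipeline(prompt=improved_prompt, generator=generator).images[0]
```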

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/batched_inference.md" />

### Community pipelines and components
https://huggingface.co/docs/diffusers/main/using-diffusers/custom_pipeline_overview.md

# Community pipelines and components

Community pipelines are [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) classes that are different from the original paper implementation. They provide additional functionality or extend the original pipeline implementation.

> [!TIP]
> Check out the community pipelines in [diffusers/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) with inference and training examples for how to use them.

Community pipelines are either stored on the Hub or in Diffusers' GitHub repository. Hub pipelines are completely customizable (scheduler, models, pipeline code, etc.) while GitHub pipelines are limited to only the custom pipeline code. Further compare the two community pipeline types in the table below.

|  | GitHub | Hub |
|---|---|---|
| Usage | Same. | Same. |
| Review process | Open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging. This option is slower. | Upload directly to a Hub repository without a review. This is the fastest option. |
| Visibility | Included in the official Diffusers repository and docs. | Included on your Hub profile and relies on your own usage and promotion to gain visibility. |

## custom_pipeline

Load either community pipeline types by passing the `custom_pipeline` argument to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    custom_pipeline="pipeline_stable_diffusion_3_instruct_pix2pix",
    torch_dtype=torch.float16,
    device_map="cuda"
)
```

Add the `custom_revision` argument to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to load a community pipeline from a specific version (for example, `v0.30.0` or `main`). By default, community pipelines are loaded from the latest stable version of Diffusers.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    custom_pipeline="pipeline_stable_diffusion_3_instruct_pix2pix",
    custom_revision="main",
    torch_dtype=torch.float16,
    device_map="cuda"
)
```

> [!WARNING]
> While the Hugging Face Hub [scans](https://huggingface.co/docs/hub/security-malware) files, you should still inspect the Hub pipeline code and make sure it is safe.

There are a few ways to load a community pipeline.

- Pass a path to `custom_pipeline` to load a local community pipeline. The directory must contain a `pipeline.py` file containing the pipeline class.

  ```py
  import torch
  from diffusers import DiffusionPipeline

  pipeline = DiffusionPipeline.from_pretrained(
      "stabilityai/stable-diffusion-3-medium-diffusers",
      custom_pipeline="path/to/pipeline_directory",
      torch_dtype=torch.float16,
      device_map="cuda"
  )
  ```

- The `custom_pipeline` argument is also supported by [from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe), which is useful for [reusing pipelines](./loading#reuse-a-pipeline) without using additional memory. It limits the memory usage to only the largest pipeline loaded.

  ```py
  import torch
  from diffusers import DiffusionPipeline

  pipeline_sd = DiffusionPipeline.from_pretrained("emilianJR/CyberRealistic_V3", torch_dtype=torch.float16, device_map="cuda")
  pipeline_lpw = DiffusionPipeline.from_pipe(
      pipeline_sd, custom_pipeline="lpw_stable_diffusion", device_map="cuda"
  )
  ```

  The [from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe) method is especially useful for loading community pipelines because many of them don't have pretrained weights. Community pipelines generally add a feature on top of an existing pipeline.

## Community components

Community components let users build pipelines with custom transformers, UNets, VAEs, and schedulers not supported by Diffusers. These components require Python module implementations. 

This section shows how users can use community components to build a community pipeline using [showlab/show-1-base](https://huggingface.co/showlab/show-1-base) as an example.

1. Load the required components, the scheduler and image processor. The text encoder is generally imported from [Transformers](https://huggingface.co/docs/transformers/index).

```python
from transformers import T5Tokenizer, T5EncoderModel, CLIPImageProcessor
from diffusers import DPMSolverMultistepScheduler

pipeline_id = "showlab/show-1-base"
tokenizer = T5Tokenizer.from_pretrained(pipeline_id, subfolder="tokenizer")
text_encoder = T5EncoderModel.from_pretrained(pipeline_id, subfolder="text_encoder")
scheduler = DPMSolverMultistepScheduler.from_pretrained(pipeline_id, subfolder="scheduler")
feature_extractor = CLIPImageProcessor.from_pretrained(pipeline_id, subfolder="feature_extractor")
```

> [!WARNING]
> In steps 2 and 3, the custom [UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py) and [pipeline](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py) implementations must match the format shown in their files for this example to work.

2. Load a [custom UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py), which is already implemented in [showone_unet_3d_condition.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py). The class is renamed from [UNet3DConditionModel](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.UNet3DConditionModel) to `ShowOneUNet3DConditionModel` because [UNet3DConditionModel](/docs/diffusers/main/en/api/models/unet3d-cond#diffusers.UNet3DConditionModel) already exists in Diffusers. Any components required by the `ShowOneUNet3DConditionModel` class should be placed in `showone_unet_3d_condition.py`.

```python
from showone_unet_3d_condition import ShowOneUNet3DConditionModel

unet = ShowOneUNet3DConditionModel.from_pretrained(pipeline_id, subfolder="unet")
```

3. Load the custom pipeline code (already implemented in [pipeline_t2v_base_pixel.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py)). This script contains a custom `TextToVideoIFPipeline` class for generating videos from text. Like the custom UNet, any code required for `TextToVideoIFPipeline` should be placed in `pipeline_t2v_base_pixel.py`.

Initialize `TextToVideoIFPipeline` with `ShowOneUNet3DConditionModel`.

```python
import torch
from pipeline_t2v_base_pixel import TextToVideoIFPipeline

pipeline = TextToVideoIFPipeline(
    unet=unet,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    scheduler=scheduler,
    feature_extractor=feature_extractor,
    device_map="cuda",
    torch_dtype=torch.float16
)
```

4. Push the pipeline to the Hub to share with the community.

```python
pipeline.push_to_hub("custom-t2v-pipeline")
```

After the pipeline is successfully pushed, make the following changes.

- Change the `_class_name` attribute in [model_index.json](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/model_index.json#L2) to `"pipeline_t2v_base_pixel"` and `"TextToVideoIFPipeline"`.
- Upload `showone_unet_3d_condition.py` to the [unet](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) subfolder.
- Upload `pipeline_t2v_base_pixel.py` to the pipeline [repository](https://huggingface.co/sayakpaul/show-1-base-with-code/tree/main).
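
You can upload both files from the Hub UI, or programmatically with `huggingface_hub`. The sketch below is one possible way to do it and assumes a hypothetical repository id; replace it with the repository created by `push_to_hub`.

```python
from huggingface_hub import upload_file

# hypothetical repository id - replace with the repository created by push_to_hub
repo_id = "<your-username>/custom-t2v-pipeline"

# upload the custom UNet module into the unet subfolder
upload_file(
    path_or_fileobj="showone_unet_3d_condition.py",
    path_in_repo="unet/showone_unet_3d_condition.py",
    repo_id=repo_id,
)

# upload the custom pipeline module to the repository root
upload_file(
    path_or_fileobj="pipeline_t2v_base_pixel.py",
    path_in_repo="pipeline_t2v_base_pixel.py",
    repo_id=repo_id,
)
```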

To run inference, add the `trust_remote_code` argument while initializing the pipeline to handle all the "magic" behind the scenes.

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "<change-username>/<change-id>", trust_remote_code=True, torch_dtype=torch.float16
)
```

> [!WARNING]
> As an additional precaution with `trust_remote_code=True`, we strongly encourage passing a commit hash to the `revision` argument in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to make sure the code hasn't been updated with new malicious code (unless you fully trust the model owners).
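
A minimal sketch of pinning the remote code to a specific revision; the repository id and commit hash below are placeholders you should replace with your own values.

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "<change-username>/<change-id>",
    trust_remote_code=True,
    revision="<commit-hash>",  # placeholder - pin to a commit you have reviewed
    torch_dtype=torch.float16,
)
```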

## Resources

- Take a look at Issue [#841](https://github.com/huggingface/diffusers/issues/841) for more context about why we're adding community pipelines to help everyone easily share their work without being slowed down.
- Check out the [stabilityai/japanese-stable-diffusion-xl](https://huggingface.co/stabilityai/japanese-stable-diffusion-xl/) repository for an additional example of a community pipeline that also uses the `trust_remote_code` feature.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/custom_pipeline_overview.md" />

### Unconditional image generation
https://huggingface.co/docs/diffusers/main/using-diffusers/unconditional_image_generation.md

# Unconditional image generation


Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image.

To get started, use the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) to load the [anton-l/ddpm-butterflies-128](https://huggingface.co/anton-l/ddpm-butterflies-128) checkpoint to generate images of butterflies. The [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) downloads and caches all the model components required to generate an image.

```py
from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = generator().images[0]
image
```

> [!TIP]
> Want to generate images of something else? Take a look at the training [guide](../training/unconditional_training) to learn how to train a model to generate your own images.

The output image is a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object that can be saved:

```py
image.save("generated_image.png")
```

You can also try experimenting with the `num_inference_steps` parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it'll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality.

```py
image = generator(num_inference_steps=100).images[0]
image
```

Try out the Space below to generate an image of a butterfly!

<iframe
	src="https://stevhliu-unconditional-image-generation.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/unconditional_image_generation.md" />

### Video generation
https://huggingface.co/docs/diffusers/main/using-diffusers/text-img2vid.md

# Video generation

Video generation models extend image generation (which can be considered a 1-frame video) to also process data related to space and time. Making sure all of this data - text, space, time - remains consistent and aligned from frame to frame is a big challenge in generating long and high-resolution videos.

Modern video models tackle this challenge with the diffusion transformer (DiT) architecture, which reduces computational costs and allows more efficient scaling to larger and higher-quality image and video data.

Check out what some of these video models are capable of below.

<hfoptions id="popular models">
<hfoption id="Wan2.1">

```py
# pip install ftfy
import torch
import numpy as np
from diffusers import AutoModel, WanPipeline
from diffusers.hooks.group_offloading import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import UMT5EncoderModel

text_encoder = UMT5EncoderModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16)
vae = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
transformer = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)

# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
apply_group_offloading(text_encoder,
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=4
)
transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)

pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    vae=vae,
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

prompt = """
The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic 
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, 
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, 
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

</hfoption>
<hfoption id="HunyuanVideo">

```py
import torch
from diffusers import AutoModel, HunyuanVideoPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.utils import export_to_video

# quantize weights to int4 with bitsandbytes
pipeline_quant_config = PipelineQuantizationConfig(
  quant_backend="bitsandbytes_4bit",
  quant_kwargs={
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_compute_dtype": torch.bfloat16
    },
  components_to_quantize="transformer"
)

pipeline = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)

# model-offloading and tiling
pipeline.enable_model_cpu_offload()
pipeline.vae.enable_tiling()

prompt = "A fluffy teddy bear sits on a bed of soft pillows surrounded by children's toys."
video = pipeline(prompt=prompt, num_frames=61, num_inference_steps=30).frames[0]
export_to_video(video, "output.mp4", fps=15)
```

</hfoption>
<hfoption id="LTX-Video">

```py
import torch
from diffusers import LTXPipeline, AutoModel
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

# fp8 layerwise weight-casting
transformer = AutoModel.from_pretrained(
    "Lightricks/LTX-Video",
    subfolder="transformer",
    torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)

pipeline = LTXPipeline.from_pretrained("Lightricks/LTX-Video", transformer=transformer, torch_dtype=torch.bfloat16)

# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True)
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2)
apply_group_offloading(pipeline.vae, onload_device=onload_device, offload_type="leaf_level")

prompt = """
A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage
"""
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=768,
    height=512,
    num_frames=161,
    decode_timestep=0.03,
    decode_noise_scale=0.025,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```

</hfoption>
</hfoptions>

This guide will cover video generation basics such as which parameters to configure and how to reduce their memory usage.

> [!TIP]
> If you're interested in learning more about how to use a specific model, please refer to their pipeline API model card.

## Pipeline parameters

There are several parameters to configure in the pipeline that'll affect video generation quality or speed. Experimenting with different parameter values is important for discovering the appropriate quality and speed tradeoff.

### num_frames

A frame is a still image that is played in a sequence of other frames to create motion or a video. Control the total number of generated frames with `num_frames`. Increasing `num_frames` increases perceived motion smoothness and visual coherence, which is especially important for videos with dynamic content. A higher `num_frames` value also increases video duration.

Some video models require specific `num_frames` values for inference. For example, [HunyuanVideoPipeline](/docs/diffusers/main/en/api/pipelines/hunyuan_video#diffusers.HunyuanVideoPipeline) recommends `num_frames` values of the form `(4 * k) + 1`, such as 61 or 129. Always check a pipeline's API model card to see if there is a recommended value.

```py
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipeline = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

prompt = """
A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman 
with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The 
camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and 
natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be 
real-life footage
"""

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=768,
    height=512,
    num_frames=161,
    decode_timestep=0.03,
    decode_noise_scale=0.025,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```

### guidance_scale

Guidance scale or "cfg" controls how closely the generated frames adhere to the input conditioning (text, image, or both). Increasing `guidance_scale` generates frames that resemble the input conditions more closely and include finer details, but risks introducing artifacts and reducing output diversity. Lower `guidance_scale` values encourage looser prompt adherence and increase output variety, but details may be less refined. If it's too low, the model may ignore your prompt entirely and generate random noise.

```py
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
  "THUDM/CogVideoX-2b",
  torch_dtype=torch.float16
).to("cuda")

prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over
a plush, blue carpet that mimics the waves of the sea. The ship's hull is painted a rich brown, 
with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an 
oceanic expanse. Surrounding the ship are various other toys and children's items, hinting at 
a playful environment. The scene captures the innocence and imagination of childhood, 
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""

video = pipeline(
  prompt=prompt,
  guidance_scale=6,
  num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

### negative_prompt

A negative prompt is useful for excluding things you don't want to see in the generated video. It is commonly used to refine the quality and alignment of the generated video by pushing the model away from undesirable elements like "blurry, distorted, ugly". This can create cleaner and more focused videos.

```py
# pip install ftfy
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
from diffusers.utils import export_to_video

vae = AutoencoderKLWan.from_pretrained(
  "Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipeline = WanPipeline.from_pretrained(
  "Wan-AI/Wan2.1-T2V-14B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
)
pipeline.scheduler = UniPCMultistepScheduler.from_config(
  pipeline.scheduler.config, flow_shift=5.0
)
pipeline.to("cuda")

pipeline.load_lora_weights("benjamin-paine/steamboat-willie-14b", adapter_name="steamboat-willie")
pipeline.set_adapters("steamboat-willie")

pipeline.enable_model_cpu_offload()

# use "steamboat willie style" to trigger the LoRA
prompt = """
steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts 
dynamic shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""

output = pipeline(
  prompt=prompt,
  num_frames=81,
  guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

## Reduce memory usage

Recent video models like [HunyuanVideoPipeline](/docs/diffusers/main/en/api/pipelines/hunyuan_video#diffusers.HunyuanVideoPipeline) and [WanPipeline](/docs/diffusers/main/en/api/pipelines/wan#diffusers.WanPipeline), which have 10B+ parameters, require a lot of memory and it often exceeds the memory available on consumer hardware. Diffusers offers several techniques for reducing the memory requirements of these large models.

> [!TIP]
> Refer to the [Reduce memory usage](../optimization/memory) guide for more details about other memory saving techniques.

One of these techniques is [group-offloading](../optimization/memory#group-offloading), which offloads groups of internal model layers (such as `torch.nn.Sequential` modules) to the CPU when they aren't being used. The layers are only loaded when they're needed for computation, which avoids storing **all** of the model components on the GPU. For a 14B parameter model like [WanPipeline](/docs/diffusers/main/en/api/pipelines/wan#diffusers.WanPipeline), group-offloading can lower the required memory to ~13GB of VRAM.

```py
# pip install ftfy
import torch
import numpy as np
from diffusers import AutoModel, WanPipeline
from diffusers.hooks.group_offloading import apply_group_offloading
from diffusers.utils import export_to_video, load_image
from transformers import UMT5EncoderModel

text_encoder = UMT5EncoderModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16)
vae = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
transformer = AutoModel.from_pretrained("Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)

# group-offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
apply_group_offloading(text_encoder,
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=4
)
transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)

pipeline = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    vae=vae,
    transformer=transformer,
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

prompt = """
The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic 
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, 
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, 
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

Another option for reducing memory is to quantize a model, which stores the model weights in a lower precision data type. However, quantization may impact video quality depending on the specific video model. Refer to the quantization [Overview](../quantization/overview) to learn more about the different supported quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize a model.

```py
# pip install ftfy

import torch
from diffusers import AutoModel, WanPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
from diffusers.utils import export_to_video

# quantize transformer and text encoder weights with bitsandbytes
pipeline_quant_config = PipelineQuantizationConfig(
  quant_backend="bitsandbytes_4bit",
  quant_kwargs={"load_in_4bit": True},
  components_to_quantize=["transformer", "text_encoder"]
)

vae = AutoModel.from_pretrained(
  "Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipeline = WanPipeline.from_pretrained(
  "Wan-AI/Wan2.1-T2V-14B-Diffusers", vae=vae, quantization_config=pipeline_quant_config, torch_dtype=torch.bfloat16
)
pipeline.scheduler = UniPCMultistepScheduler.from_config(
  pipeline.scheduler.config, flow_shift=5.0
)
pipeline.to("cuda")

pipeline.load_lora_weights("benjamin-paine/steamboat-willie-14b", adapter_name="steamboat-willie")
pipeline.set_adapters("steamboat-willie")

pipeline.enable_model_cpu_offload()

# use "steamboat willie style" to trigger the LoRA
prompt = """
steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts 
dynamic shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""

output = pipeline(
  prompt=prompt,
  num_frames=81,
  guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

## Inference speed

[torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial_.html) can speed up inference by using optimized kernels. Compilation takes longer the first time, but once compiled, it is much faster. It is best to compile the pipeline once and then use it multiple times without changing anything. A change, such as in the image size, triggers recompilation.

The example below compiles the transformer in the pipeline and uses the `"max-autotune"` mode to maximize performance.

```py
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
  "THUDM/CogVideoX-2b",
  torch_dtype=torch.float16
).to("cuda")

# torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)

prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea. 
The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse. 
Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood, 
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""

video = pipeline(
  prompt=prompt,
  guidance_scale=6,
  num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/text-img2vid.md" />

### FreeU
https://huggingface.co/docs/diffusers/main/using-diffusers/image_quality.md

# FreeU

[FreeU](https://hf.co/papers/2309.11497) improves image details by rebalancing the UNet's backbone and skip connection weights. The skip connections can cause the model to overlook some of the backbone semantics which may lead to unnatural image details in the generated image. This technique does not require any additional training and can be applied on the fly during inference for tasks like image-to-image and text-to-video.

Use the [enable_freeu()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.StableDiffusionMixin.enable_freeu) method on your pipeline and configure the scaling factors for the backbone (`b1` and `b2`) and skip connections (`s1` and `s2`). The number after each scaling factor corresponds to the stage in the UNet where the factor is applied. Take a look at the [FreeU](https://github.com/ChenyangSi/FreeU#parameters) repository for reference hyperparameters for different models.

<hfoptions id="freeu">
<hfoption id="Stable Diffusion v1-5">

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None
).to("cuda")
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
generator = torch.Generator(device="cpu").manual_seed(33)
prompt = ""
image = pipeline(prompt, generator=generator).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdv15-no-freeu.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU disabled</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdv15-freeu.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU enabled</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="Stable Diffusion v2-1">

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None
).to("cuda")
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.4, b2=1.6)
generator = torch.Generator(device="cpu").manual_seed(80)
prompt = "A squirrel eating a burger"
image = pipeline(prompt, generator=generator).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdv21-no-freeu.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU disabled</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdv21-freeu.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU enabled</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="Stable Diffusion XL">

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16,
).to("cuda")
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)
generator = torch.Generator(device="cpu").manual_seed(13)
prompt = "A squirrel eating a burger"
image = pipeline(prompt, generator=generator).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-no-freeu.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU disabled</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-freeu.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU enabled</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="Zeroscope">

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipeline = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
).to("cuda")
# values come from https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines
pipeline.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2)
prompt = "Confident teddy bear surfer rides the wave in the tropics"
generator = torch.Generator(device="cpu").manual_seed(47)
video_frames = pipeline(prompt, generator=generator).frames[0]
export_to_video(video_frames, "teddy_bear.mp4", fps=10)
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/video-no-freeu.gif"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU disabled</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/video-freeu.gif"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">FreeU enabled</figcaption>
  </div>
</div>

</hfoption>
</hfoptions>

Call the [disable_freeu()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.StableDiffusionMixin.disable_freeu) method to disable FreeU.

```py
pipeline.disable_freeu()
```


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/image_quality.md" />

### Stable Diffusion XL
https://huggingface.co/docs/diffusers/main/using-diffusers/sdxl.md

# Stable Diffusion XL


[Stable Diffusion XL](https://huggingface.co/papers/2307.01952) (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways:

1. the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters
2. introduces size and crop-conditioning to prevent training data from being discarded and gain more control over how a generated image should be cropped
3. introduces a two-stage model process; the *base* model (can also be run as a standalone model) generates an image as an input to the *refiner* model which adds additional high-quality details

This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting.

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0
```

> [!WARNING]
> We recommend installing the [invisible-watermark](https://pypi.org/project/invisible-watermark/) library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. To disable the watermarker:
>
> ```py
> pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False)
> ```

## Load model checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) method:

```py
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
).to("cuda")
```

You can also use the [from_single_file()](/docs/diffusers/main/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file) method to load a model checkpoint stored in a single file format (`.ckpt` or `.safetensors`) from the Hub or locally:

```py
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16
).to("cuda")
```

## Text-to-image

For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the `height` and `width` parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline_text2image = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline_text2image(prompt=prompt).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" alt="generated image of an astronaut in a jungle"/>
</div>

## Image-to-image

For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with:

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

# use from_pipe to avoid consuming additional memory when loading a checkpoint
pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
init_image = load_image(url)
prompt = "a dog catching a frisbee in the jungle"
image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-img2img.png" alt="generated image of a dog catching a frisbee in a jungle"/>
</div>

## Inpainting

For inpainting, you'll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with.

```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

# use from_pipe to avoid consuming additional memory when loading a checkpoint
pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda")

img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png"

init_image = load_image(img_url)
mask_image = load_image(mask_url)

prompt = "A deep sea diver floating"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint.png" alt="generated image of a deep sea diver in a jungle"/>
</div>

## Refine image quality

SDXL includes a [refiner model](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0) specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner:

1. use the base and refiner models together to produce a refined image
2. use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained)

### Base + refiner model

When you use the base and refiner model together to generate an image, this is known as an [*ensemble of expert denoisers*](https://research.nvidia.com/labs/dir/eDiff-I/). The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model's output to the refiner model, so it should be significantly faster to run. However, you won't be able to inspect the base model's output because it still contains a large amount of noise.

As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model:

```py
from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
).to("cuda")
```

To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the [`denoising_end`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.__call__.denoising_end) parameter and for the refiner model, it is controlled by the [`denoising_start`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline.__call__.denoising_start) parameter.

> [!TIP]
> The `denoising_end` and `denoising_start` parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you're also using the `strength` parameter, it'll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff.

Let's set `denoising_end=0.8` so the base model performs the first 80% of denoising the **high-noise** timesteps and set `denoising_start=0.8` so the refiner model performs the last 20% of denoising the **low-noise** timesteps. The base model output should be in **latent** space instead of a PIL image.

```py
prompt = "A majestic lion jumping from a big stone at night"

image = base(
    prompt=prompt,
    num_inference_steps=40,
    denoising_end=0.8,
    output_type="latent",
).images
image = refiner(
    prompt=prompt,
    num_inference_steps=40,
    denoising_start=0.8,
    image=image,
).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_base.png" alt="generated image of a lion on a rock at night" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">default base model</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_refined.png" alt="generated image of a lion on a rock at night in higher quality" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">ensemble of expert denoisers</figcaption>
  </div>
</div>

The refiner model can also be used for inpainting in the [StableDiffusionXLInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline):

```py
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image, make_image_grid
import torch

base = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

refiner = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
).to("cuda")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url)
mask_image = load_image(mask_url)

prompt = "A majestic tiger sitting on a bench"
num_inference_steps = 75
high_noise_frac = 0.7

image = base(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=num_inference_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images
image = refiner(
    prompt=prompt,
    image=image,
    mask_image=mask_image,
    num_inference_steps=num_inference_steps,
    denoising_start=high_noise_frac,
).images[0]
make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3)
```

This ensemble of expert denoisers method works well for all available schedulers!

### Base to refiner model

SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting.

Load the base and refiner models:

```py
from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
).to("cuda")
```

> [!WARNING]
> You can use SDXL refiner with a different base model. For example, you can use the [Hunyuan-DiT](../api/pipelines/hunyuandit) or [PixArt-Sigma](../api/pipelines/pixart_sigma) pipelines to generate images with better prompt adherence. Once you have generated an image, you can pass it to the SDXL refiner model to enhance final generation quality.

Generate an image from the base model, and set the model output to **latent** space:

```py
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

image = base(prompt=prompt, output_type="latent").images[0]
```

Pass the generated image to the refiner model:

```py
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png" alt="generated image of an astronaut riding a green horse on Mars" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">base model</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png" alt="higher quality generated image of an astronaut riding a green horse on Mars" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">base model + refiner model</figcaption>
  </div>
</div>

For inpainting, load the base and the refiner model in the [StableDiffusionXLInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline), remove the `denoising_end` and `denoising_start` parameters, and choose a smaller number of inference steps for the refiner.
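
A minimal sketch of this approach, assuming `base` and `refiner` are loaded as [StableDiffusionXLInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline) instances and `init_image`/`mask_image` are the same inputs as in the ensemble example above; the refiner step count is illustrative.

```py
prompt = "A majestic tiger sitting on a bench"

# fully denoise with the base model first (no denoising_end, regular PIL output)
image = base(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=75,
).images[0]

# refine the fully-denoised image with fewer steps (no denoising_start)
image = refiner(
    prompt=prompt,
    image=image,
    mask_image=mask_image,
    num_inference_steps=30,
).images[0]
image
```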

## Micro-conditioning

SDXL training involves several additional conditioning techniques, which are referred to as *micro-conditioning*. These include original image size, target image size, and cropping parameters. The micro-conditionings can be used at inference time to create high-quality, centered images.

> [!TIP]
> You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline), [StableDiffusionXLImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline), [StableDiffusionXLInpaintPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline), and [StableDiffusionXLControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetPipeline).

### Size conditioning

There are two types of size conditioning:

- [`original_size`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.__call__.original_size) conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use `original_size` to indicate the original image resolution. Using the default value of `(1024, 1024)` produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as `(256, 256)`, the model still generates 1024x1024 images, but they'll look like the low resolution images (simpler patterns, blurring) in the dataset.

- [`target_size`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.__call__.target_size) conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of `(1024, 1024)`, you'll get an image that resembles the composition of square images in the dataset. We recommend using the same value for `target_size` and `original_size`, but feel free to experiment with other options!
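
For example, here is a minimal sketch that passes both conditionings explicitly; the values shown are the defaults and are only illustrative.

```py
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# condition on the default 1024x1024 size for the highest-quality look
image = pipeline(
    prompt=prompt,
    original_size=(1024, 1024),
    target_size=(1024, 1024),
).images[0]
image
```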

🤗 Diffusers also lets you specify negative conditions about an image's size to steer generation away from certain image resolutions:

```py
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(
    prompt=prompt,
    negative_original_size=(512, 512),
    negative_target_size=(1024, 1024),
).images[0]
```

<div class="flex flex-col justify-center">
  <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/negative_conditions.png"/>
  <figcaption class="text-center">Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512).</figcaption>
</div>

### Crop conditioning

Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL *learns* that no cropping - coordinates `(0, 0)` - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions!

```py
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-cropped.png" alt="generated image of an astronaut in a jungle, slightly cropped"/>
</div>

You can also specify negative cropping coordinates to steer generation away from certain cropping parameters:

```py
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(
    prompt=prompt,
    negative_original_size=(512, 512),
    negative_crops_coords_top_left=(0, 0),
    negative_target_size=(1024, 1024),
).images[0]
image
```

## Use a different prompt for each text-encoder

SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can [improve quality](https://github.com/huggingface/diffusers/issues/4004#issuecomment-1627764201). Pass your original prompt to `prompt` and the second prompt to `prompt_2` (use `negative_prompt` and `negative_prompt_2` if you're using negative prompts):

```py
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# prompt is passed to OAI CLIP-ViT/L-14
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# prompt_2 is passed to OpenCLIP-ViT/bigG-14
prompt_2 = "Van Gogh painting"
image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-double-prompt.png" alt="generated image of an astronaut in a jungle in the style of a van gogh painting"/>
</div>

The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the [SDXL textual inversion](textual_inversion_inference#stable-diffusion-xl) section.

## Optimizations

SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference.

1. Offload the model to the CPU with [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) for out-of-memory errors:

```diff
- base.to("cuda")
- refiner.to("cuda")
+ base.enable_model_cpu_offload()
+ refiner.enable_model_cpu_offload()
```

2. Use `torch.compile` for ~20% speed-up (you need `torch>=2.0`):

```diff
+ base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True)
+ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True)
```

3. Enable [xFormers](../optimization/xformers) to run SDXL if `torch<2.0`:

```diff
+ base.enable_xformers_memory_efficient_attention()
+ refiner.enable_xformers_memory_efficient_attention()
```

## Other resources

If you're interested in experimenting with a minimal version of the [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) used in SDXL, take a look at the [minSDXL](https://github.com/cloneofsimo/minSDXL) implementation which is written in PyTorch and directly compatible with 🤗 Diffusers.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/sdxl.md" />

### DiffusionPipeline
https://huggingface.co/docs/diffusers/main/using-diffusers/loading.md

# DiffusionPipeline

Diffusion models consist of multiple components like UNets or diffusion transformers (DiTs), text encoders, variational autoencoders (VAEs), and schedulers. The [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) wraps all of these components into a single easy-to-use API without giving up the flexibility to modify its components.

This guide will show you how to load a [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline).

## Loading a pipeline

[DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) is a base pipeline class that automatically selects and returns an instance of a model's pipeline subclass, like [QwenImagePipeline](/docs/diffusers/main/en/api/pipelines/qwenimage#diffusers.QwenImagePipeline), by scanning the `model_index.json` file for the class name.

Pass a model id to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to load a pipeline.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
```

Every model has a specific pipeline subclass that inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline). A subclass usually has a narrow focus and is task-specific. See the table below for an example.

| pipeline subclass | task |
|---|---|
| [QwenImagePipeline](/docs/diffusers/main/en/api/pipelines/qwenimage#diffusers.QwenImagePipeline) | text-to-image |
| [QwenImageImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/qwenimage#diffusers.QwenImageImg2ImgPipeline) | image-to-image |
| [QwenImageInpaintPipeline](/docs/diffusers/main/en/api/pipelines/qwenimage#diffusers.QwenImageInpaintPipeline) | inpaint |

You could use the subclass directly by passing a model id to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
import torch
from diffusers import QwenImagePipeline

pipeline = QwenImagePipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
```

> [!TIP]
> Refer to the [Single file format](./other-formats#single-file-format) docs to learn how to load single file models.

### Local pipelines

Pipelines can also be run locally. Use [snapshot_download](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/file_download#huggingface_hub.snapshot_download) to download a model repository.

```py
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Qwen/Qwen-Image")
```

The model is downloaded to your [cache](../installation#cache). Pass the folder path to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to load it.

```py
import torch
from diffusers import QwenImagePipeline

pipeline = QwenImagePipeline.from_pretrained(
  "path/to/your/cache", torch_dtype=torch.bfloat16, device_map="cuda"
)
```

The [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) method won't download files from the Hub when it detects a local path. But this also means it won't download and cache any updates that have been made to the model either.
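
If you'd rather not hardcode the cache path, `snapshot_download` returns the local folder it downloaded to, so its return value can be passed directly to [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained). A minimal sketch:

```py
import torch
from huggingface_hub import snapshot_download
from diffusers import QwenImagePipeline

# returns the path to the local snapshot folder
local_path = snapshot_download(repo_id="Qwen/Qwen-Image")
pipeline = QwenImagePipeline.from_pretrained(
  local_path, torch_dtype=torch.bfloat16, device_map="cuda"
)
```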

## Pipeline data types

Use the `torch_dtype` argument in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to load a model with a specific data type. This allows you to load different models in different precisions. For example, loading a large transformer model in half-precision reduces the memory required.

Pass the data type for each model as a dictionary to `torch_dtype`. Use the `default` key to set the default data type. If a model isn't in the dictionary and `default` isn't provided, it is loaded in full precision (`torch.float32`).

```py
import torch
from diffusers import QwenImagePipeline

pipeline = QwenImagePipeline.from_pretrained(
  "Qwen/Qwen-Image",
  torch_dtype={"transformer": torch.bfloat16, "default": torch.float16},
)
print(pipeline.transformer.dtype, pipeline.vae.dtype)
```

You don't need to use a dictionary if you're loading all the models in the same data type.

```py
import torch
from diffusers import QwenImagePipeline

pipeline = QwenImagePipeline.from_pretrained(
  "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
print(pipeline.transformer.dtype, pipeline.vae.dtype)
```

## Device placement

The `device_map` argument determines where an individual model or an entire pipeline is placed on an accelerator like a GPU. It is especially helpful when there are multiple GPUs.

A pipeline supports two options for `device_map`, `"cuda"` and `"balanced"`. Refer to the table below to compare the placement strategies.

| parameter | description |
|---|---|
| `"cuda"` | places pipeline on a supported accelerator device like CUDA |
| `"balanced"` | evenly distributes pipeline on all GPUs |

Use the `max_memory` argument in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) to allocate a maximum amount of memory to use on each device. By default, Diffusers uses the maximum amount available.

```py
import torch
from diffusers import DiffusionPipeline

max_memory = {0: "16GB", 1: "16GB"}
pipeline = DiffusionPipeline.from_pretrained(
  "Qwen/Qwen-Image",
  torch_dtype=torch.bfloat16,
  device_map="balanced",
  max_memory=max_memory,
)
```

The `hf_device_map` attribute allows you to access and view the `device_map`.

```py
print(pipeline.hf_device_map)
# {'unet': 1, 'vae': 1, 'safety_checker': 0, 'text_encoder': 0}
```

Reset a pipeline's `device_map` with the [reset_device_map()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.reset_device_map) method. This is necessary if you want to use methods such as `.to()`, [enable_sequential_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_sequential_cpu_offload), and [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload).

```py
pipeline.reset_device_map()
```

## Parallel loading

Large models are often [sharded](../training/distributed_inference#model-sharding) into smaller files so that they are easier to load. Diffusers supports loading shards in parallel to speed up the loading process.

Set `HF_ENABLE_PARALLEL_LOADING` to `"YES"` to enable parallel loading of shards.

The `device_map` argument should be set to `"cuda"` to pre-allocate a large chunk of memory based on the model size. This substantially reduces model load time because warming up the memory allocator now avoids many smaller calls to the allocator later.

```py
import os
import torch
from diffusers import DiffusionPipeline

os.environ["HF_ENABLE_PARALLEL_LOADING"] = "YES"

pipeline = DiffusionPipeline.from_pretrained(
  "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16, device_map="cuda"
)
```

## Replacing models in a pipeline

[DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) is flexible and accommodates loading different models or schedulers. You can experiment with different schedulers to optimize for generation speed or quality, and you can replace models with more performant ones.

The example below uses a more stable VAE version.

```py
import torch
from diffusers import DiffusionPipeline, AutoModel

vae = AutoModel.from_pretrained(
  "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipeline = DiffusionPipeline.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  vae=vae,
  torch_dtype=torch.float16,
  device_map="cuda"
)
```
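
Schedulers can be swapped in a similar way by reusing the loaded scheduler's configuration. A minimal sketch; the choice of `DPMSolverMultistepScheduler` here is only an illustration:

```py
from diffusers import DPMSolverMultistepScheduler

# build the new scheduler from the existing scheduler's config so it stays compatible with the pipeline
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
```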

## Reusing models in multiple pipelines

When working with multiple pipelines that use the same model, the [from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe) method enables reusing a model instead of reloading it each time. This allows you to use multiple pipelines without increasing memory usage.

Memory usage is determined by the pipeline with the highest memory requirement regardless of the number of pipelines.

The example below loads a pipeline and then loads a second pipeline with [from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe) to use [perturbed-attention guidance (PAG)](../api/pipelines/pag) to improve generation quality.

> [!WARNING]
> Use [AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image) because [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) doesn't support PAG. Refer to the [AutoPipeline](../tutorials/autopipeline) docs to learn more. 

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline_sdxl = AutoPipelineForText2Image.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, device_map="cuda"
)
prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
image = pipeline_sdxl(prompt).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
# Max memory reserved: 10.47 GB
```

Set `enable_pag=True` in the second pipeline to enable PAG. The second pipeline uses the same amount of memory because it shares model weights with the first one.

```py
pipeline = AutoPipelineForText2Image.from_pipe(
  pipeline_sdxl, enable_pag=True
)
prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
image = pipeline(prompt).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
# Max memory reserved: 10.47 GB
```

> [!WARNING]
> Pipelines created by [from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe) share the same models and *state*. Modifying the state of a model in one pipeline affects all the other pipelines that share the same model.

Some methods may not work correctly on pipelines created with [from_pipe()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pipe). For example, [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) relies on a unique model execution order, which may differ in the new pipeline. To ensure proper functionality, reapply these methods on the new pipeline.
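
For example, to use CPU offloading with the new pipeline, call the method on it directly rather than relying on the original pipeline's setting (a sketch; if a `device_map` was used, reset it first as described above):

```py
# reapply offloading on the pipeline returned by from_pipe()
pipeline.enable_model_cpu_offload()
```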

## Safety checker

Diffusers provides a [safety checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) for older Stable Diffusion models to prevent generating harmful content. It screens the generated output against a set of hardcoded harmful concepts.

If you want to disable the safety checker, pass `safety_checker=None` in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) as shown below.

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "stable-diffusion-v1-5/stable-diffusion-v1-5", safety_checker=None
)
"""
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
"""
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/loading.md" />

### Shap-E
https://huggingface.co/docs/diffusers/main/using-diffusers/shap-e.md

# Shap-E


Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps:

1. an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset
2. a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications

This guide will show you how to use Shap-E to start generating your own 3D assets!

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate trimesh
```

## Text-to-3D

To generate a gif of a 3D object, pass a text prompt to the [ShapEPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEPipeline). The pipeline generates a list of image frames which are used to create the 3D object.

```py
import torch
from diffusers import ShapEPipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to(device)

guidance_scale = 15.0
prompt = ["A firecracker", "A birthday cupcake"]

images = pipe(
    prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=64,
    frame_size=256,
).images
```

Now use the [export_to_gif()](/docs/diffusers/main/en/api/utilities#diffusers.utils.export_to_gif) function to turn the list of image frames into a gif of the 3D object.

```py
from diffusers.utils import export_to_gif

export_to_gif(images[0], "firecracker_3d.gif")
export_to_gif(images[1], "cake_3d.gif")
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/firecracker_out.gif"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">prompt = "A firecracker"</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/cake_out.gif"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">prompt = "A birthday cupcake"</figcaption>
  </div>
</div>

## Image-to-3D

To generate a 3D object from another image, use the [ShapEImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEImg2ImgPipeline). You can use an existing image or generate an entirely new one. Let's use the [Kandinsky 2.1](../api/pipelines/kandinsky) model to generate a new image.

```py
from diffusers import DiffusionPipeline
import torch

prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda")

prompt = "A cheeseburger, white background"

image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple()
image = pipeline(
    prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
).images[0]

image.save("burger.png")
```

Pass the cheeseburger to the [ShapEImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEImg2ImgPipeline) to generate a 3D representation of it.

```py
from PIL import Image
from diffusers import ShapEImg2ImgPipeline
from diffusers.utils import export_to_gif

pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda")

guidance_scale = 3.0
image = Image.open("burger.png").resize((256, 256))

images = pipe(
    image,
    guidance_scale=guidance_scale,
    num_inference_steps=64,
    frame_size=256,
).images

gif_path = export_to_gif(images[0], "burger_3d.gif")
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/burger_in.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">cheeseburger</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/burger_out.gif"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">3D cheeseburger</figcaption>
  </div>
</div>

## Generate mesh

Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you'll convert the output into a `glb` file because the 🤗 Datasets library supports mesh visualization of `glb` files which can be rendered by the [Dataset viewer](https://huggingface.co/docs/hub/datasets-viewer#dataset-preview).

You can generate mesh outputs for both the [ShapEPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEPipeline) and [ShapEImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/shap_e#diffusers.ShapEImg2ImgPipeline) by specifying the `output_type` parameter as `"mesh"`:

```py
import torch
from diffusers import ShapEPipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to(device)

guidance_scale = 15.0
prompt = "A birthday cupcake"

images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images
```

Use the `export_to_ply()` function to save the mesh output as a `ply` file:

> [!TIP]
> You can optionally save the mesh output as an `obj` file with the `export_to_obj()` function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage!

```py
from diffusers.utils import export_to_ply

ply_path = export_to_ply(images[0], "3d_cake.ply")
print(f"Saved to folder: {ply_path}")
```

Then you can convert the `ply` file to a `glb` file with the trimesh library:

```py
import trimesh

mesh = trimesh.load("3d_cake.ply")
mesh_export = mesh.export("3d_cake.glb", file_type="glb")
```

By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform:

```py
import trimesh
import numpy as np

mesh = trimesh.load("3d_cake.ply")
rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0])
mesh = mesh.apply_transform(rot)
mesh_export = mesh.export("3d_cake.glb", file_type="glb")
```

Upload the mesh file to your dataset repository to visualize it with the Dataset viewer!
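
A minimal upload sketch with `huggingface_hub`; the `repo_id` below is a placeholder for your own dataset repository:

```py
from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="3d_cake.glb",
    path_in_repo="3d_cake.glb",
    repo_id="your-username/3d-assets",  # placeholder, replace with your dataset repository
    repo_type="dataset",
)
```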

<div class="flex justify-center">
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/3D-cake.gif"/>
</div>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/shap-e.md" />

### ControlNet
https://huggingface.co/docs/diffusers/main/using-diffusers/controlnet.md

# ControlNet

[ControlNet](https://huggingface.co/papers/2302.05543) is an adapter that enables controllable generation such as generating an image of a cat in a *specific pose* or following the lines in a sketch of a *specific* cat. It works by adding a smaller network of "zero convolution" layers and progressively training these to avoid disrupting the original model. The original model parameters are frozen so they don't need to be retrained.

A ControlNet is conditioned on extra visual information or "structural controls" (canny edge, depth maps, human pose, etc.) that can be combined with text prompts to generate images that are guided by the visual input.

> [!TIP]
> ControlNets are available to many models such as [Flux](../api/pipelines/controlnet_flux), [Hunyuan-DiT](../api/pipelines/controlnet_hunyuandit), [Stable Diffusion 3](../api/pipelines/controlnet_sd3), and more. The examples in this guide use Flux and Stable Diffusion XL.

Load a ControlNet conditioned on a specific control, such as canny edge, and pass it to the pipeline in [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

<hfoptions id="usage">
<hfoption id="text-to-image">

Generate a canny image with [opencv-python](https://github.com/opencv/opencv-python).

```py
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

original_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"
)

image = np.array(original_image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
```

Pass the canny image to the pipeline. Use the `controlnet_conditioning_scale` parameter to determine how much weight to assign to the control.

```py
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipeline = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

prompt = """
A photorealistic overhead image of a cat reclining sideways in a flamingo pool floatie holding a margarita. 
The cat is floating leisurely in the pool and completely relaxed and happy.
"""

pipeline(
    prompt, 
    control_image=canny_image,
    controlnet_conditioning_scale=0.5,
    num_inference_steps=50, 
    guidance_scale=3.5,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png" width="300" alt="Generated image (prompt only)"/>
    <figcaption style="text-align: center;">original image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat-generated.png" width="300" alt="Generated image (ControlNet + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>


</hfoption>
<hfoption id="image-to-image">

Generate a depth map with a depth estimation pipeline from Transformers.

```py
import torch
import numpy as np
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL
from diffusers.utils import load_image


depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")

def get_depth_map(image):
    image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda")
    with torch.no_grad(), torch.autocast("cuda"):
        depth_map = depth_estimator(image).predicted_depth

    depth_map = torch.nn.functional.interpolate(
        depth_map.unsqueeze(1),
        size=(1024, 1024),
        mode="bicubic",
        align_corners=False,
    )
    depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_map = (depth_map - depth_min) / (depth_max - depth_min)
    image = torch.cat([depth_map] * 3, dim=1)
    image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
    image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
    return image

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"
)
depth_image = get_depth_map(image)
```

Pass the depth map to the pipeline. Use the `controlnet_conditioning_scale` parameter to determine how much weight to assign to the control.

```py
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0-small",
    torch_dtype=torch.float16,
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = """
A photorealistic overhead image of a cat reclining sideways in a flamingo pool floatie holding a margarita. 
The cat is floating leisurely in the pool and completely relaxed and happy.
"""
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"
).resize((1024, 1024))
controlnet_conditioning_scale = 0.5 
pipeline(
    prompt,
    image=image,
    control_image=depth_image,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    strength=0.99,
    num_inference_steps=100,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png" width="300" alt="Generated image (prompt only)"/>
    <figcaption style="text-align: center;">original image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_depth_image.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">depth map</figcaption>
  </figure>
  <figure> 
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_depth_cat.png" width="300" alt="Generated image (ControlNet + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

</hfoption>
<hfoption id="inpainting">

Load the initial image and a mask image marking the area to inpaint, then generate a canny image from the initial image to use as the control.

```py
import cv2
import torch
import numpy as np
from PIL import Image
from diffusers.utils import load_image
from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png"
)
init_image = init_image.resize((1024, 1024))
mask_image = load_image(
    "/content/cat_mask.png"
)
mask_image = mask_image.resize((1024, 1024))

def make_canny_condition(image):
    image = np.array(image)
    image = cv2.Canny(image, 100, 200)
    image = image[:, :, None]
    image = np.concatenate([image, image, image], axis=2)
    image = Image.fromarray(image)
    return image

control_image = make_canny_condition(init_image)
```

Pass the mask and control image to the pipeline. Use the `controlnet_conditioning_scale` parameter to determine how much weight to assign to the control.

```py
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipeline = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipeline(
    "a cute and fluffy bunny rabbit",
    num_inference_steps=100,
    strength=0.99,
    controlnet_conditioning_scale=0.5,
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/non-enhanced-prompt.png" width="300" alt="Generated image (prompt only)"/>
    <figcaption style="text-align: center;">original image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat_mask.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">mask image</figcaption>
  </figure>
  <figure> 
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_rabbit_inpaint.png" width="300" alt="Generated image (ControlNet + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

</hfoption>
</hfoptions>

## Multi-ControlNet

You can compose multiple ControlNet conditionings, such as a canny image and a depth map, to create a *MultiControlNet*. For the best results, mask the conditionings so they don't overlap and experiment with different `controlnet_conditioning_scale` parameters to adjust how much weight is assigned to each control input.

The example below composes a canny image and depth map.

Pass the ControlNets as a list to the pipeline and resize the images to the expected input size.

```py
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL

controlnets = [
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0-small", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16,
    ),
]

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16
).to("cuda")

prompt = """
a relaxed rabbit sitting on a striped towel next to a pool with a tropical drink nearby, 
bright sunny day, vacation scene, 35mm photograph, film, professional, 4k, highly detailed
"""
negative_prompt = "lowres, bad anatomy, worst quality, low quality, deformed, ugly"

# the order of the control images must match the order of the ControlNets above
images = [depth_image.resize((1024, 1024)), canny_image.resize((1024, 1024))]

pipeline(
    prompt,
    negative_prompt=negative_prompt,
    image=images,
    num_inference_steps=100,
    controlnet_conditioning_scale=[0.5, 0.5],
    strength=0.7,
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Generated image (prompt only)"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/multicontrolnet_depth.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">depth map</figcaption>
  </figure>
  <figure> 
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl_multi_controlnet.png" width="300" alt="Generated image (ControlNet + prompt)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

## guess_mode

[Guess mode](https://github.com/lllyasviel/ControlNet/discussions/188) generates an image from **only** the control input (canny edge, depth map, pose, etc.) without guidance from a prompt. It adjusts the scale of the ControlNet's output residuals by a fixed ratio depending on block depth. The shallowest `DownBlock` is scaled by `0.1`, and the scale increases with depth until the `MidBlock` output is scaled by `1.0`.

```py
import torch
from diffusers.utils import load_image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
  "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
  "stabilityai/stable-diffusion-xl-base-1.0",
  controlnet=controlnet,
  torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png")
pipeline(
  "",
  image=canny_image,
  guess_mode=True
).images[0]
```

<div style="display: flex; gap: 10px; justify-content: space-around; align-items: flex-end;">
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/canny-cat.png" width="300" alt="Control image (Canny edges)"/>
    <figcaption style="text-align: center;">canny image</figcaption>
  </figure>
  <figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guess_mode.png" width="300" alt="Generated image (Guess mode)"/>
    <figcaption style="text-align: center;">generated image</figcaption>
  </figure>
</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/controlnet.md" />

### Latent Consistency Model
https://huggingface.co/docs/diffusers/main/using-diffusers/inference_with_lcm.md

# Latent Consistency Model


[Latent Consistency Models (LCMs)](https://hf.co/papers/2310.04378) enable fast high-quality image generation by directly predicting the reverse diffusion process in the latent rather than pixel space. In other words, LCMs try to predict the noiseless image from the noisy image in contrast to typical diffusion models that iteratively remove noise from the noisy image. By avoiding the iterative sampling process, LCMs are able to generate high-quality images in 2-4 steps instead of 20-30 steps.

LCMs are distilled from pretrained models, which requires ~32 hours of A100 compute. To speed this up, [LCM-LoRAs](https://hf.co/papers/2311.05556) train a [LoRA adapter](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora), which has far fewer parameters to train compared to the full model. The LCM-LoRA can be plugged into a diffusion model once it has been trained.

This guide will show you how to use LCMs and LCM-LoRAs for fast inference on tasks and how to use them with other adapters like ControlNet or T2I-Adapter.

> [!TIP]
> LCMs and LCM-LoRAs are available for Stable Diffusion v1.5, Stable Diffusion XL, and the SSD-1B model. You can find their checkpoints on the [Latent Consistency](https://hf.co/collections/latent-consistency/latent-consistency-models-weights-654ce61a95edd6dffccef6a8) Collections.

## Text-to-image

<hfoptions id="lcm-text2img">
<hfoption id="LCM">

To use LCMs, you need to load the LCM checkpoint for your supported model into [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Then you can use the pipeline as usual, and pass a text prompt to generate an image in just 4 steps.

A couple of notes to keep in mind when using LCMs are:

* Typically, batch size is doubled inside the pipeline for classifier-free guidance. But LCM applies guidance with guidance embeddings and doesn't need to double the batch size, which leads to faster inference. The downside is that negative prompts don't work with LCM because they don't have any effect on the denoising process.
* The ideal range for `guidance_scale` is [3., 13.] because that is what the UNet was trained with. However, disabling `guidance_scale` with a value of 1.0 is also effective in most cases.

```python
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler
import torch

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdxl_t2i.png"/>
</div>

</hfoption>
<hfoption id="LCM-LoRA">

To use LCM-LoRAs, you need to replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler) and load the LCM-LoRA weights with the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method. Then you can use the pipeline as usual, and pass a text prompt to generate an image in just 4 steps.

A couple of notes to keep in mind when using LCM-LoRAs are:

* Typically, batch size is doubled inside the pipeline for classifier-free guidance. But LCM applies guidance with guidance embeddings and doesn't need to double the batch size, which leads to faster inference. The downside is that negative prompts don't work with LCM because they don't have any effect on the denoising process.
* You could use guidance with LCM-LoRAs, but it is very sensitive to high `guidance_scale` values and can lead to artifacts in the generated image. The best values we've found are between [1.0, 2.0].
* Replace [stabilityai/stable-diffusion-xl-base-1.0](https://hf.co/stabilityai/stable-diffusion-xl-base-1.0) with any finetuned model. For example, try using the [animagine-xl](https://huggingface.co/Linaqruf/animagine-xl) checkpoint to generate anime images with SDXL.

```py
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(42)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdxl_t2i.png"/>
</div>

</hfoption>
</hfoptions>

## Image-to-image

<hfoptions id="lcm-img2img">
<hfoption id="LCM">

To use LCMs for image-to-image, you need to load the LCM checkpoint for your supported model into [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Then you can use the pipeline as usual, and pass a text prompt and initial image to generate an image in just 4 steps.

> [!TIP]
> Experiment with different values for `num_inference_steps`, `strength`, and `guidance_scale` to get the best results.

```python
import torch
from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler
from diffusers.utils import load_image

unet = UNet2DConditionModel.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    subfolder="unet",
    torch_dtype=torch.float16,
)

pipe = AutoPipelineForImage2Image.from_pretrained(
    "Lykon/dreamshaper-7",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"
generator = torch.manual_seed(0)
image = pipe(
    prompt,
    image=init_image,
    num_inference_steps=4,
    guidance_scale=7.5,
    strength=0.5,
    generator=generator
).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm-img2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="LCM-LoRA">

To use LCM-LoRAs for image-to-image, you need to replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler) and load the LCM-LoRA weights with the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method. Then you can use the pipeline as usual, and pass a text prompt and initial image to generate an image in just 4 steps.

> [!TIP]
> Experiment with different values for `num_inference_steps`, `strength`, and `guidance_scale` to get the best results.

```py
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from diffusers.utils import make_image_grid, load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "Lykon/dreamshaper-7",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"

generator = torch.manual_seed(0)
image = pipe(
    prompt,
    image=init_image,
    num_inference_steps=4,
    guidance_scale=1,
    strength=0.6,
    generator=generator
).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm-lora-img2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

</hfoption>
</hfoptions>

## Inpainting

To use LCM-LoRAs for inpainting, you need to replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler) and load the LCM-LoRA weights with the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method. Then you can use the pipeline as usual, and pass a text prompt, initial image, and mask image to generate an image in just 4 steps.

```py
import torch
from diffusers import AutoPipelineForInpainting, LCMScheduler
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    generator=generator,
    num_inference_steps=4,
    guidance_scale=4,
).images[0]
image
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm-lora-inpaint.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Adapters

LCMs are compatible with adapters like LoRA, ControlNet, T2I-Adapter, and AnimateDiff. You can bring the speed of LCMs to these adapters to generate images in a certain style or condition the model on another input like a canny image.

### LoRA

[LoRA](../tutorials/using_peft_for_inference) adapters can be rapidly finetuned to learn a new style from just a few images and plugged into a pretrained model to generate images in that style.

<hfoptions id="lcm-lora">
<hfoption id="LCM">

Load the LCM checkpoint for your supported model into [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Then you can use the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method to load the LoRA weights into the LCM and generate a styled image in a few steps.

```python
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler
import torch

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut")

prompt = "papercut, a cute fox"
generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdx_lora_mix.png"/>
</div>

</hfoption>
<hfoption id="LCM-LoRA">

Replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Then you can use the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method to load the LCM-LoRA weights and the style LoRA you want to use. Combine both LoRA adapters with the `~loaders.UNet2DConditionLoadersMixin.set_adapters` method and generate a styled image in a few steps.

```py
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut")

pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8])

prompt = "papercut, a cute fox"
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdx_lora_mix.png"/>
</div>

</hfoption>
</hfoptions>

### ControlNet

[ControlNets](./controlnet) are adapters that can be trained on a variety of inputs like canny edge, pose estimation, or depth. A ControlNet can be inserted into the pipeline to provide additional conditioning and control to the model for more accurate generation.

You can find additional ControlNet models trained on other inputs in [lllyasviel's](https://hf.co/lllyasviel) repository.

<hfoptions id="lcm-controlnet">
<hfoption id="LCM">

Load a ControlNet model trained on canny images with [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel). Then load an LCM model into [StableDiffusionControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline) and replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Now pass the canny image to the pipeline and generate an image.

> [!TIP]
> Experiment with different values for `num_inference_steps`, `controlnet_conditioning_scale`, `cross_attention_kwargs`, and `guidance_scale` to get the best results.

```python
import torch
import cv2
import numpy as np
from PIL import Image

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
from diffusers.utils import load_image, make_image_grid

image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((512, 512))

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

generator = torch.manual_seed(0)
image = pipe(
    "the mona lisa",
    image=canny_image,
    num_inference_steps=4,
    generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_full_sdv1-5_controlnet.png"/>
</div>

</hfoption>
<hfoption id="LCM-LoRA">

Load a ControlNet model trained on canny images with [ControlNetModel](/docs/diffusers/main/en/api/models/controlnet#diffusers.ControlNetModel). Then load a Stable Diffusion v1.5 model into [StableDiffusionControlNetPipeline](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline) and replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Use the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method to load the LCM-LoRA weights, then pass the canny image to the pipeline and generate an image.

> [!TIP]
> Experiment with different values for `num_inference_steps`, `controlnet_conditioning_scale`, `cross_attention_kwargs`, and `guidance_scale` to get the best results.

```py
import torch
import cv2
import numpy as np
from PIL import Image

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
from diffusers.utils import load_image

image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((512, 512))

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None,
    variant="fp16"
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

generator = torch.manual_seed(0)
image = pipe(
    "the mona lisa",
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=1.5,
    controlnet_conditioning_scale=0.8,
    cross_attention_kwargs={"scale": 1},
    generator=generator,
).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lcm_sdv1-5_controlnet.png"/>
</div>

</hfoption>
</hfoptions>

### T2I-Adapter

[T2I-Adapter](./t2i_adapter) is an even more lightweight adapter than ControlNet that provides an additional input to condition a pretrained model. It is faster than ControlNet, but the results may be slightly worse.

You can find additional T2I-Adapter checkpoints trained on other inputs in [TencentArc's](https://hf.co/TencentARC) repository.

<hfoptions id="lcm-t2i">
<hfoption id="LCM">

Load a T2IAdapter trained on canny images and pass it to the [StableDiffusionXLAdapterPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/adapter#diffusers.StableDiffusionXLAdapterPipeline). Then load an LCM checkpoint into [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) and replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler). Now pass the canny image to the pipeline and generate an image.

```python
import torch
import cv2
import numpy as np
from PIL import Image

from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler
from diffusers.utils import load_image, make_image_grid

# detect the canny map in low resolution to avoid high-frequency details
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((384, 384))

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image).resize((1024, 1216))

adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    adapter=adapter,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "the mona lisa, 4k picture, high quality"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=5,
    adapter_conditioning_scale=0.8,
    adapter_conditioning_factor=1,
    generator=generator,
).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm-t2i.png"/>
</div>

</hfoption>
<hfoption id="LCM-LoRA">

Load a T2IAdapter trained on canny images and pass it to the [StableDiffusionXLAdapterPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/adapter#diffusers.StableDiffusionXLAdapterPipeline). Replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler), and use the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method to load the LCM-LoRA weights. Pass the canny image to the pipeline and generate an image.

```py
import torch
import cv2
import numpy as np
from PIL import Image

from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler
from diffusers.utils import load_image, make_image_grid

# detect the canny map in low resolution to avoid high-frequency details
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((384, 384))

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image).resize((1024, 1024))

adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

prompt = "the mona lisa, 4k picture, high quality"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=1.5,
    adapter_conditioning_scale=0.8,
    adapter_conditioning_factor=1,
    generator=generator,
).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm-lora-t2i.png"/>
</div>

</hfoption>
</hfoptions>

### AnimateDiff

[AnimateDiff](../api/pipelines/animatediff) is an adapter that adds motion to an image. It can be used with most Stable Diffusion models, effectively turning them into "video generation" models. Generating good results with a video model usually requires generating multiple frames (16-24), which can be very slow with a regular Stable Diffusion model. LCM-LoRA can speed up this process by only taking 4-8 steps for each frame.

Load an [AnimateDiffPipeline](/docs/diffusers/main/en/api/pipelines/animatediff#diffusers.AnimateDiffPipeline) and pass a `MotionAdapter` to it. Then replace the scheduler with the [LCMScheduler](/docs/diffusers/main/en/api/schedulers/lcm#diffusers.LCMScheduler), and combine both LoRA adapters with the `~loaders.UNet2DConditionLoadersMixin.set_adapters` method. Now you can pass a prompt to the pipeline and generate an animated image.

```py
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5")
pipe = AnimateDiffPipeline.from_pretrained(
    "frankjoshua/toonyou_beta6",
    motion_adapter=adapter,
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora")

pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2])

prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
generator = torch.manual_seed(0)
frames = pipe(
    prompt=prompt,
    num_inference_steps=5,
    guidance_scale=1.25,
    cross_attention_kwargs={"scale": 1},
    num_frames=24,
    generator=generator
).frames[0]
export_to_gif(frames, "animation.gif")
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm-lora-animatediff.gif"/>
</div>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/inference_with_lcm.md" />

### Trajectory Consistency Distillation-LoRA
https://huggingface.co/docs/diffusers/main/using-diffusers/inference_with_tcd_lora.md

# Trajectory Consistency Distillation-LoRA

Trajectory Consistency Distillation (TCD) enables a model to generate higher quality and more detailed images with fewer steps. Moreover, owing to the effective error mitigation during the distillation process, TCD demonstrates superior performance even with a large number of inference steps.

The major advantages of TCD are:

- Better than Teacher: TCD demonstrates superior generative quality at both small and large inference steps and exceeds the performance of [DPM-Solver++(2S)](../api/schedulers/multistep_dpm_solver) with Stable Diffusion XL (SDXL). There is no additional discriminator or LPIPS supervision included during TCD training.

- Flexible Inference Steps: The inference steps for TCD sampling can be freely adjusted without adversely affecting the image quality.

- Freely change detail level: During inference, the level of detail in the image can be adjusted with a single hyperparameter, *gamma*.

> [!TIP]
> For more technical details of TCD, please refer to the [paper](https://huggingface.co/papers/2402.19159) or official [project page](https://mhh0318.github.io/tcd/).

For large models like SDXL, TCD is trained with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) to reduce memory usage. This is also useful because you can reuse LoRAs between different finetuned models, as long as they share the same base model, without further training.



This guide will show you how to perform inference with TCD-LoRAs for a variety of tasks like text-to-image and inpainting, as well as how you can easily combine TCD-LoRAs with other adapters. Choose one of the supported base models and its corresponding TCD-LoRA checkpoint from the table below to get started.

| Base model                                                                                      | TCD-LoRA checkpoint                                            |
|-------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
| [stable-diffusion-v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5)                  | [TCD-SD15](https://huggingface.co/h1t/TCD-SD15-LoRA)           |
| [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)       | [TCD-SD21-base](https://huggingface.co/h1t/TCD-SD21-base-LoRA) |
| [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) | [TCD-SDXL](https://huggingface.co/h1t/TCD-SDXL-LoRA)           |


Make sure you have [PEFT](https://github.com/huggingface/peft) installed for better LoRA support.

```bash
pip install -U peft
```

## General tasks

In this guide, let's use the [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline) and the [TCDScheduler](/docs/diffusers/main/en/api/schedulers/tcd#diffusers.TCDScheduler). Use the [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method to load the SDXL-compatible TCD-LoRA weights.

A few tips to keep in mind for TCD-LoRA inference are to:

- Keep the `num_inference_steps` between 4 and 50
- Set `eta` (used to control stochasticity at each step) between 0 and 1. You should use a higher `eta` when increasing the number of inference steps, but the downside is that a larger `eta` in [TCDScheduler](/docs/diffusers/main/en/api/schedulers/tcd#diffusers.TCDScheduler) leads to blurrier images. A value of 0.3 is recommended to produce good results.

<hfoptions id="tasks">
<hfoption id="text-to-image">

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "Painting of the orange cat Otto von Garfield, Count of Bismarck-Schönhausen, Duke of Lauenburg, Minister-President of Prussia. Depicted wearing a Prussian Pickelhaube and eating his favorite meal - lasagna."

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/demo_image.png)

</hfoption>

<hfoption id="inpainting">

```python
import torch
from diffusers import AutoPipelineForInpainting, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
base_model_id = "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = AutoPipelineForInpainting.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))

prompt = "a tiger sitting on a park bench"

image = pipe(
  prompt=prompt,
  image=init_image,
  mask_image=mask_image,
  num_inference_steps=8,
  guidance_scale=0,
  eta=0.3,
  strength=0.99,  # make sure to use `strength` below 1.0
  generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/inpainting_tcd.png)


</hfoption>
</hfoptions>

## Community models

TCD-LoRA also works with many community finetuned models and plugins. For example, load the [animagine-xl-3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0) checkpoint which is a community finetuned version of SDXL for generating anime images.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "cagliostrolab/animagine-xl-3.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "A man, clad in a meticulously tailored military uniform, stands with unwavering resolve. The uniform boasts intricate details, and his eyes gleam with determination. Strands of vibrant, windswept hair peek out from beneath the brim of his cap."

image = pipe(
    prompt=prompt,
    num_inference_steps=8,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/animagine_xl.png)

TCD-LoRA can also be combined with other LoRAs trained on different styles. For example, let's load the [TheLastBen/Papercut_SDXL](https://huggingface.co/TheLastBen/Papercut_SDXL) LoRA and combine it with the TCD-LoRA using the `~loaders.UNet2DConditionLoadersMixin.set_adapters` method.

> [!TIP]
> Check out the [Merge LoRAs](../tutorials/using_peft_for_inference#merge) guide to learn more about efficient merging methods.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
styled_lora_id = "TheLastBen/Papercut_SDXL"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id, adapter_name="tcd")
pipe.load_lora_weights(styled_lora_id, adapter_name="style")
pipe.set_adapters(["tcd", "style"], adapter_weights=[1.0, 1.0])

prompt = "papercut of a winter mountain, snow"

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/styled_lora.png)


## Adapters

TCD-LoRA is very versatile, and it can be combined with other adapter types like ControlNets, IP-Adapter, and AnimateDiff.

<hfoptions id="adapters">
<hfoption id="ControlNet">

### Depth ControlNet

```python
import torch
import numpy as np
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to(device)
feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")

def get_depth_map(image):
    image = feature_extractor(images=image, return_tensors="pt").pixel_values.to(device)
    with torch.no_grad(), torch.autocast(device):
        depth_map = depth_estimator(image).predicted_depth

    depth_map = torch.nn.functional.interpolate(
        depth_map.unsqueeze(1),
        size=(1024, 1024),
        mode="bicubic",
        align_corners=False,
    )
    depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_map = (depth_map - depth_min) / (depth_max - depth_min)
    image = torch.cat([depth_map] * 3, dim=1)

    image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
    image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
    return image

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-depth-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

controlnet = ControlNetModel.from_pretrained(
    controlnet_id,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()

pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "stormtrooper lecture, photorealistic"

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
depth_image = get_depth_map(image)

controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
    prompt,
    image=depth_image,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([depth_image, image], rows=1, cols=2)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/controlnet_depth_tcd.png)

### Canny ControlNet
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-canny-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

controlnet = ControlNetModel.from_pretrained(
    controlnet_id,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()

pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "ultrarealistic shot of a furry blue bird"

canny_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png")

controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
    prompt,
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([canny_image, image], rows=1, cols=2)
```
![](https://github.com/jabir-zheng/TCD/raw/main/assets/controlnet_canny_tcd.png)

> [!TIP]
> The inference parameters in this example might not work for all inputs, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the combination that works best.

</hfoption>
<hfoption id="IP-Adapter">

This example shows how to use the TCD-LoRA with the [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter/tree/main) and SDXL.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

from ip_adapter import IPAdapterXL

device = "cuda"
base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
image_encoder_path = "sdxl_models/image_encoder"
ip_ckpt = "sdxl_models/ip-adapter_sdxl.bin"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    variant="fp16"
)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

ip_model = IPAdapterXL(pipe, image_encoder_path, ip_ckpt, device)

ref_image = load_image("https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/main/assets/images/woman.png").resize((512, 512))

prompt = "best quality, high quality, wearing sunglasses"

image = ip_model.generate(
    pil_image=ref_image,
    prompt=prompt,
    scale=0.5,
    num_samples=1,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    seed=0,
)[0]

grid_image = make_image_grid([ref_image, image], rows=1, cols=2)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/ip_adapter.png)



</hfoption>
<hfoption id="AnimateDiff">

`AnimateDiff` allows animating images using Stable Diffusion models. TCD-LoRA can substantially accelerate this process without degrading image quality, and animations generated with TCD-LoRA and AnimateDiff have a more lucid outcome.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, TCDScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5")
pipe = AnimateDiffPipeline.from_pretrained(
    "frankjoshua/toonyou_beta6",
    motion_adapter=adapter,
).to("cuda")

# set TCDScheduler
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

# load TCD LoRA
pipe.load_lora_weights("h1t/TCD-SD15-LoRA", adapter_name="tcd")
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora")

pipe.set_adapters(["tcd", "motion-lora"], adapter_weights=[1.0, 1.2])

prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
generator = torch.manual_seed(0)
frames = pipe(
    prompt=prompt,
    num_inference_steps=5,
    guidance_scale=0,
    cross_attention_kwargs={"scale": 1},
    num_frames=24,
    eta=0.3,
    generator=generator
).frames[0]
export_to_gif(frames, "animation.gif")
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/animation_example.gif)

</hfoption>
</hfoptions>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/inference_with_tcd_lora.md" />

### Text-guided depth-to-image generation
https://huggingface.co/docs/diffusers/main/using-diffusers/depth2img.md

# Text-guided depth-to-image generation


The [StableDiffusionDepth2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/depth2img#diffusers.StableDiffusionDepth2ImgPipeline) lets you pass a text prompt and an initial image to condition the generation of new images. You can also pass a `depth_map` to preserve the image structure (a sketch of passing a precomputed depth map follows the example below). If no `depth_map` is provided, the pipeline automatically predicts the depth via an integrated [depth-estimation model](https://github.com/isl-org/MiDaS).

Start by creating an instance of the [StableDiffusionDepth2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/depth2img#diffusers.StableDiffusionDepth2ImgPipeline):

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image, make_image_grid

pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
```

Now pass your prompt to the pipeline. You can also pass a `negative_prompt` to prevent certain words from guiding how an image is generated:

```python
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = load_image(url)
prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"
image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

| Input                                                                           | Output                                                                                                                                |
|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/coco-cats.png" width="500"/> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/depth2img-tigers.png" width="500"/> |
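
Since the pipeline also accepts a precomputed `depth_map`, here is a minimal sketch of that workflow. It assumes the pipeline takes a depth tensor shaped `(batch, height, width)`, such as the `predicted_depth` output of the [Intel/dpt-hybrid-midas](https://huggingface.co/Intel/dpt-hybrid-midas) estimator, and handles resizing and normalization internally; treat it as a starting point rather than a canonical recipe.

```python
import torch
from transformers import DPTImageProcessor, DPTForDepthEstimation
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

# estimate depth with DPT; `predicted_depth` has shape (1, H, W)
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda")
feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")

init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
pixel_values = feature_extractor(images=init_image, return_tensors="pt").pixel_values.to("cuda")
with torch.no_grad():
    depth_map = depth_estimator(pixel_values).predicted_depth

pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

image = pipeline(
    prompt="two tigers",
    image=init_image,
    depth_map=depth_map,  # precomputed depth instead of the automatic estimate
    strength=0.7,
).images[0]
```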


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/depth2img.md" />

### OmniGen
https://huggingface.co/docs/diffusers/main/using-diffusers/omnigen.md

# OmniGen

OmniGen is an image generation model. Unlike existing text-to-image models, OmniGen is a single model designed to handle a variety of tasks (e.g., text-to-image, image editing, controllable generation). It has the following features:
- Minimalist model architecture, consisting of only a VAE and a transformer module, for joint modeling of text and images.
- Support for multimodal inputs. It can process any text-image mixed data as instructions for image generation, rather than relying solely on text.

For more information, please refer to the [paper](https://huggingface.co/papers/2409.11340).
This guide will walk you through using OmniGen for various tasks and use cases.

## Load model checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) method.

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
```

## Text-to-image

For text-to-image, pass a text prompt. By default, OmniGen generates a 1024x1024 image.
You can set the `height` and `width` parameters to generate images of a different size.

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt = "Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=3,
    generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image.save("output.png")
```

<div class="flex justify-center">
    <img src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png" alt="generated image"/>
</div>

## Image edit

OmniGen supports multimodal inputs. 
When the input includes an image, you need to add a placeholder `<img><|image_1|></img>` in the text prompt to represent the image. 
It is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original image.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image 

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt="<img><|image_1|></img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola."
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
    prompt=prompt, 
    input_images=input_images, 
    guidance_scale=2, 
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(222)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/edit.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">edited image</figcaption>
  </div>
</div>

OmniGen has some interesting features, such as visual reasoning, as shown in the example below.

```python
prompt="If the woman is thirsty, what should she take? Find it in the image and highlight it in blue. <img><|image_1|></img>"
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/edit.png")]
image = pipe(
    prompt=prompt, 
    input_images=input_images, 
    guidance_scale=2, 
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(0)
).images[0]
image.save("output.png")
```

<div class="flex justify-center">
    <img src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/reasoning.png" alt="generated image"/>
</div>

## Controllable generation

OmniGen can handle several classic computer vision tasks. As shown below, OmniGen can detect human skeletons in input images, which can be used as control conditions to generate new images.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image 

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt="Detect the skeleton of human in this image: <img><|image_1|></img>"
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/edit.png")]
image1 = pipe(
    prompt=prompt, 
    input_images=input_images, 
    guidance_scale=2, 
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(333)
).images[0]
image1.save("image1.png")

prompt="Generate a new photo using the following picture and text as conditions: <img><|image_1|></img>\n A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him."
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/skeletal.png")]
image2 = pipe(
    prompt=prompt, 
    input_images=input_images, 
    guidance_scale=2, 
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(333)
).images[0]
image2.save("image2.png")
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/edit.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/skeletal.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">detected skeleton</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/skeletal2img.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">skeleton to image</figcaption>
  </div>
</div>


OmniGen can also directly use relevant information from input images to generate new images.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image 

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt="Following the pose of this image <img><|image_1|></img>, generate a new photo: A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him."
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/edit.png")]
image = pipe(
    prompt=prompt, 
    input_images=input_images, 
    guidance_scale=2, 
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(0)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/same_pose.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## ID and object preserving

OmniGen can generate new images based on the people and objects in an input image, and it supports passing multiple input images simultaneously.
Additionally, OmniGen can extract desired objects from an image containing multiple objects based on instructions.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image 

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt="A man and a woman are sitting at a classroom desk. The man is the man with yellow hair in <img><|image_1|></img>. The woman is the woman on the left of <img><|image_2|></img>"
input_image_1 = load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/3.png")
input_image_2 = load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/4.png")
input_images=[input_image_1, input_image_2]
image = pipe(
    prompt=prompt, 
    input_images=input_images, 
    height=1024,
    width=1024,
    guidance_scale=2.5, 
    img_guidance_scale=1.6,
    generator=torch.Generator(device="cpu").manual_seed(666)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/3.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">input_image_1</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/4.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">input_image_2</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/id2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

```py
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image 

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

prompt="A woman is walking down the street, wearing a white long-sleeve blouse with lace details on the sleeves, paired with a blue pleated skirt. The woman is <img><|image_1|></img>. The long-sleeve blouse and a pleated skirt are <img><|image_2|></img>."
input_image_1 = load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/emma.jpeg")
input_image_2 = load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/dress.jpg")
input_images=[input_image_1, input_image_2]
image = pipe(
    prompt=prompt, 
    input_images=input_images, 
    height=1024,
    width=1024,
    guidance_scale=2.5, 
    img_guidance_scale=1.6,
    generator=torch.Generator(device="cpu").manual_seed(666)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/emma.jpeg"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">person image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/dress.jpg"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">clothing image</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/tryon.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
  </div>
</div>

## Optimization when using multiple images 

For the text-to-image task, OmniGen requires minimal memory and time (9GB of memory and 31s for a 1024x1024 image on an A800 GPU).
However, when input images are used, the computational cost increases.

Here are some guidelines to help you reduce computational costs when using multiple images. The experiments below were conducted on an A800 GPU with two input images.

Like other pipelines, you can reduce memory usage by offloading the model with `pipe.enable_model_cpu_offload()` or `pipe.enable_sequential_cpu_offload()`.
In OmniGen, you can also decrease computational overhead by reducing the `max_input_image_size` (see the sketch after the table below).
The memory consumption for different input image sizes is shown in the table below:

| Setting                     | Memory Usage |
|-----------------------------|--------------|
| `max_input_image_size=1024` | 40GB         |
| `max_input_image_size=512`  | 17GB         |
| `max_input_image_size=256`  | 14GB         |
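
As a rough illustration, the following sketch combines CPU offloading with a smaller input image size. It assumes `max_input_image_size` can be passed directly to the pipeline call; check the `OmniGenPipeline` signature in your installed version of Diffusers.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload submodules to the CPU when they are idle

prompt = "A man and a woman are sitting at a classroom desk. The man is the man with yellow hair in <img><|image_1|></img>. The woman is the woman on the left of <img><|image_2|></img>"
input_images = [
    load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/3.png"),
    load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/4.png"),
]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    max_input_image_size=512,  # downscale input images to cut memory usage
    guidance_scale=2.5,
    img_guidance_scale=1.6,
    generator=torch.Generator(device="cpu").manual_seed(666),
).images[0]
image.save("output.png")
```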



<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/omnigen.md" />

### Textual Inversion
https://huggingface.co/docs/diffusers/main/using-diffusers/textual_inversion_inference.md

# Textual Inversion

[Textual Inversion](https://huggingface.co/papers/2208.01618) is a method for generating personalized images of a concept. It works by fine-tuning a model's word embeddings on 3-5 images of the concept (for example, pixel art), which is associated with a unique token (`<sks>`). This allows you to use the `<sks>` token in your prompt to trigger the model to generate pixel art images.

Textual Inversion weights are very lightweight and typically only a few KBs because they're only word embeddings. However, this also means the word embeddings need to be loaded after loading a model with [from_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16
).to("cuda")
```

Load the word embeddings with [load_textual_inversion()](/docs/diffusers/main/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion) and include the unique token in the prompt to activate its generation.

```py
pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork")
prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, <gta5-artwork> style"
pipeline(prompt).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_txt_embed.png" />
</div>

Textual Inversion can also be trained to learn *negative embeddings* to steer generation away from unwanted characteristics such as "blurry" or "ugly". It is useful for improving image quality.

EasyNegative is a widely used negative embedding that contains multiple learned negative concepts. Load the negative embeddings and specify the file name and token associated with the negative embeddings. Pass the token to `negative_prompt` in your pipeline to activate it.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_textual_inversion(
    "EvilEngine/easynegative",
    weight_name="easynegative.safetensors",
    token="easynegative"
)
prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration"
negative_prompt = "easynegative"
pipeline(prompt, negative_prompt=negative_prompt).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png" />
</div>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/using-diffusers/textual_inversion_inference.md" />

### Evaluating Diffusion Models
https://huggingface.co/docs/diffusers/main/conceptual/evaluation.md

# Evaluating Diffusion Models

<a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/evaluation.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

> [!TIP]
> This document has grown outdated given the emergence of dedicated evaluation frameworks for image generation diffusion models. Please check
> out works like [HEIM](https://crfm.stanford.edu/helm/heim/latest/), [T2I-Compbench](https://huggingface.co/papers/2307.06350),
> and [GenEval](https://huggingface.co/papers/2310.11513).

Evaluation of generative models like [Stable Diffusion](https://huggingface.co/docs/diffusers/stable_diffusion) is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other?

Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision.
However, quantitative metrics don't necessarily correspond to image quality. So, usually, a combination
of both qualitative and quantitative evaluations provides a stronger signal when choosing one model
over the other.

In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside `diffusers`.

The methods shown in this document can also be used to evaluate different [noise schedulers](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview) keeping the underlying generation model fixed.
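
For example, here is a minimal sketch of holding the model fixed and only swapping the scheduler; the checkpoint and the scheduler choice are purely illustrative, and the generated images would then be scored with the metrics covered later in this document.

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

# same model, different scheduler (DDIM here is only an illustrative choice)
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

generator = torch.manual_seed(0)
images = pipe(["a corgi"], generator=generator, output_type="np").images
# score `images` with CLIP score, FID, etc., then repeat with a different scheduler
```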

## Scenarios

We cover Diffusion models with the following pipelines:

- Text-guided image generation (such as the [`StableDiffusionPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img)).
- Text-guided image generation, additionally conditioned on an input image (such as the [`StableDiffusionImg2ImgPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/img2img) and [`StableDiffusionInstructPix2PixPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pix2pix)).
- Class-conditioned image generation models (such as the [`DiTPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/dit)).

## Qualitative Evaluation

Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics.
DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by [Imagen](https://imagen.research.google/) and [Parti](https://parti.research.google/) respectively.

From the [official Parti website](https://parti.research.google/):

> PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects.

![parti-prompts](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts.png)

PartiPrompts has the following columns:

- Prompt
- Category of the prompt (such as “Abstract”, “World Knowledge”, etc.)
- Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.)

These benchmarks allow for side-by-side human evaluation of different image generation models.

For this, the 🧨 Diffusers team has built **Open Parti Prompts**, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models:
- [Open Parti Prompts Game](https://huggingface.co/spaces/OpenGenAI/open-parti-prompts): For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best.
- [Open Parti Prompts Leaderboard](https://huggingface.co/spaces/OpenGenAI/parti-prompts-leaderboard): The leaderboard comparing the currently best open-sourced diffusion models to each other.

To manually compare images, let’s see how we can use `diffusers` on a couple of PartiPrompts.

Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a [dataset](https://huggingface.co/datasets/nateraw/parti-prompts).

```python
from datasets import load_dataset

# prompts = load_dataset("nateraw/parti-prompts", split="train")
# prompts = prompts.shuffle()
# sample_prompts = [prompts[i]["Prompt"] for i in range(5)]

# Fixing these sample prompts in the interest of reproducibility.
sample_prompts = [
    "a corgi",
    "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky",
    "a car with no windows",
    "a cube made of porcupine",
    'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.',
]
```

Now we can use these prompts to generate some images using Stable Diffusion ([v1-4 checkpoint](https://huggingface.co/CompVis/stable-diffusion-v1-4)):

```python
import torch
from diffusers import StableDiffusionPipeline

# load the checkpoint used to generate the images shown below
sd_pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

seed = 0
generator = torch.manual_seed(seed)

images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images
```

![parti-prompts-14](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-14.png)

We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5)), yields:

![parti-prompts-15](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-15.png)

Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For
more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers.

> [!TIP]
> It is useful to look at some inference samples while a model is training to measure the
> training progress. In our [training scripts](https://github.com/huggingface/diffusers/tree/main/examples/), we support this utility with additional support for
> logging to TensorBoard and Weights & Biases.

## Quantitative Evaluation

In this section, we will walk you through how to evaluate three different diffusion pipelines using:

- CLIP score
- CLIP directional similarity
- FID

### Text-guided image generation

[CLIP score](https://huggingface.co/papers/2104.08718) measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept "compatibility". Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement.

Let's first load a [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline):

```python
from diffusers import StableDiffusionPipeline
import torch

model_ckpt = "CompVis/stable-diffusion-v1-4"
sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda")
```

Generate some images with multiple prompts:

```python
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "A high tech solarpunk utopia in the Amazon rainforest",
    "A pikachu fine dining with a view to the Eiffel Tower",
    "A mecha robot in a favela in expressionist style",
    "an insect robot preparing a delicious meal",
    "A small cabin on top of a snowy mountain in the style of Disney, artstation",
]

images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images

print(images.shape)
# (6, 512, 512, 3)
```

And then, we calculate the CLIP score.

```python
from torchmetrics.functional.multimodal import clip_score
from functools import partial

clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16")

def calculate_clip_score(images, prompts):
    images_int = (images * 255).astype("uint8")
    clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach()
    return round(float(clip_score), 4)

sd_clip_score = calculate_clip_score(images, prompts)
print(f"CLIP score: {sd_clip_score}")
# CLIP score: 35.7038
```

In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt.
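
As a sketch, averaging over multiple images per prompt could look like this, assuming the pipeline returns the generated images grouped by prompt:

```python
num_images_per_prompt = 2
images = sd_pipeline(prompts, num_images_per_prompt=num_images_per_prompt, output_type="np").images

# repeat each prompt once per generated image so that image/prompt pairs line up
expanded_prompts = [prompt for prompt in prompts for _ in range(num_images_per_prompt)]
avg_clip_score = calculate_clip_score(images, expanded_prompts)
print(f"Average CLIP score: {avg_clip_score}")
```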

Now, if we wanted to compare two checkpoints compatible with the [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline), we should pass a generator while calling the pipeline. First, we generate images with a
fixed seed with the [v1-4 Stable Diffusion checkpoint](https://huggingface.co/CompVis/stable-diffusion-v1-4):

```python
seed = 0
generator = torch.manual_seed(seed)

images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
```

Then we load the [v1-5 checkpoint](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) to generate images:

```python
model_ckpt_1_5 = "stable-diffusion-v1-5/stable-diffusion-v1-5"
sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=torch.float16).to("cuda")

images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
```

And finally, we compare their CLIP scores:

```python
sd_clip_score_1_4 = calculate_clip_score(images, prompts)
print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}")
# CLIP Score with v-1-4: 34.9102

sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts)
print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}")
# CLIP Score with v-1-5: 36.2137
```

It seems like the [v1-5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse.

> [!WARNING]
> By construction, there are some limitations in this score. The captions in the training dataset
> were crawled from the web and extracted from `alt` and similar tags associated with an image on the internet.
> They are not necessarily representative of what a human being would use to describe an image. Hence we
> had to "engineer" some prompts here.

### Image-conditioned text-to-image generation

In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the [StableDiffusionInstructPix2PixPipeline](/docs/diffusers/main/en/api/pipelines/pix2pix#diffusers.StableDiffusionInstructPix2PixPipeline), as an example. It takes an edit instruction as an input prompt and an input image to be edited.

Here is one example:

![edit-instruction](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-instruction.png)

One strategy to evaluate such a model is to measure the consistency of the change between the two images (in [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) space) with the change between the two image captions (as shown in [CLIP-Guided Domain Adaptation of Image Generators](https://huggingface.co/papers/2108.00946)). This is referred to as the "**CLIP directional similarity**".

- Caption 1 corresponds to the input image (image 1) that is to be edited.
- Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction.

Following is a pictorial overview:

![edit-consistency](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-consistency.png)

We have prepared a mini dataset to implement this metric. Let's first load the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train")
dataset.features
```

```bash
{'input': Value(dtype='string', id=None),
 'edit': Value(dtype='string', id=None),
 'output': Value(dtype='string', id=None),
 'image': Image(decode=True, id=None)}
```

Here we have:

- `input` is a caption corresponding to the `image`.
- `edit` denotes the edit instruction.
- `output` denotes the modified caption reflecting the `edit` instruction.

Let's take a look at a sample.

```python
idx = 0
print(f"Original caption: {dataset[idx]['input']}")
print(f"Edit instruction: {dataset[idx]['edit']}")
print(f"Modified caption: {dataset[idx]['output']}")
```

```bash
Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills'
Edit instruction: make the isles all white marble
Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills'
```

And here is the image:

```python
dataset[idx]["image"]
```

![edit-dataset](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-dataset.png)

We will first edit the images of our dataset with the edit instruction and compute the directional similarity.

Let's first load the [StableDiffusionInstructPix2PixPipeline](/docs/diffusers/main/en/api/pipelines/pix2pix#diffusers.StableDiffusionInstructPix2PixPipeline):

```python
from diffusers import StableDiffusionInstructPix2PixPipeline

instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
```

Now, we perform the edits:

```python
import numpy as np


def edit_image(input_image, instruction):
    image = instruct_pix2pix_pipeline(
        instruction,
        image=input_image,
        output_type="np",
        generator=generator,
    ).images[0]
    return image

input_images = []
original_captions = []
modified_captions = []
edited_images = []

for idx in range(len(dataset)):
    input_image = dataset[idx]["image"]
    edit_instruction = dataset[idx]["edit"]
    edited_image = edit_image(input_image, edit_instruction)

    input_images.append(np.array(input_image))
    original_captions.append(dataset[idx]["input"])
    modified_captions.append(dataset[idx]["output"])
    edited_images.append(edited_image)
```

To measure the directional similarity, we first load CLIP's image and text encoders:

```python
from transformers import (
    CLIPTokenizer,
    CLIPTextModelWithProjection,
    CLIPVisionModelWithProjection,
    CLIPImageProcessor,
)

clip_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(clip_id)
text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to("cuda")
image_processor = CLIPImageProcessor.from_pretrained(clip_id)
image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to("cuda")
```

Notice that we are using a particular CLIP checkpoint, i.e., `openai/clip-vit-large-patch14`. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/clip).

Next, we prepare a PyTorch `nn.Module` to compute directional similarity:

```python
import torch.nn as nn
import torch.nn.functional as F


class DirectionalSimilarity(nn.Module):
    def __init__(self, tokenizer, text_encoder, image_processor, image_encoder):
        super().__init__()
        self.tokenizer = tokenizer
        self.text_encoder = text_encoder
        self.image_processor = image_processor
        self.image_encoder = image_encoder

    def preprocess_image(self, image):
        image = self.image_processor(image, return_tensors="pt")["pixel_values"]
        return {"pixel_values": image.to("cuda")}

    def tokenize_text(self, text):
        inputs = self.tokenizer(
            text,
            max_length=self.tokenizer.model_max_length,
            padding="max_length",
            truncation=True,
            return_tensors="pt",
        )
        return {"input_ids": inputs.input_ids.to("cuda")}

    def encode_image(self, image):
        preprocessed_image = self.preprocess_image(image)
        image_features = self.image_encoder(**preprocessed_image).image_embeds
        image_features = image_features / image_features.norm(dim=1, keepdim=True)
        return image_features

    def encode_text(self, text):
        tokenized_text = self.tokenize_text(text)
        text_features = self.text_encoder(**tokenized_text).text_embeds
        text_features = text_features / text_features.norm(dim=1, keepdim=True)
        return text_features

    def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two):
        sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one)
        return sim_direction

    def forward(self, image_one, image_two, caption_one, caption_two):
        img_feat_one = self.encode_image(image_one)
        img_feat_two = self.encode_image(image_two)
        text_feat_one = self.encode_text(caption_one)
        text_feat_two = self.encode_text(caption_two)
        directional_similarity = self.compute_directional_similarity(
            img_feat_one, img_feat_two, text_feat_one, text_feat_two
        )
        return directional_similarity
```

Let's put `DirectionalSimilarity` to use now.

```python
dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder)
scores = []

for i in range(len(input_images)):
    original_image = input_images[i]
    original_caption = original_captions[i]
    edited_image = edited_images[i]
    modified_caption = modified_captions[i]

    similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption)
    scores.append(float(similarity_score.detach().cpu()))

print(f"CLIP directional similarity: {np.mean(scores)}")
# CLIP directional similarity: 0.0797976553440094
```

Like the CLIP Score, the higher the CLIP directional similarity, the better it is.

Note that the `StableDiffusionInstructPix2PixPipeline` exposes two arguments, `image_guidance_scale` and `guidance_scale`, that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see their impact on the directional similarity.

We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do `F.cosine_similarity(img_feat_two, img_feat_one)`. For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score.
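
For example, a small sketch that reuses the CLIP image encoder wrapped in `DirectionalSimilarity` above:

```python
# compare the CLIP image embeddings of an original image and its edited version
img_feat_one = dir_similarity.encode_image(input_images[0])
img_feat_two = dir_similarity.encode_image(edited_images[0])

image_similarity = F.cosine_similarity(img_feat_two, img_feat_one)
print(f"Image-image similarity: {float(image_similarity.detach().cpu())}")
```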

We can use these metrics for similar pipelines such as the [`StableDiffusionPix2PixZeroPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline).

> [!TIP]
> Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased.

***Extending metrics like IS, FID (discussed later), or KID can be difficult*** when the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction.

***These metrics (IS, FID, and KID), however, are well suited for evaluating class-conditioned models such as [DiT](https://huggingface.co/docs/diffusers/main/en/api/pipelines/dit), which was pre-trained conditioned on the ImageNet-1k classes.***

### Class-conditioned image generation

Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://huggingface.co/papers/1706.08500)). We show how to compute it with the [`DiTPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/dit), which uses the [DiT model](https://huggingface.co/papers/2212.09748) under the hood.

FID aims to measure how similar two datasets of images are. As per [this resource](https://mmgeneration.readthedocs.io/en/latest/quick_run.html#fid):

> Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network.

These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets.

Let's first download a few images from the ImageNet-1k training set:

```python
from zipfile import ZipFile
import requests


def download(url, local_filepath):
    r = requests.get(url)
    with open(local_filepath, "wb") as f:
        f.write(r.content)
    return local_filepath

dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip"
local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1])

with ZipFile(local_filepath, "r") as zipper:
    zipper.extractall(".")
```

```python
from PIL import Image
import os
import numpy as np

dataset_path = "sample-imagenet-images"
image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)])

real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths]
```

These are 10 images from the following ImageNet-1k classes: "cassette_player", "chain_saw" (x2), "church", "gas_pump" (x3), "parachute" (x2), and "tench".

<p align="center">
    <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/real-images.png" alt="real-images"><br>
    <em>Real images.</em>
</p>

Now that the images are loaded, let's apply some lightweight pre-processing so we can use them for the FID calculation.

```python
from torchvision.transforms import functional as F
import torch


def preprocess_image(image):
    image = torch.tensor(image).unsqueeze(0)
    image = image.permute(0, 3, 1, 2) / 255.0
    return F.center_crop(image, (256, 256))

real_images = torch.cat([preprocess_image(image) for image in real_images])
print(real_images.shape)
# torch.Size([10, 3, 256, 256])
```

We now load the [`DiTPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/dit) to generate images conditioned on the above-mentioned classes.

```python
from diffusers import DiTPipeline, DPMSolverMultistepScheduler

dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config)
dit_pipeline = dit_pipeline.to("cuda")

seed = 0
generator = torch.manual_seed(seed)


words = [
    "cassette player",
    "chainsaw",
    "chainsaw",
    "church",
    "gas pump",
    "gas pump",
    "gas pump",
    "parachute",
    "parachute",
    "tench",
]

class_ids = dit_pipeline.get_label_ids(words)
output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np")

fake_images = output.images
fake_images = torch.tensor(fake_images)
fake_images = fake_images.permute(0, 3, 1, 2)
print(fake_images.shape)
# torch.Size([10, 3, 256, 256])
```

Now, we can compute the FID using [`torchmetrics`](https://torchmetrics.readthedocs.io/).

```python
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(normalize=True)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)

print(f"FID: {float(fid.compute())}")
# FID: 177.7147216796875
```

The lower the FID, the better it is. Several things can influence FID here:

- Number of images (both real and fake)
- Randomness induced in the diffusion process
- Number of inference steps in the diffusion process
- The scheduler being used in the diffusion process

For the last two points, it is therefore good practice to run the evaluation across different seeds and inference steps, and then report an average result, as sketched below.
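
A rough sketch of what that could look like, reusing the `dit_pipeline`, `class_ids`, and `real_images` objects defined above (treat this as an illustration, not a prescribed protocol):

```python
# Average FID over a few seeds to reduce the variance introduced by the
# randomness of the diffusion process. More seeds and more images give a
# more reliable estimate.
seeds = [0, 1, 2]
fid_scores = []

for seed in seeds:
    generator = torch.manual_seed(seed)
    output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np")
    fake_images = torch.tensor(output.images).permute(0, 3, 1, 2)

    fid = FrechetInceptionDistance(normalize=True)
    fid.update(real_images, real=True)
    fid.update(fake_images, real=False)
    fid_scores.append(float(fid.compute()))

print(f"Mean FID over {len(seeds)} seeds: {np.mean(fid_scores)}")
```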

> [!WARNING]
> FID results tend to be fragile as they depend on a lot of factors:
>
> * The specific Inception model used during computation.
> * The implementation accuracy of the computation.
> * The image format (not the same if we start from PNGs vs JPGs).
>
> Keeping that in mind, FID is often most useful when comparing similar runs, but it is
> hard to reproduce paper results unless the authors carefully disclose the FID
> measurement code.
>
> These points apply to other related metrics too, such as KID and IS.

As a final step, let's visually inspect the `fake_images`.

<p align="center">
    <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/fake-images.png" alt="fake-images"><br>
    <em>Fake images.</em>
</p>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/conceptual/evaluation.md" />

### Philosophy
https://huggingface.co/docs/diffusers/main/conceptual/philosophy.md

# Philosophy

🧨 Diffusers provides **state-of-the-art** pretrained diffusion models across multiple modalities.
Its purpose is to serve as a **modular toolbox** for both inference and training.

We aim to build a library that stands the test of time and therefore take API design very seriously.

In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on [PyTorch's Design Principles](https://pytorch.org/docs/stable/community/design.html#pytorch-design-philosophy). Let's go over the most important ones:

## Usability over Performance

- While Diffusers has many built-in performance-enhancing features (see [Memory and Speed](https://huggingface.co/docs/diffusers/optimization/fp16)), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library.
- Diffusers aims to be a **light-weight** package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as `accelerate`, `safetensors`, `onnx`, etc...). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages.
- Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions and advanced PyTorch operators are often not desired.

## Simple over easy

As PyTorch states, **explicit is better than implicit** and **simple is better than complex**. This design philosophy is reflected in multiple parts of the library:
- We follow PyTorch's API with methods like [`DiffusionPipeline.to`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.to) to let the user handle device management.
- Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible.
- Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop (a minimal sketch of such a loop follows this list). However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers.
- Separately trained components of the diffusion pipeline, *e.g.* the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training
is very simple thanks to Diffusers' ability to separate single components of the diffusion pipeline.
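
To make the "unrolled denoising loop" point concrete, here is a minimal sketch of what writing it yourself looks like (the checkpoint name is only an example):

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Model and scheduler are loaded and used as separate, independent components.
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")

scheduler.set_timesteps(50)
sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # the model predicts the noise residual
    # the scheduler computes the slightly less noisy sample for the next step
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```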

## Tweakable, contributor-friendly over abstraction

For large parts of the library, Diffusers adopts an important design principle of the [Transformers library](https://github.com/huggingface/transformers), which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as [Don't repeat yourself (DRY)](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers.
Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable.
**However**, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because:
- Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions.
- Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions.
- Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel.

At Hugging Face, we call this design the **single-file policy** which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look
at [this blog post](https://huggingface.co/blog/transformers-design-philosophy).

In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don't follow this design fully for diffusion models is that almost all diffusion pipelines, such
as [DDPM](https://huggingface.co/docs/diffusers/api/pipelines/ddpm), [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview#stable-diffusion-pipelines), [unCLIP (DALL·E 2)](https://huggingface.co/docs/diffusers/api/pipelines/unclip), and [Imagen](https://imagen.research.google/), rely on the same diffusion model, the [UNet](https://huggingface.co/docs/diffusers/api/models/unet2d-cond).

Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗.
We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️  to hear it [directly on GitHub](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).

## Design Philosophy in Details

Now, let's look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: [pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines), [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models), and [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
Let's walk through more in-detail design decisions for each class.

### Pipelines

Pipelines are designed to be easy to use (therefore do not follow [*Simple over easy*](#simple-over-easy) 100%), are not feature complete, and should loosely be seen as examples of how to use [models](#models) and [schedulers](#schedulers) for inference.

The following design principles are followed:
- Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for [`src/diffusers/pipelines/stable_diffusion`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion). If pipelines share similar functionality, one can make use of the [# Copied from mechanism](https://github.com/huggingface/diffusers/blob/125d783076e5bd9785beb05367a2d2566843a271/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L251).
- Pipelines all inherit from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline).
- Every pipeline consists of different model and scheduler components that are documented in the [`model_index.json` file](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/model_index.json), are accessible under the same name as attributes of the pipeline, and can be shared between pipelines with the [`DiffusionPipeline.components`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.components) function (see the short example after this list).
- Every pipeline should be loadable via the [`DiffusionPipeline.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) function.
- Pipelines should be used **only** for inference.
- Pipelines should be very readable, self-explanatory, and easy to tweak.
- Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs.
- Pipelines are **not** intended to be feature-complete user interfaces. For feature-complete user interfaces one should rather have a look at [InvokeAI](https://github.com/invoke-ai/InvokeAI), [Diffuzers](https://github.com/abhishekkrthakur/diffuzers), and [lama-cleaner](https://github.com/Sanster/lama-cleaner).
- Every pipeline should have one and only one way to run it via a `__call__` method. The naming of the `__call__` arguments should be shared across all pipelines.
- Pipelines should be named after the task they are intended to solve.
- In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file.
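
As a quick, illustrative sketch of the component-access and sharing principles above (the checkpoint name is only an example):

```python
from diffusers import DiffusionPipeline, StableDiffusionImg2ImgPipeline

# Load a full pipeline; its components are listed in the repository's model_index.json.
text2img = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

# Every component is accessible as an attribute of the pipeline ...
print(text2img.unet.config.sample_size)

# ... and components can be reused by another pipeline without re-loading the weights.
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
```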

### Models

Models are designed as configurable toolboxes that are natural extensions of [PyTorch's Module class](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). They only partly follow the **single-file policy**.

The following design principles are followed:
- Models correspond to **a type of model architecture**. *E.g.* the [UNet2DConditionModel](/docs/diffusers/main/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel) class is used for all UNet variations that expect 2D image inputs and are conditioned on some context (a minimal loading example follows this list).
- All models can be found in [`src/diffusers/models`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and every model architecture shall be defined in its own file, e.g. [`unets/unet_2d_condition.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_condition.py), [`transformers/transformer_2d.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_2d.py), etc...
- Models **do not** follow the single-file policy and should make use of smaller model building blocks, such as [`attention.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py), [`resnet.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/resnet.py), [`embeddings.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py), etc... **Note**: This is in stark contrast to Transformers' modeling files and shows that models do not really follow the single-file policy.
- Models intend to expose complexity, just like PyTorch's `Module` class, and give clear error messages.
- Models all inherit from `ModelMixin` and `ConfigMixin`.
- Models can be optimized for performance when doing so doesn’t demand major code changes, keeps backward compatibility, and gives a significant memory or compute gain.
- Models should by default have the highest precision and lowest performance setting.
- To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different.
- Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and "foreseeing" future changes, *e.g.* it is usually better to add `string` "...type" arguments that can easily be extended to new future types instead of boolean `is_..._type` arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work.
- The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and
readable long-term, such as [UNet blocks](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unets/unet_2d_blocks.py) and [Attention processors](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
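
For illustration, loading a single model component of a pipeline typically looks like this (the repository id is only an example):

```python
from diffusers import UNet2DConditionModel

# Load just the UNet of a text-to-image pipeline from its "unet" subfolder.
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)

# Models inherit from ModelMixin and ConfigMixin, so their configuration is inspectable.
print(unet.config.cross_attention_dim)
```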

### Schedulers

Schedulers are responsible for guiding the denoising process for inference as well as for defining a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the **single-file policy**.

The following design principles are followed:
- All schedulers are found in [`src/diffusers/schedulers`](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers).
- Schedulers are **not** allowed to import from large utils files and shall be kept very self-contained.
- One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper).
- If schedulers share similar functionalities, we can make use of the `# Copied from` mechanism.
- Schedulers all inherit from `SchedulerMixin` and `ConfigMixin`.
- Schedulers can be easily swapped out with the [`ConfigMixin.from_config`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method as explained in detail [here](../using-diffusers/schedulers).
- Every scheduler has to have a `set_timesteps`, and a `step` function. `set_timesteps(num_inference_steps, ...)` has to be called before every denoising process, *i.e.* before `step(...)` is called (see the sketch after this list).
- Every scheduler exposes the timesteps to be "looped over" via a `timesteps` attribute, which is an array of timesteps the model will be called upon.
- The `step(...)` function takes a predicted model output and the "current" sample (x_t) and returns the "previous", slightly more denoised sample (x_t-1).
- Given the complexity of diffusion schedulers, the `step` function does not expose all the complexity and can be a bit of a "black box".
- In almost all cases, novel schedulers shall be implemented in a new scheduling file.
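
A small sketch of both roles of a scheduler, using `DDPMScheduler` as an example (this is illustrative, not a normative API reference):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

# Training side: the scheduler defines the noise schedule used to corrupt clean samples.
clean_images = torch.randn(1, 3, 32, 32)
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (1,))
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)

# Inference side: `set_timesteps(...)` is called before the denoising loop and
# populates the `timesteps` attribute that the loop iterates over with `step(...)`.
scheduler.set_timesteps(num_inference_steps=50)
print(scheduler.timesteps)
```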

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/conceptual/philosophy.md" />

### How to contribute to Diffusers 🧨
https://huggingface.co/docs/diffusers/main/conceptual/contribution.md

# How to contribute to Diffusers 🧨

We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it!

Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. <a href="https://Discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>

Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions. We also recommend you become familiar with the [ethical guidelines](https://huggingface.co/docs/diffusers/conceptual/ethical_guidelines) that guide our project and ask you to adhere to the same principles of transparency and responsibility.

We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered.

## Overview

You can contribute in many ways ranging from answering questions on issues and discussions to adding new diffusion models to the core library.

In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community.

* 1. Asking and answering questions on [the Diffusers discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers) or on [Discord](https://discord.gg/G7tWnz98XR).
* 2. Opening new issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues/new/choose) or new discussions on [the GitHub Discussions tab](https://github.com/huggingface/diffusers/discussions/new/choose).
* 3. Answering issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues) or discussions on [the GitHub Discussions tab](https://github.com/huggingface/diffusers/discussions).
* 4. Fix a simple issue, marked by the "Good first issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
* 5. Contribute to the [documentation](https://github.com/huggingface/diffusers/tree/main/docs/source).
* 6. Contribute a [Community Pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3Acommunity-examples).
* 7. Contribute to the [examples](https://github.com/huggingface/diffusers/tree/main/examples).
* 8. Fix a more difficult issue, marked by the "Good second issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22).
* 9. Add a new pipeline, model, or scheduler, see ["New Pipeline/Model"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) and ["New scheduler"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) issues. For this contribution, please have a look at [Design Philosophy](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md).

As said before, **all contributions are valuable to the community**.
In the following, we will explain each contribution a bit more in detail.

For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in [Opening a pull request](#how-to-open-a-pr).

### 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord

Any question or comment related to the Diffusers library can be asked on the [discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/) or on [Discord](https://discord.gg/G7tWnz98XR). Such questions and comments include (but are not limited to):
- Reports of training or inference experiments in an attempt to share knowledge
- Presentation of personal projects
- Questions about non-official training examples
- Project proposals
- General feedback
- Paper summaries
- Asking for help on personal projects that build on top of the Diffusers library
- General questions
- Ethical questions regarding diffusion models
- ...

Every question that is asked on the forum or on Discord actively encourages the community to publicly
share knowledge and might very well help a beginner in the future who has the same question you're
having. Please do pose any questions you might have.
In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from.

**Please** keep in mind that the more effort you put into asking or answering a question, the higher
the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database.
In short, a high quality question or answer is *precise*, *concise*, *relevant*, *easy-to-understand*, *accessible*, and *well-formatted/well-posed*. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section.

**NOTE about channels**:
[*The forum*](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it's easier to look up questions and answers that were posted some time ago.
In addition, questions and answers posted in the forum can easily be linked to.
In contrast, *Discord* has a chat-like format that invites fast back-and-forth communication.
While it will most likely take less time for you to get an answer to your question on Discord, your
question won't be visible anymore over time. Also, it's much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers.

### 2. Opening new issues on the GitHub issues tab

The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of
the problems they encounter. So thank you for reporting an issue.

Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design.

In a nutshell, this means that everything that is **not** related to the **code of the Diffusers library** (including the documentation) should **not** be asked on GitHub, but rather on either the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR).

**Please consider the following guidelines when opening a new issue**:
- Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues).
- Please never report a new issue on another (related) issue. If another issue is highly related, please
open a new issue nevertheless and link to the related issue.
- Make sure your issue is written in English. Please use one of the great, free online translation services, such as [DeepL](https://www.deepl.com/translator) to translate from your native language to English if you are not comfortable in English.
- Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that `python -c "import diffusers; print(diffusers.__version__)"` matches or is higher than the latest Diffusers version.
- Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues.

New issues usually include the following.

#### 2.1. Reproducible, minimal bug reports

A bug report should always have a reproducible code snippet and be as minimal and concise as possible.
This means in more detail:
- Narrow the bug down as much as you can, **do not just dump your whole code file**.
- Format your code.
- Do not include any external libraries except for those Diffusers depends on.
- **Always** provide all necessary information about your environment; for this, you can run: `diffusers-cli env` in your shell and copy-paste the displayed information to the issue.
- Explain the issue. If the reader doesn't know what the issue is and why it is an issue, (s)he cannot solve it.
- **Always** make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell.
- If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the [Hub](https://huggingface.co) to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible.

For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section.

You can open a bug report [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&projects=&template=bug-report.yml).

#### 2.2. Feature requests

A world-class feature request addresses the following points:

1. Motivation first:
* Is it related to a problem/frustration with the library? If so, please explain
why. Providing a code snippet that demonstrates the problem is best.
* Is it related to something you would need for a project? We'd love to hear
about it!
* Is it something you worked on and think could benefit the community?
Awesome! Tell us what problem it solved for you.
2. Write a *full paragraph* describing the feature;
3. Provide a **code snippet** that demonstrates its future use;
4. In case this is related to a paper, please attach a link;
5. Attach any additional information (drawings, screenshots, etc.) you think may help.

You can open a feature request [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=).

#### 2.3 Feedback

Feedback about the library design and why it is good or bad helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design, please have a look [here](https://huggingface.co/docs/diffusers/conceptual/philosophy). If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too strictly, hence restricting use cases, also explain why and how it should be changed.
If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions.

You can open an issue about feedback [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=).

#### 2.4 Technical questions

Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on
why this part of the code is difficult to understand.

You can open an issue about a technical question [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&template=bug-report.yml).

#### 2.5 Proposal to add a new model, scheduler, or pipeline

If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information:

* Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release.
* Link to any of its open-source implementation(s).
* Link to the model weights if they are available.

If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don't forget
to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it.

You can open a request for a model/pipeline/scheduler [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=New+model%2Fpipeline%2Fscheduler&template=new-model-addition.yml).

### 3. Answering issues on the GitHub issues tab

Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct.
Some tips to give a high-quality answer to an issue:
- Be as concise and minimal as possible.
- Stay on topic. An answer to the issue should concern the issue and only the issue.
- Provide links to code, papers, or other sources that prove or encourage your point.
- Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet.

Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great
help to the maintainers if you can answer such issues, encouraging the author of the issue to be
more precise, provide the link to a duplicated issue or redirect them to [the forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR).

If you have verified that the issued bug report is correct and requires a correction in the source code,
please have a look at the next sections.

For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the [Opening a pull request](#how-to-open-a-pr) section.

### 4. Fixing a "Good first issue"

*Good first issues* are marked by the [Good first issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) label. Usually, the issue already
explains how a potential solution should look so that it is easier to fix.
If the issue hasn't been closed and you would like to try to fix this issue, you can just leave a message "I would like to try this issue.". There are usually three scenarios:
- a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it.
- b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR.
- c.) There is already an open PR to fix the issue, but the issue hasn't been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR.


### 5. Contribute to the documentation

A good library **always** has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a **highly
valuable contribution**.

Contributing to the documentation can take many forms:

- Correcting spelling or grammatical errors.
- Correcting incorrect formatting of a docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you took some time to correct it.
- Correcting the shape or dimensions of a docstring input or output tensor.
- Clarifying documentation that is hard to understand or incorrect.
- Updating outdated code examples.
- Translating the documentation to another language.

Anything displayed on [the official Diffusers doc page](https://huggingface.co/docs/diffusers/index) is part of the official documentation and can be corrected or adjusted in the respective [documentation source](https://github.com/huggingface/diffusers/tree/main/docs/source).

Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally.

### 6. Contribute a community pipeline

> [!TIP]
> Read the [Community pipelines](../using-diffusers/custom_pipeline_overview#community-pipelines) guide to learn more about the difference between a GitHub and Hugging Face Hub community pipeline. If you're interested in why we have community pipelines, take a look at GitHub Issue [#841](https://github.com/huggingface/diffusers/issues/841) (basically, we can't maintain all the possible ways diffusion models can be used for inference but we also don't want to prevent the community from building them).

Contributing a community pipeline is a great way to share your creativity and work with the community. It lets you build on top of the [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) so that anyone can load and use it by setting the `custom_pipeline` parameter. This section will walk you through how to create a simple pipeline where the UNet only does a single forward pass and calls the scheduler once (a "one-step" pipeline).

1. Create a one_step_unet.py file for your community pipeline. This file can contain whatever package you want to use as long as it's installed by the user. Make sure you only have one pipeline class that inherits from [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) to load model weights and the scheduler configuration from the Hub. Add a UNet and scheduler to the `__init__` function.

    You should also add the `register_modules` function to ensure your pipeline and its components can be saved with [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained).

```py
from diffusers import DiffusionPipeline
import torch

class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
    def __init__(self, unet, scheduler):
        super().__init__()

        self.register_modules(unet=unet, scheduler=scheduler)
```

2. In the forward pass (which we recommend defining as `__call__`), you can add any feature you'd like. For the "one-step" pipeline, create a random image and call the UNet and scheduler once by setting `timestep=1`.

```py
  from diffusers import DiffusionPipeline
  import torch

  class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
      def __init__(self, unet, scheduler):
          super().__init__()

          self.register_modules(unet=unet, scheduler=scheduler)

      def __call__(self):
          image = torch.randn(
              (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
          )
          timestep = 1

          model_output = self.unet(image, timestep).sample
          scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample

          return scheduler_output
```

Now you can run the pipeline by passing a UNet and scheduler to it or load pretrained weights if the pipeline structure is identical.

```py
from diffusers import DDPMScheduler, UNet2DModel

scheduler = DDPMScheduler()
unet = UNet2DModel()

pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler)
output = pipeline()
# load pretrained weights
pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
output = pipeline()
```

You can either share your pipeline as a GitHub community pipeline or Hub community pipeline.

<hfoptions id="pipeline type">
<hfoption id="GitHub pipeline">

Share your GitHub pipeline by opening a pull request on the Diffusers [repository](https://github.com/huggingface/diffusers) and add the one_step_unet.py file to the [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) subfolder.

</hfoption>
<hfoption id="Hub pipeline">

Share your Hub pipeline by creating a model repository on the Hub and uploading the one_step_unet.py file to it.

</hfoption>
</hfoptions>
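
Once shared, the pipeline can be loaded through the `custom_pipeline` argument of `from_pretrained`. The snippet below assumes the GitHub variant, where `"one_step_unet"` refers to the `one_step_unet.py` file in `examples/community`; for a Hub community pipeline, `custom_pipeline` would instead point to the repository id that hosts the file:

```py
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="one_step_unet"
)
output = pipeline()
```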

### 7. Contribute to training examples

Diffusers examples are a collection of training scripts that reside in [examples](https://github.com/huggingface/diffusers/tree/main/examples).

We support two types of training examples:

- Official training examples
- Research training examples

Research training examples are located in [examples/research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) whereas official training examples include all folders under [examples](https://github.com/huggingface/diffusers/tree/main/examples) except the `research_projects` and `community` folders.
The official training examples are maintained by the Diffusers' core maintainers whereas the research training examples are maintained by the community.
This is because of the same reasons put forward in [6. Contribute a community pipeline](#6-contribute-a-community-pipeline) for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models.
If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the `research_projects` folder and maintained by the author.

Both official training and research examples consist of a directory that contains one or more training scripts, a `requirements.txt` file, and a `README.md` file. In order for the user to make use of the
training examples, it is required to clone the repository:

```bash
git clone https://github.com/huggingface/diffusers
```

as well as to install all additional dependencies required for training:

```bash
cd diffusers
pip install -r examples/<your-example-folder>/requirements.txt
```

Therefore when adding an example, the `requirements.txt` file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example's training script. See, for example, the [DreamBooth `requirements.txt` file](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/requirements.txt).

Training examples of the Diffusers library should adhere to the following philosophy:
- All the code necessary to run the examples should be found in a single Python file.
- One should be able to run the example from the command line with `python <your-example>.py --args`.
- Examples should be kept simple and serve as **an example** on how to use Diffusers for training. The purpose of example scripts is **not** to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials.

To contribute an example, it is highly recommended to look at already existing examples such as [dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) to get an idea of what they should look like.
We strongly advise contributors to make use of the [Accelerate library](https://github.com/huggingface/accelerate) as it's tightly integrated
with Diffusers.
Once an example script works, please make sure to add a comprehensive `README.md` that states how to use the example exactly. This README should include:
- An example command on how to run the example script as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#running-locally-with-pytorch).
- A link to some training results (logs, models, etc.) that show what the user can expect as shown [here](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5).
- If you are adding a non-official/research training example, **please don't forget** to add a sentence that you are maintaining this training example which includes your git handle as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/intel_opts#diffusers-examples-with-intel-optimizations).

If you are contributing to the official training examples, please also make sure to add a test to its folder such as [examples/dreambooth/test_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/test_dreambooth.py). This is not necessary for non-official training examples.

### 8. Fixing a "Good second issue"

*Good second issues* are marked by the [Good second issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22) label. Good second issues are
usually more complicated to solve than [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
The issue description usually gives less guidance on how to fix the issue and requires
a decent understanding of the library by the interested contributor.
If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn't merged and try to open an improved PR.
Good second issues are usually more difficult to get merged compared to good first issues, so don't hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged.

### 9. Adding pipelines, models, schedulers

Pipelines, models, and schedulers are the most important pieces of the Diffusers library.
They provide easy access to state-of-the-art diffusion technologies and thus allow the community to
build powerful generative AI applications.

By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem.

Diffusers has a couple of open feature requests for all three components - feel free to look through them
if you don't yet know which specific component you would like to add:
- [Model or pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22)
- [Scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)

Before adding any of the three components, it is strongly recommended that you give the [Philosophy guide](philosophy) a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy
as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a [Feedback issue](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=) instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us.

Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions.

If you are unsure or stuck in the PR, don't hesitate to leave a message to ask for a first review or help.

#### Copied from mechanism

A unique and important feature to understand when adding any pipeline, model or scheduler code is the `# Copied from` mechanism. You'll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the `# Copied from` mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run `make fix-copies`.

For example, in the code example below, [StableDiffusionPipelineOutput](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is the original code and `AltDiffusionPipelineOutput` uses the `# Copied from` mechanism to copy it. The only difference is changing the class prefix from `Stable` to `Alt`.

```py
# Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt
class AltDiffusionPipelineOutput(BaseOutput):
    """
    Output class for Alt Diffusion pipelines.

    Args:
        images (`List[PIL.Image.Image]` or `np.ndarray`)
            List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
            num_channels)`.
        nsfw_content_detected (`List[bool]`)
            List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
            `None` if safety checking could not be performed.
    """
```

To learn more, read this section of the [~Don't~ Repeat Yourself*](https://huggingface.co/blog/transformers-design-philosophy#4-machine-learning-models-are-static) blog post.

## How to write a good issue

**The better your issue is written, the higher the chances that it will be quickly resolved.**

1. Make sure that you've used the correct template for your issue. You can pick between *Bug Report*, *Feature Request*, *Feedback about API Design*, *New model/pipeline/scheduler addition*, *Forum*, or a blank issue. Make sure to pick the correct one when opening [a new issue](https://github.com/huggingface/diffusers/issues/new/choose).
2. **Be precise**: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write "Error in diffusers".
3. **Reproducibility**: No reproducible code snippet == no solution. If you encounter a bug, maintainers **have to be able to reproduce** it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, *i.e.* that there are no missing imports or missing links to images, ... Your issue should contain an error message **and** a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data.
4. **Minimalistic**: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets.
5. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better.
6. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the [official GitHub formatting docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for more information.
7. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library.

## How to write a good PR

1. Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged.
2. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of "also fixing another problem while we're adding it". It is much more difficult to review pull requests that solve multiple, unrelated problems at once.
3. If helpful, try to add a code snippet that displays an example of how your addition can be used.
4. The title of your pull request should be a summary of its contribution.
5. If your pull request addresses an issue, please mention the issue number in
the pull request description to make sure they are linked (and people
consulting the issue know you are working on it);
6. To indicate a work in progress please prefix the title with `[WIP]`. These
are useful to avoid duplicated work, and to differentiate it from PRs ready
to be merged;
7. Try to formulate and format your text as explained in [How to write a good issue](#how-to-write-a-good-issue).
8. Make sure existing tests pass;
9. Add high-coverage tests. No quality testing = no merge.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`.
CircleCI does not run the slow tests, but GitHub Actions does every night!
10. All public methods must have informative docstrings that work nicely with markdown. See [`pipeline_latent_diffusion.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example.
11. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
[`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images) to place these files.
If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images
to this dataset.

## How to open a PR

Before writing code, we strongly advise you to search through the existing PRs or
issues to make sure that nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.

You will need basic `git` proficiency to be able to contribute to
🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest
manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.

Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/83bc6c94eaeb6f7704a2a428931cf2d9ad973ae9/setup.py#L270)):

1. Fork the [repository](https://github.com/huggingface/diffusers) by
clicking on the 'Fork' button on the repository's page. This creates a copy of the code
under your GitHub user account.

2. Clone your fork to your local disk, and add the base repository as a remote:

 ```bash
 $ git clone git@github.com:<your GitHub handle>/diffusers.git
 $ cd diffusers
 $ git remote add upstream https://github.com/huggingface/diffusers.git
 ```

3. Create a new branch to hold your development changes:

 ```bash
 $ git checkout -b a-descriptive-name-for-my-changes
 ```

**Do not** work on the `main` branch.

4. Set up a development environment by running the following command in a virtual environment:

 ```bash
 $ pip install -e ".[dev]"
 ```

If you have already cloned the repo, you might need to `git pull` to get the most recent changes in the
library.

5. Develop the features on your branch.

As you work on the features, you should make sure that the test suite
passes. You should run the tests impacted by your changes like this:

 ```bash
 $ pytest tests/<TEST_TO_RUN>.py
 ```

Before you run the tests, please make sure you install the dependencies required for testing. You can do so
with this command:

 ```bash
 $ pip install -e ".[test]"
 ```

You can also run the full test suite with the following command, but it takes
a beefy machine to produce a result in a decent amount of time now that
Diffusers has grown a lot. Here is the command for it:

 ```bash
 $ make test
 ```

🧨 Diffusers relies on `black` and `isort` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:

 ```bash
 $ make style
 ```

🧨 Diffusers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality
control runs in CI, however, you can also run the same checks with:

 ```bash
 $ make quality
 ```

Once you're happy with your changes, add changed files using `git add` and
make a commit with `git commit` to record your changes locally:

 ```bash
 $ git add modified_file.py
 $ git commit -m "A descriptive message about your changes."
 ```

It is a good idea to sync your copy of the code with the original
repository regularly. This way you can quickly account for changes:

 ```bash
 $ git pull upstream main
 ```

Push the changes to your account using:

 ```bash
 $ git push -u origin a-descriptive-name-for-my-changes
 ```

6. Once you are satisfied, go to the
webpage of your fork on GitHub. Click on 'Pull request' to send your changes
to the project maintainers for review.

7. It's OK if maintainers ask you for changes. It happens to core contributors
too! So everyone can see the changes in the Pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.

### Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests).

We like `pytest` and `pytest-xdist` because it's faster. From the root of the
repository, here's how to run tests with `pytest` for the library:

```bash
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

In fact, that's how `make test` is implemented!

You can specify a smaller set of tests in order to test only the feature
you're working on.
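
For example, to run only the tests in one folder or those matching a keyword (the path below is illustrative):

```bash
$ python -m pytest tests/pipelines/stable_diffusion_xl/ -k "lora"
```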

By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models — make sure you
have enough disk space and a good Internet connection, or a lot of patience!

```bash
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

`unittest` is fully supported, here's how to run tests with it:

```bash
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```

### Syncing forked main with upstream (HuggingFace) main

When syncing the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in those PRs:
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main (a sketch of this is shown after the commands below).
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```bash
$ git checkout -b your-branch-for-syncing
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```
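
For the first option, a minimal sketch of merging upstream changes directly into your fork's `main` (remote names assumed to match the clone setup above):

```bash
$ git checkout main
$ git fetch upstream
$ git merge upstream/main
$ git push origin main
```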

### Style guide

For documentation strings, 🧨 Diffusers follows the [Google style](https://google.github.io/styleguide/pyguide.html).
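
As a small, hypothetical example (the function below is not part of the library), a Google-style docstring typically looks like this:

```py
def scale_latents(latents, scaling_factor=0.18215):
    r"""
    Scales a batch of latents by a constant factor.

    Args:
        latents (`torch.Tensor`):
            The latents to scale, of shape `(batch_size, channels, height, width)`.
        scaling_factor (`float`, *optional*, defaults to `0.18215`):
            The multiplicative factor applied to the latents.

    Returns:
        `torch.Tensor`: The scaled latents.
    """
    return latents * scaling_factor
```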

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/conceptual/contribution.md" />

### 🧨 Diffusers’ Ethical Guidelines
https://huggingface.co/docs/diffusers/main/conceptual/ethical_guidelines.md

# 🧨 Diffusers’ Ethical Guidelines

## Preamble

[Diffusers](https://huggingface.co/docs/diffusers/index) provides pre-trained diffusion models and serves as a modular toolbox for inference and training.

Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library.

The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups.
We will keep tracking risks and adapt the following guidelines based on the community's responsiveness and valuable feedback.


## Scope

The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns.


## Ethical guidelines

The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question.

- **Transparency**: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions.

- **Consistency**: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent.

- **Simplicity**: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent.

- **Accessibility**: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community.

- **Reproducibility**: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library.

- **Responsibility**: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology's potential risks and dangers.


## Examples of implementations: Safety features and Mechanisms

The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community's input is invaluable in ensuring these features' implementation and raising awareness with us.

- [**Community tab**](https://huggingface.co/docs/hub/repositories-pull-requests-discussions): it enables the community to discuss and better collaborate on a project.

- **Bias exploration and evaluation**: the Hugging Face team provides a [space](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer) to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations.

- **Encouraging safety in deployment**

  - [**Safe Stable Diffusion**](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_safe): It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://huggingface.co/papers/2211.05105).

  - [**Safety Checker**](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py): It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker.

- **Staged released on the Hub**: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use.

- **Licensing**: [OpenRAILs](https://huggingface.co/blog/open_rail), a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/conceptual/ethical_guidelines.md" />

### LoRA
https://huggingface.co/docs/diffusers/main/tutorials/using_peft_for_inference.md

# LoRA

[LoRA (Low-Rank Adaptation)](https://huggingface.co/papers/2106.09685) is a method for quickly training a model for a new task. It works by freezing the original model weights and adding a small number of *new* trainable parameters. This means it is significantly faster and cheaper to adapt an existing model to new tasks, such as generating images in a new style.

LoRA checkpoints are typically only a couple hundred MBs in size, so they're very lightweight and easy to store. Load this smaller set of weights into an existing base model with [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) and specify the file name.

<hfoptions id="usage">
<hfoption id="text-to-image">

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/super-cereal-sdxl-lora",
    weight_name="cereal_box_sdxl_v1.safetensors",
    adapter_name="cereal"
)
pipeline("bears, pizza bites").images[0]
```

</hfoption>
<hfoption id="text-to-video">

```py
import torch
from diffusers import LTXConditionPipeline
from diffusers.utils import export_to_video, load_image

pipeline = LTXConditionPipeline.from_pretrained(
    "Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16
)

pipeline.load_lora_weights(
    "Lightricks/LTX-Video-Cakeify-LoRA",
    weight_name="ltxv_095_cakeify_lora.safetensors",
    adapter_name="cakeify"
)
pipeline.set_adapters("cakeify")

# use "CAKEIFY" to trigger the LoRA
prompt = "CAKEIFY a person using a knife to cut a cake shaped like a Pikachu plushie"
image = load_image("https://huggingface.co/Lightricks/LTX-Video-Cakeify-LoRA/resolve/main/assets/images/pikachu.png")

video = pipeline(
    prompt=prompt,
    image=image,
    width=576,
    height=576,
    num_frames=161,
    decode_timestep=0.03,
    decode_noise_scale=0.025,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=26)
```

</hfoption>
</hfoptions>

The [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) method is the preferred way to load LoRA weights into the UNet and text encoder because it can handle cases where:

- the LoRA weights don't have separate UNet and text encoder identifiers
- the LoRA weights have separate UNet and text encoder identifiers

The [load_lora_adapter()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.load_lora_adapter) method is used to directly load a LoRA adapter at the *model-level*, as long as the model is a Diffusers model that is a subclass of `PeftAdapterMixin`. It builds and prepares the necessary model configuration for the adapter. This method also loads the LoRA adapter into the UNet.

For example, if you're only loading a LoRA into the UNet, [load_lora_adapter()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.load_lora_adapter) ignores the text encoder keys. Use the `prefix` parameter to filter and load the appropriate state dict, `"unet"` in this case.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.unet.load_lora_adapter(
    "jbilcke-hf/sdxl-cinematic-1",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cinematic",
    prefix="unet"
)
# use cnmt in the prompt to trigger the LoRA
pipeline("A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration").images[0]
```

## torch.compile

[torch.compile](../optimization/fp16#torchcompile) speeds up inference by compiling the PyTorch model to use optimized kernels. Before compiling, the LoRA weights first need to be fused into the base model and then unloaded.

```py
import torch
from diffusers import DiffusionPipeline

# load base model and LoRA
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)

# activate LoRA and set adapter weight
pipeline.set_adapters("ikea", adapter_weights=0.7)

# fuse LoRAs and unload weights
pipeline.fuse_lora(adapter_names=["ikea"], lora_scale=1.0)
pipeline.unload_lora_weights()
```

Typically, the UNet is compiled because it's the most compute-intensive component of the pipeline.

```py
pipeline.unet.to(memory_format=torch.channels_last)
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

pipeline("A bowl of ramen shaped like a cute kawaii bear").images[0]
```

Refer to the [hotswapping](#hotswapping) section to learn how to avoid recompilation when working with compiled models and multiple LoRAs.

## Weight scale

The `scale` parameter is used to control how much of a LoRA to apply. A value of `0` is equivalent to only using the base model weights and a value of `1` is equivalent to fully using the LoRA.

<hfoptions id="weight-scale">
<hfoption id="simple use case">

For simple use cases, you can pass `cross_attention_kwargs={"scale": 1.0}` to the pipeline.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/super-cereal-sdxl-lora",
    weight_name="cereal_box_sdxl_v1.safetensors",
    adapter_name="cereal"
)
pipeline("bears, pizza bites", cross_attention_kwargs={"scale": 1.0}).images[0]
```

</hfoption>
<hfoption id="finer control">

> [!WARNING]
> The [set_adapters()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters) method only scales attention weights. If a LoRA has ResNets or down and upsamplers, these components keep a scale value of `1.0`.

For finer control over each individual component of the UNet or text encoder, pass a dictionary instead. In the example below, the `"down"` block in the UNet is scaled by 0.9 and you can further specify in the `"up"` block the scales of the transformers in `"block_0"` and `"block_1"`. If a block like `"mid"` isn't specified, the default value `1.0` is used.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/super-cereal-sdxl-lora",
    weight_name="cereal_box_sdxl_v1.safetensors",
    adapter_name="cereal"
)
scales = {
    "text_encoder": 0.5,
    "text_encoder_2": 0.5,
    "unet": {
        "down": 0.9,
        "up": {
            "block_0": 0.6,
            "block_1": [0.4, 0.8, 1.0],
        }
    }
}
pipeline.set_adapters("cereal", scales)
pipeline("bears, pizza bites").images[0]
```

</hfoption>
</hfoptions>

### Scale scheduling

Dynamically adjusting the LoRA scale during sampling gives you better control over the overall composition and layout because certain steps may benefit more from an increased or reduced scale.

The [character LoRA](https://huggingface.co/alvarobartt/ghibli-characters-flux-lora) in the example below starts with a higher scale that gradually decays over the first 20 steps to establish the character generation. In the later steps, only a scale of 0.2 is applied to avoid adding too much of the LoRA features to other parts of the image the LoRA wasn't trained on.

```py
import torch
from diffusers import FluxPipeline

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

pipeline.load_lora_weights("alvarobartt/ghibli-characters-flux-lora", adapter_name="lora")

num_inference_steps = 30
lora_steps = 20
lora_scales = torch.linspace(1.5, 0.7, lora_steps).tolist()
lora_scales += [0.2] * (num_inference_steps - lora_steps + 1)

pipeline.set_adapters("lora", lora_scales[0])

def callback(pipeline: FluxPipeline, step: int, timestep: torch.LongTensor, callback_kwargs: dict):
    pipeline.set_adapters("lora", lora_scales[step + 1])
    return callback_kwargs

prompt = """
Ghibli style The Grinch, a mischievous green creature with a sly grin, peeking out from behind a snow-covered tree while plotting his antics, 
in a quaint snowy village decorated for the holidays, warm light glowing from cozy homes, with playful snowflakes dancing in the air
"""
pipeline(
    prompt=prompt,
    guidance_scale=3.0,
    num_inference_steps=num_inference_steps,
    generator=torch.Generator().manual_seed(42),
    callback_on_step_end=callback,
).images[0]
```

## Hotswapping

Hotswapping LoRAs is an efficient way to work with multiple LoRAs while avoiding accumulating memory from multiple calls to [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) and in some cases, recompilation, if a model is compiled. This workflow requires a loaded LoRA because the new LoRA weights are swapped in place for the existing loaded LoRA.

```py
import torch
from diffusers import DiffusionPipeline

# load base model and LoRAs
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)
```

> [!WARNING]
> Hotswapping is unsupported for LoRAs that target the text encoder.

Set `hotswap=True` in [load_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.StableDiffusionLoraLoaderMixin.load_lora_weights) to swap the second LoRA. Use the `adapter_name` parameter to indicate which LoRA to swap (`default_0` is the default name).

```py
pipeline.load_lora_weights(
    "lordjia/by-feng-zikai",
    hotswap=True,
    adapter_name="ikea"
)
```

### Compiled models

For compiled models, use [enable_lora_hotswap()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap) to avoid recompilation when hotswapping LoRAs. This method should be called *before* loading the first LoRA and `torch.compile` should be called *after* loading the first LoRA.

> [!TIP]
> The [enable_lora_hotswap()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap) method isn't always necessary if the second LoRA uses the same LoRA ranks and scales as the first LoRA.

Within [enable_lora_hotswap()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap), the `target_rank` parameter is important because it sets the rank used for all LoRA adapters. Set it to the highest rank among the LoRAs you plan to load (represented by the `max_rank` variable in the example below) so that LoRAs with different ranks can all be hotswapped. The default value is 128.

```py
import torch
from diffusers import DiffusionPipeline

# load base model and LoRAs
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
# 1. enable_lora_hotswap (max_rank is the highest rank among the LoRAs you plan to load)
pipeline.enable_lora_hotswap(target_rank=max_rank)
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)
# 2. torch.compile
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

# 3. hotswap
pipeline.load_lora_weights(
    "lordjia/by-feng-zikai",
    hotswap=True,
    adapter_name="ikea"
)
```

> [!TIP]
> Move your code inside the `with torch._dynamo.config.patch(error_on_recompile=True)` context manager to detect if a model was recompiled. If a model is recompiled despite following all the steps above, please open an [issue](https://github.com/huggingface/diffusers/issues) with a reproducible example.

If you expect varied resolutions during inference with this feature, make sure to set `dynamic=True` during compilation, as shown in the sketch below. Refer to [this document](../optimization/fp16#dynamic-shape-compilation) for more details.
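
For example, a minimal sketch that compiles the UNet from the example above with dynamic shapes enabled:

```py
# compile with dynamic shapes so varied resolutions don't trigger recompilation
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True, dynamic=True)
```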

There are still scenarios where recompilation is unavoidable, such as when the hotswapped LoRA targets more layers than the initial adapter. Try to load the LoRA that targets the most layers *first*. For more details about this limitation, refer to the PEFT [hotswapping](https://huggingface.co/docs/peft/main/en/package_reference/hotswap#peft.utils.hotswap.hotswap_adapter) docs.

<details>
<summary>Technical details of hotswapping</summary>

The [enable_lora_hotswap()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.enable_lora_hotswap) method converts the LoRA scaling factor from floats to torch.tensors and pads the shape of the weights to the largest required shape to avoid reassigning the whole attribute when the data in the weights are replaced.

This is why the `max_rank` argument is important. The results are unchanged even when the values are padded with zeros. Computation may be slower though depending on the padding size.
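
As a toy sketch in plain PyTorch (not the Diffusers or PEFT API), zero-padding a lower-rank LoRA's factors up to a larger rank leaves the output unchanged:

```py
import torch
import torch.nn.functional as F

x = torch.randn(1, 16)
lora_A = torch.randn(16, 4)   # rank-4 down-projection
lora_B = torch.randn(4, 16)   # rank-4 up-projection

# pad both factors with zeros up to a hypothetical max_rank of 8
lora_A_padded = F.pad(lora_A, (0, 4))        # (16, 8)
lora_B_padded = F.pad(lora_B, (0, 0, 0, 4))  # (8, 16)

# the zero-padded columns/rows contribute nothing to the product
assert torch.allclose(x @ lora_A @ lora_B, x @ lora_A_padded @ lora_B_padded, atol=1e-6)
```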

Since no new LoRA attributes are added, each subsequent LoRA is only allowed to target the same layers, or subset of layers, the first LoRA targets. Choosing the LoRA loading order is important because if the LoRAs target disjoint layers, you may end up creating a dummy LoRA that targets the union of all target layers.

For more implementation details, take a look at the [`hotswap.py`](https://github.com/huggingface/peft/blob/92d65cafa51c829484ad3d95cf71d09de57ff066/src/peft/utils/hotswap.py) file.

</details>

## Merge

The weights from each LoRA can be merged together to produce a blend of multiple existing styles. There are several methods for merging LoRAs, each of which differs in *how* the weights are merged (which may affect generation quality).

### set_adapters

The [set_adapters()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters) method merges LoRAs by concatenating their weighted matrices. Pass the LoRA names to [set_adapters()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters) and use the `adapter_weights` parameter to control the scaling of each LoRA. For example, if `adapter_weights=[0.5, 0.5]`, the output is an average of both LoRAs.

> [!TIP]
> The `"scale"` parameter determines how much of the merged LoRA to apply. See the [Weight scale](#weight-scale) section for more details.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)
pipeline.load_lora_weights(
    "lordjia/by-feng-zikai",
    weight_name="fengzikai_v1.0_XL.safetensors",
    adapter_name="feng"
)
pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8])
# use by Feng Zikai to activate the lordjia/by-feng-zikai LoRA
pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", cross_attention_kwargs={"scale": 1.0}).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lora_merge_set_adapters.png"/>
</div>

### add_weighted_adapter

> [!TIP]
> This is an experimental method and you can refer to PEFTs [Model merging](https://huggingface.co/docs/peft/developer_guides/model_merging) for more details. Take a look at this [issue](https://github.com/huggingface/diffusers/issues/6892) if you're interested in the motivation and design behind this integration.

The `~peft.LoraModel.add_weighted_adapter` method enables more efficient merging methods like [TIES](https://huggingface.co/papers/2306.01708) or [DARE](https://huggingface.co/papers/2311.03099). These merging methods remove redundant and potentially interfering parameters from merged models. Keep in mind the LoRAs need to have identical ranks to be merged.

Make sure the latest stable version of Diffusers and PEFT is installed.

```bash
pip install -U -q diffusers peft
```

Load a UNet that corresponds to the LoRA UNet.

```py
import copy
import torch
from diffusers import AutoModel, DiffusionPipeline
from peft import get_peft_model, LoraConfig, PeftModel

unet = AutoModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    subfolder="unet",
).to("cuda")
```

Load a pipeline, pass the UNet to it, and load a LoRA.

```py
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
    unet=unet
).to("cuda")
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)
```

Create a `~peft.PeftModel` from the LoRA checkpoint by combining the first UNet you loaded and the LoRA UNet from the pipeline.

```py
sdxl_unet = copy.deepcopy(unet)
ikea_peft_model = get_peft_model(
    sdxl_unet,
    pipeline.unet.peft_config["ikea"],
    adapter_name="ikea"
)

original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()}
ikea_peft_model.load_state_dict(original_state_dict, strict=True)
```

> [!TIP]
> You can save and reuse the `ikea_peft_model` by pushing it to the Hub as shown below.
> ```py
> ikea_peft_model.push_to_hub("ikea_peft_model", token=TOKEN)
> ```

Repeat this process and create a `~peft.PeftModel` for the second LoRA.

```py
pipeline.delete_adapters("ikea")
sdxl_unet.delete_adapters("ikea")

pipeline.load_lora_weights(
    "lordjia/by-feng-zikai",
    weight_name="fengzikai_v1.0_XL.safetensors",
    adapter_name="feng"
)
pipeline.set_adapters(adapter_names="feng")

feng_peft_model = get_peft_model(
    sdxl_unet,
    pipeline.unet.peft_config["feng"],
    adapter_name="feng"
)

original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()}
feng_peft_model.load_state_dict(original_state_dict, strict=True)
```

Load a base UNet model and load the adapters.

```py
base_unet = AutoModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    subfolder="unet",
).to("cuda")

model = PeftModel.from_pretrained(
    base_unet,
    "stevhliu/ikea_peft_model",
    use_safetensors=True,
    subfolder="ikea",
    adapter_name="ikea"
)
model.load_adapter(
    "stevhliu/feng_peft_model",
    use_safetensors=True,
    subfolder="feng",
    adapter_name="feng"
)
```

Merge the LoRAs with `~peft.LoraModel.add_weighted_adapter` and specify how you want to merge them with `combination_type`. The example below uses the `"dare_linear"` method (refer to this [blog post](https://huggingface.co/blog/peft_merging) to learn more about these merging methods), which randomly prunes some weights and then performs a weighted sum of the tensors based on the set weightage of each LoRA in `weights`.

Activate the merged LoRAs with [set_adapters()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters).

```py
model.add_weighted_adapter(
    adapters=["ikea", "feng"],
    combination_type="dare_linear",
    weights=[1.0, 1.0],
    adapter_name="ikea-feng"
)
model.set_adapters("ikea-feng")

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=model,
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")
pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai").images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ikea-feng-dare-linear.png"/>
</div>

### fuse_lora

The [fuse_lora()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora) method fuses the LoRA weights directly with the original UNet and text encoder weights of the underlying model. This reduces the overhead of loading the underlying model for each LoRA because it only loads the model once, which lowers memory usage and increases inference speed.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)
pipeline.load_lora_weights(
    "lordjia/by-feng-zikai",
    weight_name="fengzikai_v1.0_XL.safetensors",
    adapter_name="feng"
)
pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8])
```

Call [fuse_lora()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.fuse_lora) to fuse them. The `lora_scale` parameter controls how much to scale the output by with the LoRA weights. It is important to make this adjustment now because passing `scale` to `cross_attention_kwargs` won't work in the pipeline.

```py
pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0)
```

Unload the LoRA weights since they're already fused with the underlying model. Save the fused pipeline with either [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained) to save it locally or `~PushToHubMixin.push_to_hub` to save it to the Hub.

<hfoptions id="save">
<hfoption id="save locally">

```py
pipeline.unload_lora_weights()
pipeline.save_pretrained("path/to/fused-pipeline")
```

</hfoption>
<hfoption id="save to Hub">

```py
pipeline.unload_lora_weights()
pipeline.push_to_hub("fused-ikea-feng")
```

</hfoption>
</hfoptions>

The fused pipeline can now be quickly loaded for inference without requiring each LoRA to be separately loaded.

```py
pipeline = DiffusionPipeline.from_pretrained(
    "username/fused-ikea-feng", torch_dtype=torch.float16,
).to("cuda")
pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai").images[0]
```

Use `unfuse_lora()` to restore the underlying model's weights, for example, if you want to use a different `lora_scale` value. You can only unfuse if a single LoRA is fused; it won't work with the pipeline from above because multiple LoRAs are fused. In these cases, you'll need to reload the entire model.

```py
pipeline.unfuse_lora()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/fuse_lora.png"/>
</div>

## Manage

Diffusers provides several methods to help you manage working with LoRAs. These methods can be especially useful if you're working with multiple LoRAs.

### set_adapters

[set_adapters()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.set_adapters) also selects which LoRA to activate if multiple LoRAs are loaded. This allows you to switch between different LoRAs by specifying their name.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "ostris/ikea-instructions-lora-sdxl",
    weight_name="ikea_instructions_xl_v1_5.safetensors",
    adapter_name="ikea"
)
pipeline.load_lora_weights(
    "lordjia/by-feng-zikai",
    weight_name="fengzikai_v1.0_XL.safetensors",
    adapter_name="feng"
)
# activates the feng LoRA instead of the ikea LoRA
pipeline.set_adapters("feng")
```

### save_lora_adapter

Save an adapter with [save_lora_adapter()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.save_lora_adapter).

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")
pipeline.unet.load_lora_adapter(
    "jbilcke-hf/sdxl-cinematic-1",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cinematic"
    prefix="unet"
)
pipeline.save_lora_adapter("path/to/save", adapter_name="cinematic")
```

### unload_lora_weights

The [unload_lora_weights()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.unload_lora_weights) method unloads any LoRA weights in the pipeline to restore the underlying model weights.

```py
pipeline.unload_lora_weights()
```

### disable_lora

The [disable_lora()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.disable_lora) method disables all LoRAs (but they're still kept on the pipeline) and restores the pipeline to the underlying model weights.

```py
pipeline.disable_lora()
```

### get_active_adapters

The [get_active_adapters()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.get_active_adapters) method returns a list of active LoRAs attached to a pipeline.

```py
pipeline.get_active_adapters()
["cereal", "ikea"]
```

### get_list_adapters

The [get_list_adapters()](/docs/diffusers/main/en/api/loaders/lora#diffusers.loaders.lora_base.LoraBaseMixin.get_list_adapters) method returns the active LoRAs for each component in the pipeline.

```py
pipeline.get_list_adapters()
{"unet": ["cereal", "ikea"], "text_encoder_2": ["cereal"]}
```

### delete_adapters

The [delete_adapters()](/docs/diffusers/main/en/api/loaders/peft#diffusers.loaders.PeftAdapterMixin.delete_adapters) method completely removes a LoRA and its layers from a model.

```py
pipeline.delete_adapters("ikea")
```

## Resources

Browse the [LoRA Studio](https://lorastudio.co/models) for different LoRAs to use or you can upload your favorite LoRAs from Civitai to the Hub with the Space below.

<iframe
	src="https://multimodalart-civitai-to-hf.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

You can find additional LoRAs in the [FLUX LoRA the Explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) and [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) Spaces.

Check out the [Fast LoRA inference for Flux with Diffusers and PEFT](https://huggingface.co/blog/lora-fast) blog post to learn how to optimize LoRA inference with methods like FlashAttention-3 and fp8 quantization.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/tutorials/using_peft_for_inference.md" />

### AutoPipeline
https://huggingface.co/docs/diffusers/main/tutorials/autopipeline.md

# AutoPipeline

[AutoPipeline](../api/models/auto_model) is a *task-and-model* pipeline that automatically selects the correct pipeline subclass based on the task. It handles the complexity of loading different pipeline subclasses without needing to know the specific pipeline subclass name.

This is unlike [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline), a *model-only* pipeline that automatically selects the pipeline subclass based on the model.

[AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image) returns a specific pipeline subclass (for example, [StableDiffusionXLImg2ImgPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLImg2ImgPipeline)), which can only be used for image-to-image tasks.

```py
import torch
from diffusers import AutoPipelineForImage2Image

pipeline = AutoPipelineForImage2Image.from_pretrained(
  "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.bfloat16, device_map="cuda",
)
print(pipeline)
"StableDiffusionXLImg2ImgPipeline {
  "_class_name": "StableDiffusionXLImg2ImgPipeline",
  ...
"
```

Loading the same model with [DiffusionPipeline](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline) returns the [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline) subclass. It can be used for text-to-image, image-to-image, or inpainting tasks depending on the inputs.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
  "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.bfloat16, device_map="cuda",
)
print(pipeline)
"StableDiffusionXLPipeline {
  "_class_name": "StableDiffusionXLPipeline",
  ...
"
```

Check the [mappings](https://github.com/huggingface/diffusers/blob/130fd8df54f24ffb006d84787b598d8adc899f23/src/diffusers/pipelines/auto_pipeline.py#L114) to see whether a model is supported or not.

Trying to load an unsupported model returns an error.

```py
import torch
from diffusers import AutoPipelineForImage2Image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "openai/shap-e-img2img", torch_dtype=torch.float16,
)
"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None"
```

There are three types of [AutoPipeline](../api/models/auto_model) classes, [AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image), [AutoPipelineForImage2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForImage2Image) and [AutoPipelineForInpainting](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForInpainting). Each of these classes has a predefined mapping that links a pipeline to its task-specific subclass.

When [from_pretrained()](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image.from_pretrained) is called, it extracts the class name from the `model_index.json` file and selects the appropriate pipeline subclass for the task based on the mapping.
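
For example, loading the same checkpoint as above with [AutoPipelineForText2Image](/docs/diffusers/main/en/api/pipelines/auto_pipeline#diffusers.AutoPipelineForText2Image) resolves to the text-to-image subclass (a quick sketch):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
  "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.bfloat16, device_map="cuda",
)
print(pipeline.__class__.__name__)
"StableDiffusionXLPipeline"
```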

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/tutorials/autopipeline.md" />

### Train a diffusion model
https://huggingface.co/docs/diffusers/main/tutorials/basic_training.md

# Train a diffusion model

Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own!

This tutorial will teach you how to train a [UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel) from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own 🦋 butterflies 🦋.

> [!TIP]
> 💡 This training tutorial is based on the [Training with 🧨 Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models like how they work, check out the notebook!

Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training).

```py
# uncomment to install the necessary libraries in Colab
#!pip install diffusers[training]
```

We encourage you to share your model with the community, and in order to do that, you'll need to log in to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!). You can log in from a notebook and enter your token when prompted. Make sure your token has the write role.

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

Or log in from the terminal:

```bash
hf auth login
```

Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files:

```bash
!sudo apt -qq install git-lfs
!git config --global credential.helper store
```

## Training configuration

For convenience, create a `TrainingConfig` class containing the training hyperparameters (feel free to adjust them):

```py
>>> from dataclasses import dataclass

>>> @dataclass
... class TrainingConfig:
...     image_size = 128  # the generated image resolution
...     train_batch_size = 16
...     eval_batch_size = 16  # how many images to sample during evaluation
...     num_epochs = 50
...     gradient_accumulation_steps = 1
...     learning_rate = 1e-4
...     lr_warmup_steps = 500
...     save_image_epochs = 10
...     save_model_epochs = 30
...     mixed_precision = "fp16"  # `no` for float32, `fp16` for automatic mixed precision
...     output_dir = "ddpm-butterflies-128"  # the model name locally and on the HF Hub

...     push_to_hub = True  # whether to upload the saved model to the HF Hub
...     hub_model_id = "<your-username>/<my-awesome-model>"  # the name of the repository to create on the HF Hub
...     hub_private_repo = None
...     overwrite_output_dir = True  # overwrite the old model when re-running the notebook
...     seed = 0


>>> config = TrainingConfig()
```

## Load the dataset

You can easily load the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset with the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> config.dataset_name = "huggan/smithsonian_butterflies_subset"
>>> dataset = load_dataset(config.dataset_name, split="train")
```

> [!TIP]
> 💡 You can find additional datasets from the [HugGan Community Event](https://huggingface.co/huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co/docs/datasets/image_dataset#imagefolder). Set `config.dataset_name` to the repository id of the dataset if it is from the HugGan Community Event, or `imagefolder` if you're using your own images.
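
For example, a minimal sketch of loading your own images as an `ImageFolder` dataset (the `data_dir` path is a placeholder):

```py
>>> from datasets import load_dataset

>>> config.dataset_name = "imagefolder"
>>> dataset = load_dataset("imagefolder", data_dir="path/to/your/images", split="train")
```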

🤗 Datasets uses the [Image](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Image) feature to automatically decode the image data and load it as a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html) which we can visualize:

```py
>>> import matplotlib.pyplot as plt

>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4))
>>> for i, image in enumerate(dataset[:4]["image"]):
...     axs[i].imshow(image)
...     axs[i].set_axis_off()
>>> fig.show()
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_ds.png"/>
</div>

The images are all different sizes though, so you'll need to preprocess them first:

* `Resize` changes the image size to the one defined in `config.image_size`.
* `RandomHorizontalFlip` augments the dataset by randomly mirroring the images.
* `Normalize` is important to rescale the pixel values into a [-1, 1] range, which is what the model expects.

```py
>>> from torchvision import transforms

>>> preprocess = transforms.Compose(
...     [
...         transforms.Resize((config.image_size, config.image_size)),
...         transforms.RandomHorizontalFlip(),
...         transforms.ToTensor(),
...         transforms.Normalize([0.5], [0.5]),
...     ]
... )
```

Use 🤗 Datasets' [set_transform](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.set_transform) method to apply the `preprocess` function on the fly during training:

```py
>>> def transform(examples):
...     images = [preprocess(image.convert("RGB")) for image in examples["image"]]
...     return {"images": images}


>>> dataset.set_transform(transform)
```

Feel free to visualize the images again to confirm that they've been resized. Now you're ready to wrap the dataset in a [DataLoader](https://pytorch.org/docs/stable/data#torch.utils.data.DataLoader) for training!

```py
>>> import torch

>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True)
```

## Create a UNet2DModel

Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a [UNet2DModel](/docs/diffusers/main/en/api/models/unet2d#diffusers.UNet2DModel):

```py
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel(
...     sample_size=config.image_size,  # the target image resolution
...     in_channels=3,  # the number of input channels, 3 for RGB images
...     out_channels=3,  # the number of output channels
...     layers_per_block=2,  # how many ResNet layers to use per UNet block
...     block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
...     down_block_types=(
...         "DownBlock2D",  # a regular ResNet downsampling block
...         "DownBlock2D",
...         "DownBlock2D",
...         "DownBlock2D",
...         "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
...         "DownBlock2D",
...     ),
...     up_block_types=(
...         "UpBlock2D",  # a regular ResNet upsampling block
...         "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
...         "UpBlock2D",
...         "UpBlock2D",
...         "UpBlock2D",
...         "UpBlock2D",
...     ),
... )
```

It is often a good idea to quickly check the sample image shape matches the model output shape:

```py
>>> sample_image = dataset[0]["images"].unsqueeze(0)
>>> print("Input shape:", sample_image.shape)
Input shape: torch.Size([1, 3, 128, 128])

>>> print("Output shape:", model(sample_image, timestep=0).sample.shape)
Output shape: torch.Size([1, 3, 128, 128])
```

Great! Next, you'll need a scheduler to add some noise to the image.

## Create a scheduler

The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates images from noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a *noise schedule* and an *update rule*.

Let's take a look at the [DDPMScheduler](/docs/diffusers/main/en/api/schedulers/ddpm#diffusers.DDPMScheduler) and use the `add_noise` method to add some random noise to the `sample_image` from before:

```py
>>> import torch
>>> from PIL import Image
>>> from diffusers import DDPMScheduler

>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
>>> noise = torch.randn(sample_image.shape)
>>> timesteps = torch.LongTensor([50])
>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)

>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/noisy_butterfly.png"/>
</div>

The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by:

```py
>>> import torch.nn.functional as F

>>> noise_pred = model(noisy_image, timesteps).sample
>>> loss = F.mse_loss(noise_pred, noise)
```

## Train the model

By now, you have most of the pieces to start training the model and all that's left is putting everything together.

First, you'll need an optimizer and a learning rate scheduler:

```py
>>> from diffusers.optimization import get_cosine_schedule_with_warmup

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
>>> lr_scheduler = get_cosine_schedule_with_warmup(
...     optimizer=optimizer,
...     num_warmup_steps=config.lr_warmup_steps,
...     num_training_steps=(len(train_dataloader) * config.num_epochs),
... )
```

Then, you'll need a way to evaluate the model. For evaluation, you can use the [DDPMPipeline](/docs/diffusers/main/en/api/pipelines/ddpm#diffusers.DDPMPipeline) to generate a batch of sample images and save it as a grid:

```py
>>> from diffusers import DDPMPipeline
>>> from diffusers.utils import make_image_grid
>>> import os

>>> def evaluate(config, epoch, pipeline):
...     # Sample some images from random noise (this is the backward diffusion process).
...     # The default pipeline output type is `List[PIL.Image]`
...     images = pipeline(
...         batch_size=config.eval_batch_size,
...         generator=torch.Generator(device='cpu').manual_seed(config.seed), # Use a separate torch generator to avoid rewinding the random state of the main training loop
...     ).images

...     # Make a grid out of the images
...     image_grid = make_image_grid(images, rows=4, cols=4)

...     # Save the images
...     test_dir = os.path.join(config.output_dir, "samples")
...     os.makedirs(test_dir, exist_ok=True)
...     image_grid.save(f"{test_dir}/{epoch:04d}.png")
```

Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub.

> [!TIP]
> 💡 The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you're waiting for your model to finish training. 🤗

```py
>>> from accelerate import Accelerator
>>> from huggingface_hub import create_repo, upload_folder
>>> from tqdm.auto import tqdm
>>> from pathlib import Path
>>> import os

>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler):
...     # Initialize accelerator and tensorboard logging
...     accelerator = Accelerator(
...         mixed_precision=config.mixed_precision,
...         gradient_accumulation_steps=config.gradient_accumulation_steps,
...         log_with="tensorboard",
...         project_dir=os.path.join(config.output_dir, "logs"),
...     )
...     if accelerator.is_main_process:
...         if config.output_dir is not None:
...             os.makedirs(config.output_dir, exist_ok=True)
...         if config.push_to_hub:
...             repo_id = create_repo(
...                 repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True
...             ).repo_id
...         accelerator.init_trackers("train_example")

...     # Prepare everything
...     # There is no specific order to remember, you just need to unpack the
...     # objects in the same order you gave them to the prepare method.
...     model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
...         model, optimizer, train_dataloader, lr_scheduler
...     )

...     global_step = 0

...     # Now you train the model
...     for epoch in range(config.num_epochs):
...         progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process)
...         progress_bar.set_description(f"Epoch {epoch}")

...         for step, batch in enumerate(train_dataloader):
...             clean_images = batch["images"]
...             # Sample noise to add to the images
...             noise = torch.randn(clean_images.shape, device=clean_images.device)
...             bs = clean_images.shape[0]

...             # Sample a random timestep for each image
...             timesteps = torch.randint(
...                 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device,
...                 dtype=torch.int64
...             )

...             # Add noise to the clean images according to the noise magnitude at each timestep
...             # (this is the forward diffusion process)
...             noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

...             with accelerator.accumulate(model):
...                 # Predict the noise residual
...                 noise_pred = model(noisy_images, timesteps, return_dict=False)[0]
...                 loss = F.mse_loss(noise_pred, noise)
...                 accelerator.backward(loss)

...                 if accelerator.sync_gradients:
...                     accelerator.clip_grad_norm_(model.parameters(), 1.0)
...                 optimizer.step()
...                 lr_scheduler.step()
...                 optimizer.zero_grad()

...             progress_bar.update(1)
...             logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
...             progress_bar.set_postfix(**logs)
...             accelerator.log(logs, step=global_step)
...             global_step += 1

...         # After each epoch you optionally sample some demo images with evaluate() and save the model
...         if accelerator.is_main_process:
...             pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler)

...             if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1:
...                 evaluate(config, epoch, pipeline)

...             if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1:
...                 if config.push_to_hub:
...                     upload_folder(
...                         repo_id=repo_id,
...                         folder_path=config.output_dir,
...                         commit_message=f"Epoch {epoch}",
...                         ignore_patterns=["step_*", "epoch_*"],
...                     )
...                 else:
...                     pipeline.save_pretrained(config.output_dir)
```

Phew, that was quite a bit of code! But you're finally ready to launch the training with 🤗 Accelerate's [notebook_launcher](https://huggingface.co/docs/accelerate/main/en/package_reference/launchers#accelerate.notebook_launcher) function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training:

```py
>>> from accelerate import notebook_launcher

>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)

>>> notebook_launcher(train_loop, args, num_processes=1)
```

Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model!

```py
>>> import glob

>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png"))
>>> Image.open(sample_images[-1])
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_final.png"/>
</div>

## Next steps

Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the [🧨 Diffusers Training Examples](../training/overview) page. Here are some examples of what you can learn:

* [Textual Inversion](../training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image.
* [DreamBooth](../training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject.
* [Guide](../training/text2image) to finetuning a Stable Diffusion model on your own dataset.
* [Guide](../training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/tutorials/basic_training.md" />

### Intel Gaudi
https://huggingface.co/docs/diffusers/main/optimization/habana.md

# Intel Gaudi

The Intel Gaudi AI accelerator family includes [Intel Gaudi 1](https://habana.ai/products/gaudi/), [Intel Gaudi 2](https://habana.ai/products/gaudi2/), and [Intel Gaudi 3](https://habana.ai/products/gaudi3/). Each server is equipped with 8 devices, known as Habana Processing Units (HPUs), providing 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on the first-gen Gaudi. For more details on the underlying hardware architecture, check out the [Gaudi Architecture](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html) overview.

Diffusers pipelines can take advantage of HPU acceleration, even if a pipeline hasn't been added to [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index) yet, with the [GPU Migration Toolkit](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Model_Porting/GPU_Migration_Toolkit/GPU_Migration_Toolkit.html).

Call `.to("hpu")` on your pipeline to move it to an HPU device as shown below for Flux:

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipeline.to("hpu")

image = pipeline("An image of a squirrel in Picasso style").images[0]
```

> [!TIP]
> For Gaudi-optimized diffusion pipeline implementations, we recommend using [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index).
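
For reference, a minimal sketch of what the Optimum for Intel Gaudi path can look like is shown below. The class names and arguments (`GaudiStableDiffusionPipeline`, `GaudiDDIMScheduler`, `use_habana`, `use_hpu_graphs`, `gaudi_config`) follow the Optimum for Intel Gaudi documentation and may change between releases, so treat this as a starting point rather than a drop-in recipe.

```py
# Sketch based on Optimum for Intel Gaudi (pip install optimum[habana]); the API may vary by version.
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

scheduler = GaudiDDIMScheduler.from_pretrained(
    "stabilityai/stable-diffusion-2-base", subfolder="scheduler"
)
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",
    scheduler=scheduler,
    use_habana=True,       # run on HPU instead of CPU/GPU
    use_hpu_graphs=True,   # capture HPU graphs to reduce host-side overhead
    gaudi_config="Habana/stable-diffusion-2",
)

image = pipeline("An image of a squirrel in Picasso style").images[0]
```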


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/habana.md" />

### Token merging
https://huggingface.co/docs/diffusers/main/optimization/tome.md

# Token merging

[Token merging](https://huggingface.co/papers/2303.17604) (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network, which can reduce the inference latency of [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline).

Install ToMe from `pip`:

```bash
pip install tomesd
```

You can use ToMe from the [`tomesd`](https://github.com/dbolya/tomesd) library with the [`apply_patch`](https://github.com/dbolya/tomesd?tab=readme-ov-file#usage) function:

```diff
  from diffusers import StableDiffusionPipeline
  import torch
  import tomesd

  pipeline = StableDiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
  ).to("cuda")
+ tomesd.apply_patch(pipeline, ratio=0.5)

  image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```

The `apply_patch` function exposes a number of [arguments](https://github.com/dbolya/tomesd#usage) to help strike a balance between pipeline inference speed and the quality of the generated images. The most important argument is `ratio`, which controls the number of tokens that are merged during the forward pass.

As reported in the [paper](https://huggingface.co/papers/2303.17604), ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the `ratio`, you can speed up inference even further, but at the cost of some degraded image quality.
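
For example, here is a quick sketch of how you might tune that trade-off, reusing the patched `pipeline` from the snippet above (the keyword arguments beyond `ratio` come from the `tomesd` README and may differ across versions):

```py
import tomesd

# Remove the previous patch before re-applying with different settings.
tomesd.remove_patch(pipeline)

# A lower ratio merges fewer tokens: slower than ratio=0.5, but closer to the original quality.
tomesd.apply_patch(pipeline, ratio=0.3, max_downsample=1)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```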

To test the quality of the generated images, we sampled a few prompts from [Parti Prompts](https://parti.research.google/) and performed inference with the [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) with the following settings:

<div class="flex justify-center">
      <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/tome/tome_samples.png">
</div>

We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this [WandB report](https://wandb.ai/sayakpaul/tomesd-results/runs/23j4bj3i?workspace=). If you're interested in reproducing this experiment, use this [script](https://gist.github.com/sayakpaul/8cac98d7f22399085a060992f411ecbd).

## Benchmarks

We also benchmarked the impact of `tomesd` on the [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) with [xFormers](https://huggingface.co/docs/diffusers/optimization/xformers) enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment:

```bash
- `diffusers` version: 0.15.1
- Python version: 3.8.16
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Huggingface_hub version: 0.13.2
- Transformers version: 4.27.2
- Accelerate version: 0.18.0
- xFormers version: 0.0.16
- tomesd version: 0.1.2
```

To reproduce this benchmark, feel free to use this [script](https://gist.github.com/sayakpaul/27aec6bca7eb7b0e0aa4112205850335). The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers.

| **GPU**  | **Resolution** | **Batch size** | **Vanilla** | **ToMe**       | **ToMe + xFormers** |
|----------|----------------|----------------|-------------|----------------|---------------------|
| **A100** |            512 |             10 |        6.88 | 5.26 (+23.55%) |      4.69 (+31.83%) |
|          |            768 |             10 |         OOM |          14.71 |                  11 |
|          |                |              8 |         OOM |          11.56 |                8.84 |
|          |                |              4 |         OOM |           5.98 |                4.66 |
|          |                |              2 |        4.99 | 3.24 (+35.07%) |       2.1 (+37.88%) |
|          |                |              1 |        3.29 | 2.24 (+31.91%) |       2.03 (+38.3%) |
|          |           1024 |             10 |         OOM |            OOM |                 OOM |
|          |                |              8 |         OOM |            OOM |                 OOM |
|          |                |              4 |         OOM |          12.51 |                9.09 |
|          |                |              2 |         OOM |           6.52 |                4.96 |
|          |                |              1 |         6.4 | 3.61 (+43.59%) |      2.81 (+56.09%) |
| **V100** |            512 |             10 |         OOM |          10.03 |                9.29 |
|          |                |              8 |         OOM |           8.05 |                7.47 |
|          |                |              4 |         5.7 |  4.3 (+24.56%) |      3.98 (+30.18%) |
|          |                |              2 |        3.14 | 2.43 (+22.61%) |      2.27 (+27.71%) |
|          |                |              1 |        1.88 | 1.57 (+16.49%) |      1.57 (+16.49%) |
|          |            768 |             10 |         OOM |            OOM |               23.67 |
|          |                |              8 |         OOM |            OOM |               18.81 |
|          |                |              4 |         OOM |          11.81 |                 9.7 |
|          |                |              2 |         OOM |           6.27 |                 5.2 |
|          |                |              1 |        5.43 | 3.38 (+37.75%) |      2.82 (+48.07%) |
|          |           1024 |             10 |         OOM |            OOM |                 OOM |
|          |                |              8 |         OOM |            OOM |                 OOM |
|          |                |              4 |         OOM |            OOM |               19.35 |
|          |                |              2 |         OOM |             13 |               10.78 |
|          |                |              1 |         OOM |           6.66 |                5.54 |

As seen in the tables above, the speed-up from `tomesd` becomes more pronounced for larger image resolutions. It is also interesting to note that with `tomesd`, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed up inference even more with [`torch.compile`](fp16#torchcompile).
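
If you want to try that combination, a minimal sketch is shown below, reusing the patched `pipeline` from the earlier example. Whether the ToMe-patched UNet compiles cleanly depends on your `tomesd` and PyTorch versions, so treat this as an experiment rather than a guaranteed speed-up.

```py
import torch

# Compile the patched UNet; the first call is slow while kernels are compiled.
pipeline.unet = torch.compile(pipeline.unet)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```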


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/tome.md" />

### xDiT
https://huggingface.co/docs/diffusers/main/optimization/xdit.md

# xDiT

[xDiT](https://github.com/xdit-project/xDiT) is an inference engine designed for the large scale parallel deployment of Diffusion Transformers (DiTs). xDiT provides a suite of efficient parallel approaches for Diffusion Models, as well as GPU kernel accelerations.

There are four parallel methods supported in xDiT, including [Unified Sequence Parallelism](https://huggingface.co/papers/2405.07719), [PipeFusion](https://huggingface.co/papers/2405.14430), CFG parallelism and data parallelism. The four parallel methods in xDiT can be configured in a hybrid manner, optimizing communication patterns to best suit the underlying network hardware.

Optimizations orthogonal to parallelization focus on accelerating single-GPU performance. In addition to well-known attention optimization libraries, xDiT leverages compilation acceleration technologies such as torch.compile and onediff.

An overview of xDiT is shown below.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/methods/xdit_overview.png">
</div>

You can install xDiT using the following command:

```bash
pip install xfuser
```

Here's an example of using xDiT to accelerate inference of a Diffusers model.

```diff
 import torch
 from diffusers import StableDiffusion3Pipeline

 from xfuser import xFuserArgs, xDiTParallel
 from xfuser.config import FlexibleArgumentParser
 from xfuser.core.distributed import get_world_group

 def main():
+    parser = FlexibleArgumentParser(description="xFuser Arguments")
+    args = xFuserArgs.add_cli_args(parser).parse_args()
+    engine_args = xFuserArgs.from_cli_args(args)
+    engine_config, input_config = engine_args.create_config()

     local_rank = get_world_group().local_rank
     pipe = StableDiffusion3Pipeline.from_pretrained(
         pretrained_model_name_or_path=engine_config.model_config.model,
         torch_dtype=torch.float16,
     ).to(f"cuda:{local_rank}")
    
     # do anything you want with pipeline here

+    pipe = xDiTParallel(pipe, engine_config, input_config)

     pipe(
         height=input_config.height,
         width=input_config.width,
         prompt=input_config.prompt,
         num_inference_steps=input_config.num_inference_steps,
         output_type=input_config.output_type,
         generator=torch.Generator(device="cuda").manual_seed(input_config.seed),
     )

+    if input_config.output_type == "pil":
+        pipe.save("results", "stable_diffusion_3")

if __name__ == "__main__":
    main()

```

As you can see, we only need to use `xFuserArgs` from xDiT to get the configuration parameters, and then pass these parameters, along with the pipeline object from the Diffusers library, to `xDiTParallel` to parallelize a specific Diffusers pipeline.

xDiT runtime parameters can be viewed in the command line using `-h`, and you can refer to this [usage](https://github.com/xdit-project/xDiT?tab=readme-ov-file#2-usage) example for more details.

xDiT needs to be launched using torchrun to support its multi-node, multi-GPU parallel capabilities. For example, the following command can be used for 8-GPU parallel inference:

```bash
torchrun --nproc_per_node=8 ./inference.py --model models/FLUX.1-dev --data_parallel_degree 2 --ulysses_degree 2 --ring_degree 2 --prompt "A snowy mountain" "A small dog" --num_inference_steps 50
```

## Supported models

A subset of Diffusers models are supported in xDiT, such as Flux.1, Stable Diffusion 3, etc. The latest supported models can be found [here](https://github.com/xdit-project/xDiT?tab=readme-ov-file#-supported-dits).

## Benchmark
We tested different models on various machines, and here is some of the benchmark data.

### Flux.1-schnell
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/flux/Flux-2k-L40.png">
</div>


<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/flux/Flux-2K-A100.png">
</div>

### Stable Diffusion 3
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/sd3/L40-SD3.png">
</div>

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/sd3/A100-SD3.png">
</div>

### HunyuanDiT
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/hunuyuandit/L40-HunyuanDiT.png">
</div>

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/hunuyuandit/V100-HunyuanDiT.png">
</div>

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/hunuyuandit/T4-HunyuanDiT.png">
</div>

More detailed performance metrics can be found on the xDiT [GitHub page](https://github.com/xdit-project/xDiT?tab=readme-ov-file#perf).

## Reference

[xDiT-project](https://github.com/xdit-project/xDiT)

[USP: A Unified Sequence Parallelism Approach for Long Context Generative AI](https://huggingface.co/papers/2405.07719)

[PipeFusion: Displaced Patch Pipeline Parallelism for Inference of Diffusion Transformer Models](https://huggingface.co/papers/2405.14430)

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/xdit.md" />

### DeepCache
https://huggingface.co/docs/diffusers/main/optimization/deepcache.md

# DeepCache
[DeepCache](https://huggingface.co/papers/2312.00858) accelerates [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) and [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline) by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture.

Start by installing [DeepCache](https://github.com/horseee/DeepCache):
```bash
pip install DeepCache
```

Then load and enable the [`DeepCacheSDHelper`](https://github.com/horseee/DeepCache#usage):

```diff
  import torch
  from diffusers import StableDiffusionPipeline
  pipe = StableDiffusionPipeline.from_pretrained('stable-diffusion-v1-5/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda")

+ from DeepCache import DeepCacheSDHelper
+ helper = DeepCacheSDHelper(pipe=pipe)
+ helper.set_params(
+     cache_interval=3,
+     cache_branch_id=0,
+ )
+ helper.enable()

  image = pipe("a photo of an astronaut on a moon").images[0]
```

The `set_params` method accepts two arguments: `cache_interval` and `cache_branch_id`. `cache_interval` means the frequency of feature caching, specified as the number of steps between each cache operation. `cache_branch_id` identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes.
Opting for a lower `cache_branch_id` or a larger `cache_interval` can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the [paper](https://huggingface.co/papers/2312.00858)). Once those arguments are set, use the `enable` or `disable` methods to activate or deactivate the `DeepCacheSDHelper`.
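
For example, assuming the `helper` and `pipe` from the snippet above, you could re-run with a more aggressive caching schedule and then switch DeepCache off again for a reference image; the method names follow the `DeepCacheSDHelper` usage shown earlier.

```py
# Trade a bit of quality for extra speed by caching more aggressively.
helper.disable()
helper.set_params(cache_interval=5, cache_branch_id=0)
helper.enable()
image_fast = pipe("a photo of an astronaut on a moon").images[0]

# Deactivate DeepCache to get the original pipeline behavior back.
helper.disable()
image_reference = pipe("a photo of an astronaut on a moon").images[0]
```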

<div class="flex justify-center">
    <img src="https://github.com/horseee/Diffusion_DeepCache/raw/master/static/images/example.png">
</div>

You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the [WandB report](https://wandb.ai/horseee/DeepCache/runs/jwlsqqgt?workspace=user-horseee). The prompts are randomly selected from the [MS-COCO 2017](https://cocodataset.org/#home) dataset.

## Benchmark

We tested how much faster DeepCache accelerates [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B).

| **Resolution** | **Batch size** | **Original** | **DeepCache(I=3, B=0)** | **DeepCache(I=5, B=0)** | **DeepCache(I=5, B=1)** |
|----------------|----------------|--------------|-------------------------|-------------------------|-------------------------|
|             512|               8|         15.96|              6.88(2.32x)|              5.03(3.18x)|              7.27(2.20x)|
|                |               4|          8.39|              3.60(2.33x)|              2.62(3.21x)|              3.75(2.24x)|
|                |               1|          2.61|              1.12(2.33x)|              0.81(3.24x)|              1.11(2.35x)|
|             768|               8|         43.58|             18.99(2.29x)|             13.96(3.12x)|             21.27(2.05x)|
|                |               4|         22.24|              9.67(2.30x)|              7.10(3.13x)|             10.74(2.07x)|
|                |               1|          6.33|              2.72(2.33x)|              1.97(3.21x)|              2.98(2.12x)|
|            1024|               8|        101.95|             45.57(2.24x)|             33.72(3.02x)|             53.00(1.92x)|
|                |               4|         49.25|             21.86(2.25x)|             16.19(3.04x)|             25.78(1.91x)|
|                |               1|         13.83|              6.07(2.28x)|              4.43(3.12x)|              7.15(1.93x)|


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/deepcache.md" />

### xFormers
https://huggingface.co/docs/diffusers/main/optimization/xformers.md

# xFormers

We recommend [xFormers](https://github.com/facebookresearch/xformers) for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.

Install xFormers from `pip`:

```bash
pip install xformers
```

> [!TIP]
> The xFormers `pip` package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend [installing xFormers from the source](https://github.com/facebookresearch/xformers#installing-xformers).

After xFormers is installed, you can use `enable_xformers_memory_efficient_attention()` for faster inference and reduced memory consumption as shown in this [section](memory#memory-efficient-attention).
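
As a quick reference, enabling it on a Stable Diffusion pipeline looks like the sketch below (call `disable_xformers_memory_efficient_attention()` to switch it back off).

```py
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap the default attention processor for xFormers memory-efficient attention.
pipeline.enable_xformers_memory_efficient_attention()

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```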

> [!WARNING]
> According to this [issue](https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212), xFormers `v0.0.16` cannot be used for training (fine-tuning or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/xformers.md" />

### Accelerate inference
https://huggingface.co/docs/diffusers/main/optimization/fp16.md

# Accelerate inference

Diffusion models are slow at inference because generation is an iterative process where noise is gradually refined into an image or video over a certain number of "steps". To speed up this process, you can try experimenting with different [schedulers](../api/schedulers/overview), reduce the precision of the model weights for faster computations, use more memory-efficient attention mechanisms, and more.

Combine and use these techniques together to make inference faster than using any single technique on its own.

This guide will go over how to accelerate inference.

## Model data type

The precision and data type of the model weights affect inference speed because a higher precision requires more memory to load and more time to perform the computations. PyTorch loads model weights in float32 or full precision by default, so changing the data type is a simple way to quickly get faster inference.

<hfoptions id="dtypes">
<hfoption id="bfloat16">

bfloat16 is similar to float16 but it is more robust to numerical errors. Hardware support for bfloat16 varies, but most modern GPUs are capable of supporting bfloat16.

```py
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
pipeline(prompt, num_inference_steps=30).images[0]
```

</hfoption>
<hfoption id="float16">

float16 is similar to bfloat16 but may be more prone to numerical errors.

```py
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
pipeline(prompt, num_inference_steps=30).images[0]
```

</hfoption>
<hfoption id="TensorFloat-32">

[TensorFloat-32 (tf32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode is supported on NVIDIA Ampere GPUs and it computes the convolution and matrix multiplication operations in tf32. Storage and other operations are kept in float32. This enables significantly faster computations when combined with bfloat16 or float16.

PyTorch only enables tf32 mode for convolutions by default and you'll need to explicitly enable it for matrix multiplications.

```py
import torch
from diffusers import StableDiffusionXLPipeline

torch.backends.cuda.matmul.allow_tf32 = True

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
pipeline(prompt, num_inference_steps=30).images[0]
```

Refer to the [mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#mixed-precision) docs for more details.

</hfoption>
</hfoptions>

## Scaled dot product attention

> [!TIP]
> Memory-efficient attention optimizes for inference speed *and* [memory usage](./memory#memory-efficient-attention)!

[Scaled dot product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) implements several attention backends: [FlashAttention](https://github.com/Dao-AILab/flash-attention), [xFormers](https://github.com/facebookresearch/xformers), and a native C++ implementation. It automatically selects the optimal backend for your hardware.

SDPA is enabled by default if you're using PyTorch >= 2.0 and no additional changes are required to your code. You could try experimenting with other attention backends though if you'd like to choose your own. The example below uses the [torch.nn.attention.sdpa_kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html) context manager to enable efficient attention.

```py
from torch.nn.attention import SDPBackend, sdpa_kernel
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
  image = pipeline(prompt, num_inference_steps=30).images[0]
```

## torch.compile

[torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) accelerates inference by compiling PyTorch code and operations into optimized kernels. Diffusers typically compiles the more compute-intensive models like the UNet, transformer, or VAE.

Enable the following compiler settings for maximum speed (refer to the [full list](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py) for more options).

```py
import torch
from diffusers import StableDiffusionXLPipeline

torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True
```

Load and compile the UNet and VAE. There are several different modes you can choose from, but `"max-autotune"` optimizes for the fastest speed by compiling to a CUDA graph. CUDA graphs effectively reduce overhead by launching multiple GPU operations through a single CPU operation.

> [!TIP]
> With PyTorch 2.3.1, you can control the caching behavior of torch.compile. This is particularly beneficial for compilation modes like `"max-autotune"` which performs a grid-search over several compilation flags to find the optimal configuration. Learn more in the [Compile Time Caching in torch.compile](https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html) tutorial.

Changing the memory layout to [channels_last](./memory#torchchannels_last) also optimizes memory and inference speed.

```py
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.unet.to(memory_format=torch.channels_last)
pipeline.vae.to(memory_format=torch.channels_last)
pipeline.unet = torch.compile(
    pipeline.unet, mode="max-autotune", fullgraph=True
)
pipeline.vae.decode = torch.compile(
    pipeline.vae.decode,
    mode="max-autotune",
    fullgraph=True
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
pipeline(prompt, num_inference_steps=30).images[0]
```

Compilation is slow the first time, but once compiled, it is significantly faster. Try to only use the compiled pipeline on the same type of inference operations. Calling the compiled pipeline on a different image size retriggers compilation which is slow and inefficient.

### Dynamic shape compilation

> [!TIP]
> Make sure to always use the nightly version of PyTorch for better support.

`torch.compile` keeps track of input shapes and conditions, and if these are different, it recompiles the model. For example, if a model is compiled on a 1024x1024 resolution image and used on an image with a different resolution, it triggers recompilation.

To avoid recompilation when shapes change, add `dynamic=True` so the compiler generates a more dynamic kernel that can handle varying input shapes.

```diff
+ torch.fx.experimental._config.use_duck_shape = False
+ pipeline.unet = torch.compile(
    pipeline.unet, fullgraph=True, dynamic=True
)
```

Setting `use_duck_shape=False` tells the compiler not to reuse the same symbolic variable for input sizes that happen to be equal, which helps avoid unnecessary recompilations when those sizes later diverge. For more details, check out this [comment](https://github.com/huggingface/diffusers/pull/11327#discussion_r2047659790).

Not all models may benefit from dynamic compilation out of the box and may require changes. Refer to this [PR](https://github.com/huggingface/diffusers/pull/11297/) that improved the [AuraFlowPipeline](/docs/diffusers/main/en/api/pipelines/aura_flow#diffusers.AuraFlowPipeline) implementation to benefit from dynamic compilation.

Feel free to open an issue if dynamic compilation doesn't work as expected for a Diffusers model.

### Regional compilation

[Regional compilation](https://docs.pytorch.org/tutorials/recipes/regional_compilation.html) trims cold-start latency by only compiling the *small and frequently-repeated block(s)* of a model - typically a transformer layer - and enables reusing compiled artifacts for every subsequent occurrence.
For many diffusion architectures, this delivers the same runtime speedups as full-graph compilation and reduces compile time by 8–10x.

Use the [compile_repeated_blocks()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.compile_repeated_blocks) method, a helper that wraps `torch.compile`, on any component such as the transformer model as shown below.

```py
# pip install -U diffusers
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# compile only the repeated transformer layers inside the UNet
pipeline.unet.compile_repeated_blocks(fullgraph=True)
```

To enable regional compilation for a new model, add a `_repeated_blocks` attribute to a model class containing the class names (as strings) of the blocks you want to compile.

```py
class MyUNet(ModelMixin):
    _repeated_blocks = ("Transformer2DModel",)  # ← compiled by default
```

> [!TIP]
> For more regional compilation examples, see the reference [PR](https://github.com/huggingface/diffusers/pull/11705).

There is also a [compile_regions](https://github.com/huggingface/accelerate/blob/273799c85d849a1954a4f2e65767216eb37fa089/src/accelerate/utils/other.py#L78) method in [Accelerate](https://huggingface.co/docs/accelerate/index) that automatically selects candidate blocks in a model to compile. The remaining graph is compiled separately. This is useful for quick experiments because you don't have to choose which blocks to compile or adjust compilation flags yourself.

```py
# pip install -U accelerate
import torch
from diffusers import StableDiffusionXLPipeline
from accelerate.utils import compile_regions

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.unet = compile_regions(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

[compile_repeated_blocks()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.compile_repeated_blocks) is intentionally explicit. List the blocks to repeat in `_repeated_blocks` and the helper only compiles those blocks. It offers predictable behavior and easy reasoning about cache reuse in one line of code.

### Graph breaks

It is important to specify `fullgraph=True` in torch.compile to ensure there are no graph breaks in the underlying model. This allows you to take advantage of torch.compile without any performance degradation. For the UNet and VAE, this changes how you access the return variables.

```diff
- latents = unet(
-   latents, timestep=timestep, encoder_hidden_states=prompt_embeds
-).sample

+ latents = unet(
+   latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False
+)[0]
```

### GPU sync

The `step()` function is [called](https://github.com/huggingface/diffusers/blob/1d686bac8146037e97f3fd8c56e4063230f71751/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L1228) on the scheduler each time after the denoiser makes a prediction, and the `sigmas` variable is [indexed](https://github.com/huggingface/diffusers/blob/1d686bac8146037e97f3fd8c56e4063230f71751/src/diffusers/schedulers/scheduling_euler_discrete.py#L476). When placed on the GPU, it introduces latency because of the communication sync between the CPU and GPU. It becomes more evident when the denoiser has already been compiled.

In general, the `sigmas` should [stay on the CPU](https://github.com/huggingface/diffusers/blob/35a969d297cba69110d175ee79c59312b9f49e1e/src/diffusers/schedulers/scheduling_euler_discrete.py#L240) to avoid the communication sync and latency.

> [!TIP]
> Refer to the [torch.compile and Diffusers: A Hands-On Guide to Peak Performance](https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/) blog post for maximizing performance with `torch.compile` for diffusion models.

### Benchmarks

Refer to the [diffusers/benchmarks](https://huggingface.co/datasets/diffusers/benchmarks) dataset to see inference latency and memory usage data for compiled pipelines.

The [diffusers-torchao](https://github.com/sayakpaul/diffusers-torchao#benchmarking-results) repository also contains benchmarking results for compiled versions of Flux and CogVideoX.

## Dynamic quantization

[Dynamic quantization](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html) improves inference speed by reducing precision to enable faster math operations. This particular type of quantization determines how to scale the activations based on the data at runtime rather than using a fixed scaling factor. As a result, the scaling factor is more accurately aligned with the data.

The example below applies [dynamic int8 quantization](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html) to the UNet and VAE with the [torchao](../quantization/torchao) library.

> [!TIP]
> Refer to our [torchao](../quantization/torchao) docs to learn more about how to use the Diffusers torchao integration.

Configure the compiler tags for maximum speed.

```py
import torch
from torchao import apply_dynamic_quant
from diffusers import StableDiffusionXLPipeline

torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True
torch._inductor.config.force_fuse_int_mm_with_mul = True
torch._inductor.config.use_mixed_mm = True
```

Filter out some linear layers in the UNet and VAE which don't benefit from dynamic quantization with the [dynamic_quant_filter_fn](https://github.com/huggingface/diffusion-fast/blob/0f169640b1db106fe6a479f78c1ed3bfaeba3386/utils/pipeline_utils.py#L16).

```py
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

apply_dynamic_quant(pipeline.unet, dynamic_quant_filter_fn)
apply_dynamic_quant(pipeline.vae, dynamic_quant_filter_fn)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
pipeline(prompt, num_inference_steps=30).images[0]
```

## Fused projection matrices

> [!WARNING]
> The [fuse_qkv_projections](https://github.com/huggingface/diffusers/blob/58431f102cf39c3c8a569f32d71b2ea8caa461e1/src/diffusers/pipelines/pipeline_utils.py#L2034) method is experimental and support is limited to mostly Stable Diffusion pipelines. Take a look at this [PR](https://github.com/huggingface/diffusers/pull/6179) to learn more about how to enable it for other pipelines.

An input is projected into three subspaces, represented by the projection matrices Q, K, and V, in an attention block. These projections are typically calculated separately, but you can horizontally combine these into a single matrix and perform the projection in a single step. It increases the size of the matrix multiplications of the input projections and also improves the impact of quantization.

```py
pipeline.fuse_qkv_projections()
```

## Resources

- Read the [Presenting Flux Fast: Making Flux go brrr on H100s](https://pytorch.org/blog/presenting-flux-fast-making-flux-go-brrr-on-h100s/) blog post to learn more about how you can combine all of these optimizations with [TorchInductor](https://docs.pytorch.org/docs/stable/torch.compiler.html) and [AOTInductor](https://docs.pytorch.org/docs/stable/torch.compiler_aot_inductor.html) for a ~2.5x speedup using recipes from [flux-fast](https://github.com/huggingface/flux-fast).

    These recipes support AMD hardware and [Flux.1 Kontext Dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev).
- Read the [torch.compile and Diffusers: A Hands-On Guide to Peak Performance](https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/) blog post
to maximize performance when using `torch.compile`.

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/fp16.md" />

### Pruna
https://huggingface.co/docs/diffusers/main/optimization/pruna.md

# Pruna

[Pruna](https://github.com/PrunaAI/pruna) is a model optimization framework that offers various optimization methods - quantization, pruning, caching, compilation - for accelerating inference and reducing memory usage. A general overview of the optimization methods is shown below.


| Technique    | Description                                                                                   | Speed | Memory | Quality |
|--------------|-----------------------------------------------------------------------------------------------|:-----:|:------:|:-------:|
| `batcher`    | Groups multiple inputs together to be processed simultaneously, improving computational efficiency and reducing processing time. | ✅    | ❌     | ➖      |
| `cacher`     | Stores intermediate results of computations to speed up subsequent operations.               | ✅    | ➖     | ➖      |
| `compiler`   | Optimizes the model with instructions for specific hardware.                                 | ✅    | ➖     | ➖      |
| `distiller`  | Trains a smaller, simpler model to mimic a larger, more complex model.                       | ✅    | ✅     | ❌      |
| `quantizer`  | Reduces the precision of weights and activations, lowering memory requirements.              | ✅    | ✅     | ❌      |
| `pruner`     | Removes less important or redundant connections and neurons, resulting in a sparser, more efficient network. | ✅    | ✅     | ❌      |
| `recoverer`  | Restores the performance of a model after compression.                                       | ➖    | ➖     | ✅      |
| `factorizer` | Factorization batches several small matrix multiplications into one large fused operation. | ✅ | ➖ | ➖ |
| `enhancer`   | Enhances the model output by applying post-processing algorithms such as denoising or upscaling. | ❌ | ➖ | ✅ |

✅ (improves), ➖ (approx. the same), ❌ (worsens)

Explore the full range of optimization methods in the [Pruna documentation](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/configure.html#configure-algorithms).

## Installation

Install Pruna with the following command.

```bash
pip install pruna
```


## Optimize Diffusers models

A broad range of optimization algorithms are supported for Diffusers models as shown below.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/PrunaAI/documentation-images/resolve/main/diffusers/diffusers_combinations.png" alt="Overview of the supported optimization algorithms for diffusers models">
</div>

The example below optimizes [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
with a combination of factorizer, compiler, and cacher algorithms. This combination accelerates inference by up to 4.2x and cuts peak GPU memory usage from 34.7GB to 28.0GB, all while maintaining virtually the same output quality.

> [!TIP]
> Refer to the [Pruna optimization](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/configure.html) docs to learn more about the optimization techniques used in this example.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/PrunaAI/documentation-images/resolve/main/diffusers/flux_combination.png" alt="Optimization techniques used for FLUX.1-dev showing the combination of factorizer, compiler, and cacher algorithms">
</div>

Start by defining a `SmashConfig` with the optimization algorithms to use. To optimize the model, wrap the pipeline and the `SmashConfig` with `smash` and then use the pipeline as normal for inference.

```python
import torch
from diffusers import FluxPipeline

from pruna import PrunaModel, SmashConfig, smash

# load the model
# Try segmind/Segmind-Vega or black-forest-labs/FLUX.1-schnell with a small GPU memory
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# define the configuration
smash_config = SmashConfig()
smash_config["factorizer"] = "qkv_diffusers"
smash_config["compiler"] = "torch_compile"
smash_config["torch_compile_target"] = "module_list"
smash_config["cacher"] = "fora"
smash_config["fora_interval"] = 2

# for the best results in terms of speed you can add these configs
# however they will increase your warmup time from 1.5 min to 10 min
# smash_config["torch_compile_mode"] = "max-autotune-no-cudagraphs"
# smash_config["quantizer"] = "torchao"
# smash_config["torchao_quant_type"] = "fp8dq"
# smash_config["torchao_excluded_modules"] = "norm+embedding"

# optimize the model
smashed_pipe = smash(pipe, smash_config)

# run the model
smashed_pipe("a knitted purple prune").images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/PrunaAI/documentation-images/resolve/main/diffusers/flux_smashed_comparison.png">
</div>

After optimization, we can share and load the optimized model using the Hugging Face Hub.

```python
# save the model
smashed_pipe.save_to_hub("<username>/FLUX.1-dev-smashed")

# load the model
smashed_pipe = PrunaModel.from_hub("<username>/FLUX.1-dev-smashed")
```

## Evaluate and benchmark Diffusers models

Pruna provides the [EvaluationAgent](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/evaluate.html) to evaluate the quality of your optimized models.

We can define the metrics we care about, such as total time and throughput, and the dataset to evaluate on. We can then define a model and pass it to the `EvaluationAgent`.

<hfoptions id="eval">
<hfoption id="optimized model">

We can load an optimized model and evaluate it by defining a `Task` and passing it to the `EvaluationAgent`.

```python
import torch
from diffusers import FluxPipeline

from pruna import PrunaModel
from pruna.data.pruna_datamodule import PrunaDataModule
from pruna.evaluation.evaluation_agent import EvaluationAgent
from pruna.evaluation.metrics import (
    ThroughputMetric,
    TorchMetricWrapper,
    TotalTimeMetric,
)
from pruna.evaluation.task import Task

# define the device
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

# load the model
# Try PrunaAI/Segmind-Vega-smashed or PrunaAI/FLUX.1-dev-smashed with a small GPU memory
smashed_pipe = PrunaModel.from_hub("PrunaAI/FLUX.1-dev-smashed")

# Define the metrics
metrics = [
    TotalTimeMetric(n_iterations=20, n_warmup_iterations=5),
    ThroughputMetric(n_iterations=20, n_warmup_iterations=5),
    TorchMetricWrapper("clip"),
]

# Define the datamodule
datamodule = PrunaDataModule.from_string("LAION256")
datamodule.limit_datasets(10)

# Define the task and evaluation agent
task = Task(metrics, datamodule=datamodule, device=device)
eval_agent = EvaluationAgent(task)

# Evaluate smashed model and offload it to CPU
smashed_pipe.move_to_device(device)
smashed_pipe_results = eval_agent.evaluate(smashed_pipe)
smashed_pipe.move_to_device("cpu")
```

</hfoption>
<hfoption id="standalone model">

Instead of comparing the optimized model to the base model, you can also evaluate the standalone `diffusers` model. This is useful if you want to evaluate the performance of the model without the optimization. We can do so by wrapping it in `PrunaModel` and running the `EvaluationAgent` on it.

```python
import torch
from diffusers import FluxPipeline

from pruna import PrunaModel

# load the model
# Try PrunaAI/Segmind-Vega-smashed or PrunaAI/FLUX.1-dev-smashed with a small GPU memory
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cpu")
wrapped_pipe = PrunaModel(model=pipe)
```
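
To actually run the evaluation on this wrapped base model, you can reuse the same `Task` and `EvaluationAgent` setup from the optimized-model example; a minimal sketch, assuming `task`, `eval_agent`, and `device` are defined as above:

```python
# Assumes `eval_agent` and `device` were created exactly as in the optimized-model example.
wrapped_pipe.move_to_device(device)
base_pipe_results = eval_agent.evaluate(wrapped_pipe)
wrapped_pipe.move_to_device("cpu")
```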

</hfoption>
</hfoptions>

Now that you have seen how to optimize and evaluate your models, you can start using Pruna to optimize your own models. Luckily, we have many examples to help you get started.

> [!TIP]
> For more details about benchmarking Flux, check out the [Announcing FLUX-Juiced: The Fastest Image Generation Endpoint (2.6 times faster)!](https://huggingface.co/blog/PrunaAI/flux-fastest-image-generation-endpoint) blog post and the [InferBench](https://huggingface.co/spaces/PrunaAI/InferBench) Space.

## Reference

- [Pruna](https://github.com/pruna-ai/pruna)
- [Pruna optimization](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/configure.html#configure-algorithms)
- [Pruna evaluation](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/evaluate.html)
- [Pruna tutorials](https://docs.pruna.ai/en/stable/docs_pruna/tutorials/index.html)



<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/pruna.md" />

### How to run Stable Diffusion with Core ML
https://huggingface.co/docs/diffusers/main/optimization/coreml.md

# How to run Stable Diffusion with Core ML

[Core ML](https://developer.apple.com/documentation/coreml) is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift.

Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it's running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example.

> [!TIP]
> You can also run the `diffusers` Python codebase on Apple Silicon Macs using the `mps` accelerator built into PyTorch. This approach is explained in depth in [the mps guide](mps), but it is not compatible with native apps.

## Stable Diffusion Core ML Checkpoints

Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before you can use them inside native apps.

Thankfully, Apple engineers developed [a conversion tool](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) based on `diffusers` to convert the PyTorch checkpoints to Core ML.

Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you're interested in is already available in Core ML format:

- the [Apple](https://huggingface.co/apple) organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base
- [coreml community](https://huggingface.co/coreml-community) includes custom finetuned models
- use this [filter](https://huggingface.co/models?pipeline_tag=text-to-image&library=coreml&p=2&sort=likes) to return all available Core ML checkpoints

If you can't find the model you're interested in, we recommend you follow the instructions for [Converting Models to Core ML](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) by Apple.

## Selecting the Core ML Variant to Use

Stable Diffusion models can be converted to different Core ML variants intended for different purposes:

- The type of attention blocks used. The attention operation is used to "pay attention" to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants:
    * `split_einsum` ([introduced by Apple](https://machinelearning.apple.com/research/neural-engine-transformers)) is optimized for the ANE, which is available in modern iPhones, iPads and M-series computers.
    * The "original" attention (the base implementation used in `diffusers`) is only compatible with CPU/GPU and not ANE. It can be *faster* to run your model on CPU + GPU using `original` attention than ANE. See [this performance benchmark](https://huggingface.co/blog/fast-mac-diffusers#performance-benchmarks) as well as some [additional measures provided by the community](https://github.com/huggingface/swift-coreml-diffusers/issues/31) for additional details.

- The supported inference framework.
    * `packages` are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don't need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend.
    * `compiled` models are required for Swift code. The `compiled` models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the [`--chunk-unet` conversion option](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). If you want to support native apps, then you need to select the `compiled` variant.

The official Core ML Stable Diffusion [models](https://huggingface.co/apple/coreml-stable-diffusion-v1-4/tree/main) include these variants, but the community ones may vary:

```
coreml-stable-diffusion-v1-4
├── README.md
├── original
│   ├── compiled
│   └── packages
└── split_einsum
    ├── compiled
    └── packages
```

You can download and use the variant you need as shown below.

## Core ML Inference in Python

Install the following libraries to run Core ML inference in Python:

```bash
pip install huggingface_hub
pip install git+https://github.com/apple/ml-stable-diffusion
```

### Download the Model Checkpoints

To run inference in Python, use one of the versions stored in the `packages` folders because the `compiled` ones are only compatible with Swift. You may choose whether you want to use `original` or `split_einsum` attention.

This is how you'd download the `original` attention variant from the Hub to a directory called `models`:

```Python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```

### Inference[[python-inference]]

Once you have downloaded a snapshot of the model, you can test it using Apple's Python script.

```shell
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i ./models/coreml-stable-diffusion-v1-4_original_packages/original/packages -o </path/to/output/image> --compute-unit CPU_AND_GPU --seed 93
```

Pass the path of the downloaded checkpoint with the `-i` flag to the script. `--compute-unit` indicates the hardware you want to allow for inference. It must be one of the following options: `ALL`, `CPU_AND_GPU`, `CPU_ONLY`, `CPU_AND_NE`. You may also provide an optional output path, and a seed for reproducibility.

The inference script assumes you're using the original version of the Stable Diffusion model, `CompVis/stable-diffusion-v1-4`. If you use another model, you *have* to specify its Hub id in the inference command line, using the `--model-version` option. This works for models already supported and custom models you trained or fine-tuned yourself.

For example, if you want to use [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5):

```shell
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version stable-diffusion-v1-5/stable-diffusion-v1-5
```

## Core ML inference in Swift

Running inference in Swift is slightly faster than in Python because the models are already compiled in the `mlmodelc` format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward.

### Download

To run inference in Swift on your Mac, you need one of the `compiled` checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the `compiled` variants:

```Python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/compiled"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```

### Inference[[swift-inference]]

To run inference, please clone Apple's repo:

```bash
git clone https://github.com/apple/ml-stable-diffusion
cd ml-stable-diffusion
```

And then use Apple's command line tool, [Swift Package Manager](https://www.swift.org/package-manager/#):

```bash
swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars"
```

You have to specify in `--resource-path` one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension `.mlmodelc`. The `--compute-units` option has to be one of these values: `all`, `cpuOnly`, `cpuAndGPU`, `cpuAndNeuralEngine`.

For more details, please refer to the [instructions in Apple's repo](https://github.com/apple/ml-stable-diffusion).

## Supported Diffusers Features

The Core ML models and inference code don't support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind:

- Core ML models are only suitable for inference. They can't be used for training or fine-tuning.
- Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and `DPMSolverMultistepScheduler`, which we ported to Swift from our `diffusers` implementation. We recommend you use `DPMSolverMultistepScheduler`, since it produces the same quality in about half the steps.
- Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet.

Apple's [conversion and inference repo](https://github.com/apple/ml-stable-diffusion) and our own [swift-coreml-diffusers](https://github.com/huggingface/swift-coreml-diffusers) repo are intended as technology demonstrators for other developers to build upon.

If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂.

## Native Diffusers Swift app

One easy way to run Stable Diffusion on your own Apple hardware is to use [our open-source Swift repo](https://github.com/huggingface/swift-coreml-diffusers), based on `diffusers` and Apple's conversion and inference repo. You can study the code, compile it with [Xcode](https://developer.apple.com/xcode/) and adapt it for your own needs. For your convenience, there's also a [standalone Mac app in the App Store](https://apps.apple.com/app/diffusers/id1666309574), so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can't wait to see what you'll build 🙂.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/coreml.md" />

### T-GATE
https://huggingface.co/docs/diffusers/main/optimization/tgate.md

# T-GATE

[T-GATE](https://github.com/HaozheLiu-ST/T-GATE/tree/main) accelerates inference for [Stable Diffusion](../api/pipelines/stable_diffusion/overview), [PixArt](../api/pipelines/pixart), and [Latent Consistency Model](../api/pipelines/latent_consistency_models.md) pipelines by skipping the cross-attention calculation once it converges. This method doesn't require any additional training and it can speed up inference by 10-50%. T-GATE is also compatible with other optimization methods like [DeepCache](./deepcache).

Before you begin, make sure you install T-GATE.

```bash
pip install tgate
pip install -U torch diffusers transformers accelerate DeepCache
```


To use T-GATE with a pipeline, you need to use its corresponding loader.

| Pipeline | T-GATE Loader |
|---|---|
| PixArt | TgatePixArtLoader |
| Stable Diffusion XL | TgateSDXLLoader |
| Stable Diffusion XL + DeepCache | TgateSDXLDeepCacheLoader |
| Stable Diffusion | TgateSDLoader |
| Stable Diffusion + DeepCache | TgateSDDeepCacheLoader |

Next, create the loader with a pipeline, the gate step (the time step to stop calculating the cross attention), and the number of inference steps. Then call the `tgate` method on the pipeline with a prompt, gate step, and the number of inference steps.

Let's see how to enable this for several different pipelines.

<hfoptions id="pipelines">
<hfoption id="PixArt">

Accelerate `PixArtAlphaPipeline` with T-GATE:

```py
import torch
from diffusers import PixArtAlphaPipeline
from tgate import TgatePixArtLoader

pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16)

gate_step = 8
inference_step = 25
pipe = TgatePixArtLoader(
       pipe,
       gate_step=gate_step,
       num_inference_steps=inference_step,
).to("cuda")

image = pipe.tgate(
       "An alpaca made of colorful building blocks, cyberpunk.",
       gate_step=gate_step,
       num_inference_steps=inference_step,
).images[0]
```
</hfoption>
<hfoption id="Stable Diffusion XL">

Accelerate `StableDiffusionXLPipeline` with T-GATE:

```py
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import DPMSolverMultistepScheduler
from tgate import TgateSDXLLoader

pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16,
            variant="fp16",
            use_safetensors=True,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

gate_step = 10
inference_step = 25
pipe = TgateSDXLLoader(
       pipe,
       gate_step=gate_step,
       num_inference_steps=inference_step,
).to("cuda")

image = pipe.tgate(
       "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
       gate_step=gate_step,
       num_inference_steps=inference_step
).images[0]
```
</hfoption>
<hfoption id="StableDiffusionXL with DeepCache">

Accelerate `StableDiffusionXLPipeline` with [DeepCache](https://github.com/horseee/DeepCache) and T-GATE:

```py
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import DPMSolverMultistepScheduler
from tgate import TgateSDXLDeepCacheLoader

pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16,
            variant="fp16",
            use_safetensors=True,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

gate_step = 10
inference_step = 25
pipe = TgateSDXLDeepCacheLoader(
       pipe,
       cache_interval=3,
       cache_branch_id=0,
).to("cuda")

image = pipe.tgate(
       "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
       gate_step=gate_step,
       num_inference_steps=inference_step
).images[0]
```
</hfoption>
<hfoption id="Latent Consistency Model">

Accelerate `latent-consistency/lcm-sdxl` with T-GATE:

```py
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import UNet2DConditionModel, LCMScheduler
from diffusers import DPMSolverMultistepScheduler
from tgate import TgateSDXLLoader

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

gate_step = 1
inference_step = 4
pipe = TgateSDXLLoader(
       pipe,
       gate_step=gate_step,
       num_inference_steps=inference_step,
       lcm=True
).to("cuda")

image = pipe.tgate(
       "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
       gate_step=gate_step,
       num_inference_steps=inference_step
).images[0]
```
</hfoption>
</hfoptions>

T-GATE also supports [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline) and [PixArt-alpha/PixArt-LCM-XL-2-1024-MS](https://hf.co/PixArt-alpha/PixArt-LCM-XL-2-1024-MS).
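
For instance, here is a sketch of accelerating `StableDiffusionPipeline` with `TgateSDLoader`, assuming it accepts the same arguments as the loaders shown above:

```py
import torch
from diffusers import StableDiffusionPipeline
from tgate import TgateSDLoader

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

gate_step = 10
inference_step = 25
pipe = TgateSDLoader(
    pipe,
    gate_step=gate_step,
    num_inference_steps=inference_step,
).to("cuda")

image = pipe.tgate(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.",
    gate_step=gate_step,
    num_inference_steps=inference_step,
).images[0]
```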

## Benchmarks
| Model                 | MACs     | Param     | Latency | Zero-shot 10K-FID on MS-COCO |
|-----------------------|----------|-----------|---------|---------------------------|
| SD-1.5                | 16.938T  | 859.520M  | 7.032s  | 23.927                    |
| SD-1.5 w/ T-GATE       | 9.875T   | 815.557M  | 4.313s  | 20.789                    |
| SD-2.1                | 38.041T  | 865.785M  | 16.121s | 22.609                    |
| SD-2.1 w/ T-GATE       | 22.208T  | 815.433 M | 9.878s  | 19.940                    |
| SD-XL                 | 149.438T | 2.570B    | 53.187s | 24.628                    |
| SD-XL w/ T-GATE        | 84.438T  | 2.024B    | 27.932s | 22.738                    |
| Pixart-Alpha          | 107.031T | 611.350M  | 61.502s | 38.669                    |
| Pixart-Alpha w/ T-GATE | 65.318T  | 462.585M  | 37.867s | 35.825                    |
| DeepCache (SD-XL)     | 57.888T  | -         | 19.931s | 23.755                    |
| DeepCache w/ T-GATE    | 43.868T  | -         | 14.666s | 23.999                    |
| LCM (SD-XL)           | 11.955T  | 2.570B    | 3.805s  | 25.044                    |
| LCM w/ T-GATE          | 11.171T  | 2.024B    | 3.533s  | 25.028                    |
| LCM (Pixart-Alpha)    | 8.563T   | 611.350M  | 4.733s  | 36.086                    |
| LCM w/ T-GATE          | 7.623T   | 462.585M  | 4.543s  | 37.048                    |

The latency is tested on an NVIDIA 1080TI, MACs and Params are calculated with [calflops](https://github.com/MrYxJ/calculate-flops.pytorch), and the FID is calculated with [PytorchFID](https://github.com/mseitzer/pytorch-fid).


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/tgate.md" />

### CacheDiT
https://huggingface.co/docs/diffusers/main/optimization/cache_dit.md

# CacheDiT

CacheDiT is a unified, flexible, and training-free cache acceleration framework designed to support nearly all Diffusers' DiT-based pipelines. It provides a unified cache API that supports automatic block adapter, DBCache, and more.

To learn more, refer to the [CacheDiT](https://github.com/vipshop/cache-dit) repository.

Install a stable release of CacheDiT from PyPI, or install the latest version from GitHub.

<hfoptions id="install">
<hfoption id="PyPI">

```bash
pip3 install -U cache-dit
```

</hfoption>
<hfoption id="source">

```bash
pip3 install git+https://github.com/vipshop/cache-dit.git
```

</hfoption>
</hfoptions>

Run the command below to view supported DiT pipelines.

```python
>>> import cache_dit
>>> cache_dit.supported_pipelines()
(30, ['Flux*', 'Mochi*', 'CogVideoX*', 'Wan*', 'HunyuanVideo*', 'QwenImage*', 'LTX*', 'Allegro*',
'CogView3Plus*', 'CogView4*', 'Cosmos*', 'EasyAnimate*', 'SkyReelsV2*', 'StableDiffusion3*',
'ConsisID*', 'DiT*', 'Amused*', 'Bria*', 'Lumina*', 'OmniGen*', 'PixArt*', 'Sana*', 'StableAudio*',
'VisualCloze*', 'AuraFlow*', 'Chroma*', 'ShapE*', 'HiDream*', 'HunyuanDiT*', 'HunyuanDiTPAG*'])
```

For a complete benchmark, please refer to [Benchmarks](https://github.com/vipshop/cache-dit/blob/main/bench/).


## Unified Cache API

CacheDiT works by matching specific input/output patterns as shown below.

![](https://github.com/vipshop/cache-dit/raw/main/assets/patterns-v1.png)

Call the `enable_cache()` function on a pipeline to enable cache acceleration. This function is the entry point to many of CacheDiT's features.

```python
import cache_dit
from diffusers import DiffusionPipeline 

# Can be any diffusion pipeline
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image")

# One-line code with default cache options.
cache_dit.enable_cache(pipe) 

# Just call the pipe as normal.
output = pipe(...)

# Disable cache and run original pipe.
cache_dit.disable_cache(pipe)
```

## Automatic Block Adapter

For custom or modified pipelines or transformers not included in Diffusers, use the `BlockAdapter` in `auto` mode or via manual configuration. Please check the [BlockAdapter](https://github.com/vipshop/cache-dit/blob/main/docs/User_Guide.md#automatic-block-adapter) docs for more details. Refer to [Qwen-Image w/ BlockAdapter](https://github.com/vipshop/cache-dit/blob/main/examples/adapter/run_qwen_image_adapter.py) as an example.


```python
from cache_dit import ForwardPattern, BlockAdapter

# Use 🔥BlockAdapter with `auto` mode.
cache_dit.enable_cache(
    BlockAdapter(
        # Any DiffusionPipeline, Qwen-Image, etc.  
        pipe=pipe, auto=True,
        # Check the `📚Forward Pattern Matching` documentation and inspect the code
        # of Qwen-Image; you will find that it satisfies `FORWARD_PATTERN_1`.
        forward_pattern=ForwardPattern.Pattern_1,
    ),   
)

# Or, manually setup transformer configurations.
cache_dit.enable_cache(
    BlockAdapter(
        pipe=pipe, # Qwen-Image, etc.
        transformer=pipe.transformer,
        blocks=pipe.transformer.transformer_blocks,
        forward_pattern=ForwardPattern.Pattern_1,
    ), 
)
```

Sometimes, a Transformer class will contain more than one set of transformer `blocks`. For example, FLUX.1 (and HiDream, Chroma, etc.) contains `transformer_blocks` and `single_transformer_blocks` (with different forward patterns). The BlockAdapter is able to detect this hybrid pattern type as well.
Refer to [FLUX.1](https://github.com/vipshop/cache-dit/blob/main/examples/adapter/run_flux_adapter.py) as an example.

```python
# For diffusers <= 0.34.0, FLUX.1 transformer_blocks and 
# single_transformer_blocks have different forward patterns.
cache_dit.enable_cache(
    BlockAdapter(
        pipe=pipe, # FLUX.1, etc.
        transformer=pipe.transformer,
        blocks=[
            pipe.transformer.transformer_blocks,
            pipe.transformer.single_transformer_blocks,
        ],
        forward_pattern=[
            ForwardPattern.Pattern_1,
            ForwardPattern.Pattern_3,
        ],
    ),
)
```

This also works when a pipeline has more than one transformer (namely `transformer` and `transformer_2`) in its structure. Refer to [Wan 2.2 MoE](https://github.com/vipshop/cache-dit/blob/main/examples/pipeline/run_wan_2.2.py) as an example.

## Patch Functor

For any pattern not included in CacheDiT, use the Patch Functor to convert the pattern into a known pattern. You need to subclass the Patch Functor, and you may also need to fuse the operations in the blocks' for-loop into the block `forward` method. After implementing a Patch Functor, set the `patch_functor` property in `BlockAdapter`.

![](https://github.com/vipshop/cache-dit/raw/main/assets/patch-functor.png)

Some Patch Functors are already provided in CacheDiT, [HiDreamPatchFunctor](https://github.com/vipshop/cache-dit/blob/main/src/cache_dit/cache_factory/patch_functors/functor_hidream.py), [ChromaPatchFunctor](https://github.com/vipshop/cache-dit/blob/main/src/cache_dit/cache_factory/patch_functors/functor_chroma.py), etc.

```python
@BlockAdapterRegistry.register("HiDream")
def hidream_adapter(pipe, **kwargs) -> BlockAdapter:
    from diffusers import HiDreamImageTransformer2DModel
    from cache_dit.cache_factory.patch_functors import HiDreamPatchFunctor

    assert isinstance(pipe.transformer, HiDreamImageTransformer2DModel)
    return BlockAdapter(
        pipe=pipe,
        transformer=pipe.transformer,
        blocks=[
            pipe.transformer.double_stream_blocks,
            pipe.transformer.single_stream_blocks,
        ],
        forward_pattern=[
            ForwardPattern.Pattern_0,
            ForwardPattern.Pattern_3,
        ],
        # NOTE: Setup your custom patch functor here.
        patch_functor=HiDreamPatchFunctor(),
        **kwargs,
    )
```

Finally, you can call the `cache_dit.summary()` function on a pipeline after it has completed inference to get the cache acceleration details.

```python
stats = cache_dit.summary(pipe)
```

```python
⚡️Cache Steps and Residual Diffs Statistics: QwenImagePipeline

| Cache Steps | Diffs Min | Diffs P25 | Diffs P50 | Diffs P75 | Diffs P95 | Diffs Max |
|-------------|-----------|-----------|-----------|-----------|-----------|-----------|
| 23          | 0.045     | 0.084     | 0.114     | 0.147     | 0.241     | 0.297     |
```

## DBCache: Dual Block Cache  

![](https://github.com/vipshop/cache-dit/raw/main/assets/dbcache-v1.png)

DBCache (Dual Block Caching) supports different configurations of compute blocks (F8B12, etc.) to enable a balanced trade-off between performance and precision.
- Fn_compute_blocks: Specifies that DBCache uses the **first n** Transformer blocks to fit the information at time step t, enabling the calculation of a more stable L1 diff and delivering more accurate information to subsequent blocks.
- Bn_compute_blocks: Further fuses approximate information in the **last n** Transformer blocks to enhance prediction accuracy. These blocks act as an auto-scaler for approximate hidden states that use residual cache.


```python
import torch

import cache_dit
from diffusers import FluxPipeline

pipe_or_adapter = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Default options, F8B0, 8 warmup steps, and unlimited cached 
# steps for good balance between performance and precision
cache_dit.enable_cache(pipe_or_adapter)

# Custom options, F8B8, higher precision
from cache_dit import BasicCacheConfig

cache_dit.enable_cache(
    pipe_or_adapter,
    cache_config=BasicCacheConfig(
        max_warmup_steps=8,  # warmup steps are not cached
        max_cached_steps=-1, # -1 means no limit
        Fn_compute_blocks=8, # Fn, F8, etc.
        Bn_compute_blocks=8, # Bn, B8, etc.
        residual_diff_threshold=0.12,
    ),
)
```  
Check the [DBCache](https://github.com/vipshop/cache-dit/blob/main/docs/DBCache.md) and [User Guide](https://github.com/vipshop/cache-dit/blob/main/docs/User_Guide.md#dbcache) docs for more design details.

## TaylorSeer Calibrator

The [TaylorSeers](https://huggingface.co/papers/2503.06923) algorithm further improves the precision of DBCache in cases where the cached steps are large (Hybrid TaylorSeer + DBCache). At timesteps with significant intervals, the feature similarity in diffusion models decreases substantially, significantly harming the generation quality. 

TaylorSeer employs a differential method to approximate the higher-order derivatives of features and predict features in future timesteps with Taylor series expansion. The TaylorSeer implemented in CacheDiT supports both hidden states and residual cache types. F_pred can be a residual cache or a hidden-state cache.

```python
from cache_dit import BasicCacheConfig, TaylorSeerCalibratorConfig

cache_dit.enable_cache(
    pipe_or_adapter,
    # Basic DBCache w/ FnBn configurations
    cache_config=BasicCacheConfig(
        max_warmup_steps=8,  # warmup steps are not cached
        max_cached_steps=-1, # -1 means no limit
        Fn_compute_blocks=8, # Fn, F8, etc.
        Bn_compute_blocks=8, # Bn, B8, etc.
        residual_diff_threshold=0.12,
    ),
    # Then, you can use the TaylorSeer Calibrator to approximate
    # the values at cached steps; taylorseer_order defaults to 1.
    calibrator_config=TaylorSeerCalibratorConfig(
        taylorseer_order=1,
    ),
)
``` 

> [!TIP]  
> The `Bn_compute_blocks` parameter of DBCache can be set to `0` if you use TaylorSeer as the calibrator for approximate hidden states. DBCache's `Bn_compute_blocks` also acts as a calibrator, so you can choose either `Bn_compute_blocks` > 0 or TaylorSeer. We recommend using the configuration scheme of TaylorSeer + DBCache FnB0.
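
As a minimal sketch, the recommended TaylorSeer + DBCache F8B0 scheme looks like this, reusing the `pipe_or_adapter` and configuration classes from the examples above:

```python
from cache_dit import BasicCacheConfig, TaylorSeerCalibratorConfig

cache_dit.enable_cache(
    pipe_or_adapter,
    cache_config=BasicCacheConfig(
        Fn_compute_blocks=8, # F8
        Bn_compute_blocks=0, # B0, TaylorSeer acts as the calibrator instead
    ),
    calibrator_config=TaylorSeerCalibratorConfig(
        taylorseer_order=1,
    ),
)
```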

## Hybrid Cache CFG

CacheDiT supports caching for CFG (classifier-free guidance). For models that fuse CFG and non-CFG into a single forward step, or models that do not include CFG in the forward step, set the `enable_separate_cfg` parameter to `False` (the default is `None`). Otherwise, set it to `True`.

```python
from cache_dit import BasicCacheConfig

cache_dit.enable_cache(
    pipe_or_adapter, 
    cache_config=BasicCacheConfig(
        ...,
        # For example, set it as True for Wan 2.1, Qwen-Image 
        # and set it as False for FLUX.1, HunyuanVideo, etc.
        enable_separate_cfg=True,
    ),
)
```

## torch.compile

CacheDiT is designed to work with torch.compile for even better performance. Call `torch.compile` after enabling the cache.


```python
import torch

cache_dit.enable_cache(pipe)

# Compile the Transformer module
pipe.transformer = torch.compile(pipe.transformer)
```

If you're using CacheDiT with dynamic input shapes, consider increasing the `recompile_limit` of `torch._dynamo`. Otherwise, the `recompile_limit` error may be triggered, causing the module to fall back to eager mode. 

```python
torch._dynamo.config.recompile_limit = 96  # default is 8
torch._dynamo.config.accumulated_recompile_limit = 2048  # default is 256
```

Please check [perf.py](https://github.com/vipshop/cache-dit/blob/main/bench/perf.py) for more details.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/cache_dit.md" />

### Compiling and offloading quantized models
https://huggingface.co/docs/diffusers/main/optimization/speed-memory-optims.md

# Compiling and offloading quantized models

Optimizing models often involves trade-offs between [inference speed](./fp16) and [memory-usage](./memory). For instance, while [caching](./cache) can boost inference speed, it also increases memory consumption since it needs to store the outputs of intermediate attention layers. A more balanced optimization strategy combines quantizing a model, [torch.compile](./fp16#torchcompile) and various [offloading methods](./memory#offloading).

> [!TIP]
> Check the [torch.compile](./fp16#torchcompile) guide to learn more about compilation and how they can be applied here. For example, regional compilation can significantly reduce compilation time without giving up any speedups. 
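
As a rough illustration of regional compilation mentioned in the tip, the sketch below compiles each repeated transformer block individually instead of the whole model. It assumes the loaded pipeline exposes its blocks as `pipeline.transformer.transformer_blocks`, which holds for many DiT-style transformers but not every architecture.

```py
import torch

# Compile each repeated block separately; Dynamo traces one block's graph and can
# reuse the compiled artifact for the identical sibling blocks, which shortens
# cold-start compilation compared to compiling the full transformer at once.
for i, block in enumerate(pipeline.transformer.transformer_blocks):
    pipeline.transformer.transformer_blocks[i] = torch.compile(block, fullgraph=True)
```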

For image generation, combining quantization and [model offloading](./memory#model-offloading) can often give the best trade-off between quality, speed, and memory. Group offloading is not as effective for image generation because it is usually not possible to *fully* overlap data transfer if the compute kernel finishes faster. This results in some communication overhead between the CPU and GPU.

For video generation, combining quantization and [group-offloading](./memory#group-offloading) tends to be better because video models are more compute-bound. 

The table below provides a comparison of optimization strategy combinations and their impact on latency and memory-usage for Flux.

| combination | latency (s) | memory-usage (GB) |
|---|---|---|
| quantization  | 32.602 | 14.9453 |
| quantization, torch.compile  | 25.847 | 14.9448 |
| quantization, torch.compile, model CPU offloading | 32.312 | 12.2369 |

<small>These results are benchmarked on Flux with a RTX 4090. The transformer and text_encoder components are quantized. Refer to the <a href="https://gist.github.com/sayakpaul/0db9d8eeeb3d2a0e5ed7cf0d9ca19b7d">benchmarking script</a> if you're interested in evaluating your own model.</small>

This guide will show you how to compile and offload a quantized model with [bitsandbytes](../quantization/bitsandbytes#torchcompile). Make sure you are using [PyTorch nightly](https://pytorch.org/get-started/locally/) and the latest version of bitsandbytes.

```bash
pip install -U bitsandbytes
```

## Quantization and torch.compile

Start by [quantizing](../quantization/overview) a model to reduce the memory required for storage and [compiling](./fp16#torchcompile) it to accelerate inference.

Set the [Dynamo](https://docs.pytorch.org/docs/stable/torch.compiler_dynamo_overview.html) flag `capture_dynamic_output_shape_ops = True` to handle dynamic outputs when compiling bitsandbytes models.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

torch._dynamo.config.capture_dynamic_output_shape_ops = True

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
pipeline("""
    cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
    highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
).images[0]
```

## Quantization, torch.compile, and offloading

In addition to quantization and torch.compile, try offloading if you need to reduce memory-usage further. Offloading keeps various layers or model components on the CPU and only moves them to the GPU when they are needed for computation.

Increase the [Dynamo](https://docs.pytorch.org/docs/stable/torch.compiler_dynamo_overview.html) `cache_size_limit` during offloading to avoid excessive recompilation, and set `capture_dynamic_output_shape_ops = True` to handle dynamic outputs when compiling bitsandbytes models.

<hfoptions id="offloading">
<hfoption id="model CPU offloading">

[Model CPU offloading](./memory#model-offloading) moves an individual pipeline component, like the transformer model, to the GPU when it is needed for computation. Otherwise, it is offloaded to the CPU.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

torch._dynamo.config.cache_size_limit = 1000
torch._dynamo.config.capture_dynamic_output_shape_ops = True

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# model CPU offloading
pipeline.enable_model_cpu_offload()

# compile
pipeline.transformer.compile()
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]
```

</hfoption>
<hfoption id="group offloading">

[Group offloading](./memory#group-offloading) moves the internal layers of an individual pipeline component, like the transformer model, to the GPU for computation and offloads it when it's not required. At the same time, it uses the [CUDA stream](./memory#cuda-stream) feature to prefetch the next layer for execution.

By overlapping computation and data transfer, it is faster than model CPU offloading while also saving memory. 

```py
# pip install ftfy
import torch
from diffusers import AutoModel, DiffusionPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import UMT5EncoderModel

torch._dynamo.config.cache_size_limit = 1000
torch._dynamo.config.capture_dynamic_output_shape_ops = True

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder"],
)

text_encoder = UMT5EncoderModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="text_encoder", torch_dtype=torch.bfloat16
)
pipeline = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# group offloading
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

pipeline.transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True,
    non_blocking=True
)
pipeline.vae.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True,
    non_blocking=True
)
apply_group_offloading(
    pipeline.text_encoder,
    onload_device=onload_device,
    offload_type="leaf_level",
    use_stream=True,
    non_blocking=True
)

# compile
pipeline.transformer.compile()

prompt = """
The camera rushes from far to near in a low-angle shot, 
revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in 
for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground. 
Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic 
shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
"""
negative_prompt = """
Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, 
low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, 
misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
"""

output = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

</hfoption>
</hfoptions>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/speed-memory-optims.md" />

### Attention backends
https://huggingface.co/docs/diffusers/main/optimization/attention_backends.md

# Attention backends

> [!NOTE]
> The attention dispatcher is an experimental feature. Please open an issue if you have any feedback or encounter any problems.

Diffusers provides several optimized attention algorithms that are more memory and computationally efficient through its *attention dispatcher*. The dispatcher acts as a router for managing and switching between different attention implementations and provides a unified interface for interacting with them.

Refer to the table below for an overview of the available attention families and to the [Available backends](#available-backends) section for a more complete list.

| attention family | main feature |
|---|---|
| FlashAttention | minimizes memory reads/writes through tiling and recomputation |
| SageAttention | quantizes attention to int8 |
| PyTorch native | built-in PyTorch implementation using [scaled_dot_product_attention](./fp16#scaled-dot-product-attention) |
| xFormers | memory-efficient attention with support for various attention kernels |

This guide will show you how to set and use the different attention backends.

## set_attention_backend

The [set_attention_backend()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.set_attention_backend) method iterates through all the modules in the model and sets the appropriate attention backend to use. The attention backend setting persists until [reset_attention_backend()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.reset_attention_backend) is called.

The example below demonstrates how to enable the `_flash_3_hub` implementation for FlashAttention-3 from the [kernels](https://github.com/huggingface/kernels) library, which allows you to instantly use optimized compute kernels from the Hub without requiring any setup.

> [!NOTE]
> FlashAttention-3 is not supported on non-Hopper architectures; in that case, use FlashAttention with `set_attention_backend("flash")`.

```py
import torch
from diffusers import QwenImagePipeline

pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
pipeline.transformer.set_attention_backend("_flash_3_hub")

prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
pipeline(prompt).images[0]
```

To restore the default attention backend, call [reset_attention_backend()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.reset_attention_backend).

```py
pipeline.transformer.reset_attention_backend()
```

## attention_backend context manager

The [attention_backend](https://github.com/huggingface/diffusers/blob/5e181eddfe7e44c1444a2511b0d8e21d177850a0/src/diffusers/models/attention_dispatch.py#L225) context manager temporarily sets an attention backend for a model within the context. Outside the context, the default attention (PyTorch's native scaled dot product attention) is used. This is useful if you want to use different backends for different parts of a pipeline or if you want to test the different backends.

```py
import torch
from diffusers import QwenImagePipeline
from diffusers.models.attention_dispatch import attention_backend

pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
prompt = """
cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""

with attention_backend("_flash_3_hub"):
    image = pipeline(prompt).images[0]
```

> [!TIP]
> Most attention backends support `torch.compile` without graph breaks and can be used to further speed up inference.
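
For example, here is a sketch combining a backend with compilation, reusing the `pipeline` and `prompt` from the snippets above:

```py
pipeline.transformer.set_attention_backend("_flash_3_hub")
# Compile the transformer after setting the backend (FlashAttention-3 requires a Hopper GPU)
pipeline.transformer.compile(fullgraph=True)

image = pipeline(prompt).images[0]
```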

## Available backends

Refer to the table below for a complete list of available attention backends and their variants.

<details>
<summary>Expand</summary>

| Backend Name | Family | Description |
|--------------|--------|-------------|
| `native` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | Default backend using PyTorch's scaled_dot_product_attention |
| `flex` | [FlexAttention](https://docs.pytorch.org/docs/stable/nn.attention.flex_attention.html#module-torch.nn.attention.flex_attention) | PyTorch FlexAttention implementation |
| `_native_cudnn` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | CuDNN-optimized attention |
| `_native_efficient` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | Memory-efficient attention |
| `_native_flash` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | PyTorch's FlashAttention |
| `_native_math` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | Math-based attention (fallback) |
| `_native_npu` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | NPU-optimized attention |
| `_native_xla` | [PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend) | XLA-optimized attention |
| `flash` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | FlashAttention-2 |
| `flash_varlen` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | Variable length FlashAttention |
| `_flash_3` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | FlashAttention-3 |
| `_flash_varlen_3` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | Variable length FlashAttention-3 |
| `_flash_3_hub` | [FlashAttention](https://github.com/Dao-AILab/flash-attention) | FlashAttention-3 from kernels |
| `sage` | [SageAttention](https://github.com/thu-ml/SageAttention) | Quantized attention (INT8 QK) |
| `sage_varlen` | [SageAttention](https://github.com/thu-ml/SageAttention) | Variable length SageAttention |
| `_sage_qk_int8_pv_fp8_cuda` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP8 PV (CUDA) |
| `_sage_qk_int8_pv_fp8_cuda_sm90` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP8 PV (SM90) |
| `_sage_qk_int8_pv_fp16_cuda` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP16 PV (CUDA) |
| `_sage_qk_int8_pv_fp16_triton` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP16 PV (Triton) |
| `xformers` | [xFormers](https://github.com/facebookresearch/xformers) | Memory-efficient attention |

</details>

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/attention_backends.md" />

### ParaAttention
https://huggingface.co/docs/diffusers/main/optimization/para_attn.md

# ParaAttention

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-performance.png">
</div>
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/hunyuan-video-performance.png">
</div>


Large image and video generation models, such as [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) and [HunyuanVideo](https://huggingface.co/tencent/HunyuanVideo), can be an inference challenge for real-time applications and deployment because of their size.

[ParaAttention](https://github.com/chengzeyi/ParaAttention) is a library that implements **context parallelism** and **first block cache**, and can be combined with other techniques (torch.compile, fp8 dynamic quantization), to accelerate inference.

This guide will show you how to apply ParaAttention to FLUX.1-dev and HunyuanVideo on NVIDIA L20 GPUs.
No optimizations are applied for our baseline benchmark, except for HunyuanVideo to avoid out-of-memory errors.

Our baseline benchmark shows that FLUX.1-dev is able to generate a 1024x1024 resolution image in 28 steps in 26.36 seconds, and HunyuanVideo is able to generate 129 frames at 720p resolution in 30 steps in 3675.71 seconds.

> [!TIP]
> For even faster inference with context parallelism, try using NVIDIA A100 or H100 GPUs (if available) with NVLink support, especially when there is a large number of GPUs.

## First Block Cache

Caching the output of the transformer blocks in the model and reusing them in the next inference steps reduces the computation cost and makes inference faster.

However, it is hard to decide when to reuse the cache to ensure quality generated images or videos. ParaAttention directly uses the **residual difference of the first transformer block output** to approximate the difference among model outputs. When the difference is small enough, the residual difference of previous inference steps is reused. In other words, the denoising step is skipped.

This achieves a 2x speedup on FLUX.1-dev and HunyuanVideo inference with very good quality.

<figure>
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/ada-cache.png" alt="Cache in Diffusion Transformer" />
    <figcaption>How AdaCache works, First Block Cache is a variant of it</figcaption>
</figure>

<hfoptions id="first-block-cache">
<hfoption id="FLUX-1.dev">

To apply first block cache on FLUX.1-dev, call `apply_cache_on_pipe` as shown below. 0.08 is the default residual difference value for FLUX models.

```python
import time
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe, residual_diff_threshold=0.08)

# Enable memory savings
# pipe.enable_model_cpu_offload()
# pipe.enable_sequential_cpu_offload()

begin = time.time()
image = pipe(
    "A cat holding a sign that says hello world",
    num_inference_steps=28,
).images[0]
end = time.time()
print(f"Time: {end - begin:.2f}s")

print("Saving image to flux.png")
image.save("flux.png")
```

| Optimizations | Original | FBCache rdt=0.06 | FBCache rdt=0.08 | FBCache rdt=0.10 | FBCache rdt=0.12 |
| - | - | - | - | - | - |
| Preview | ![Original](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-original.png) | ![FBCache rdt=0.06](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.06.png) | ![FBCache rdt=0.08](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.08.png) | ![FBCache rdt=0.10](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.10.png) | ![FBCache rdt=0.12](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.12.png) |
| Wall Time (s) | 26.36 | 21.83 | 17.01 | 16.00 | 13.78 |

First Block Cache reduced inference time to 17.01 seconds compared to the 26.36-second baseline, or 1.55x faster, while maintaining nearly zero quality loss.

</hfoption>
<hfoption id="HunyuanVideo">

To apply First Block Cache on HunyuanVideo, call `apply_cache_on_pipe` as shown below. 0.06 is the default residual difference value for HunyuanVideo models.

```python
import time
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe, residual_diff_threshold=0.06)

pipe.vae.enable_tiling()

begin = time.time()
output = pipe(
    prompt="A cat walks on the grass, realistic",
    height=720,
    width=1280,
    num_frames=129,
    num_inference_steps=30,
).frames[0]
end = time.time()
print(f"Time: {end - begin:.2f}s")

print("Saving video to hunyuan_video.mp4")
export_to_video(output, "hunyuan_video.mp4", fps=15)
```

<video controls>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/hunyuan-video-original.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>

<small> HunyuanVideo without FBCache </small>

<video controls>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/hunyuan-video-fbc.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>

<small> HunyuanVideo with FBCache </small>

First Block Cache reduced inference time to 2271.06 seconds compared to the 3675.71-second baseline, or 1.62x faster, while maintaining nearly zero quality loss.

</hfoption>
</hfoptions>

## fp8 quantization

fp8 with dynamic quantization further speeds up inference and reduces memory usage. Both the activations and weights must be quantized in order to use the 8-bit [NVIDIA Tensor Cores](https://www.nvidia.com/en-us/data-center/tensor-cores/).

Use `float8_weight_only` and `float8_dynamic_activation_float8_weight` to quantize the text encoder and transformer model.

The default quantization method is per-tensor quantization, but if your GPU supports row-wise quantization, you can also try it for better accuracy (a sketch is shown after the install command below).

Install [torchao](https://github.com/pytorch/ao/tree/main) with the command below.

```bash
pip3 install -U torch torchao
```
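
If you want to try row-wise quantization, a hedged sketch with torchao is shown below. It assumes a recent torchao release that exposes the `PerRow` granularity, and it applies to the `pipe` objects created in the examples that follow.

```python
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, PerRow

# Row-wise (per-row) scaling instead of the default per-tensor scaling
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight(granularity=PerRow()))
```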

[torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) with `mode="max-autotune-no-cudagraphs"` or `mode="max-autotune"` selects the best kernel for performance. Compilation can take a long time if it's the first time the model is called, but it is worth it once the model has been compiled.

This example only quantizes the transformer model, but you can also quantize the text encoder to reduce memory usage even more.

> [!TIP]
> Dynamic quantization can significantly change the distribution of the model output, so you need to change the `residual_diff_threshold` to a larger value for it to take effect.

<hfoptions id="fp8-quantization">
<hfoption id="FLUX-1.dev">

```python
import time
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(
    pipe,
    residual_diff_threshold=0.12,  # Use a larger value to make the cache take effect
)

from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only

quantize_(pipe.text_encoder, float8_weight_only())
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight())
pipe.transformer = torch.compile(
   pipe.transformer, mode="max-autotune-no-cudagraphs",
)

# Enable memory savings
# pipe.enable_model_cpu_offload()
# pipe.enable_sequential_cpu_offload()

for i in range(2):
    begin = time.time()
    image = pipe(
        "A cat holding a sign that says hello world",
        num_inference_steps=28,
    ).images[0]
    end = time.time()
    if i == 0:
        print(f"Warm up time: {end - begin:.2f}s")
    else:
        print(f"Time: {end - begin:.2f}s")

print("Saving image to flux.png")
image.save("flux.png")
```

fp8 dynamic quantization and torch.compile reduced inference time to 7.56 seconds compared to the baseline, or 3.48x faster.

</hfoption>
<hfoption id="HunyuanVideo">

```python
import time
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe)

from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only

quantize_(pipe.text_encoder, float8_weight_only())
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight())
pipe.transformer = torch.compile(
   pipe.transformer, mode="max-autotune-no-cudagraphs",
)

# Enable memory savings
pipe.vae.enable_tiling()
# pipe.enable_model_cpu_offload()
# pipe.enable_sequential_cpu_offload()

for i in range(2):
    begin = time.time()
    output = pipe(
        prompt="A cat walks on the grass, realistic",
        height=720,
        width=1280,
        num_frames=129,
        num_inference_steps=1 if i == 0 else 30,
    ).frames[0]
    end = time.time()
    if i == 0:
        print(f"Warm up time: {end - begin:.2f}s")
    else:
        print(f"Time: {end - begin:.2f}s")

print("Saving video to hunyuan_video.mp4")
export_to_video(output, "hunyuan_video.mp4", fps=15)
```

An NVIDIA L20 GPU only has 48GB of memory, so it can run into out-of-memory (OOM) errors after compilation or if `enable_model_cpu_offload` isn't called, because HunyuanVideo produces very large activation tensors at high resolutions and frame counts. For GPUs with less than 80GB of memory, try reducing the resolution and number of frames to avoid OOM errors.

Large video generation models are usually bottlenecked by the attention computations rather than the fully connected layers. These models don't significantly benefit from quantization and torch.compile.

</hfoption>
</hfoptions>

## Context Parallelism

Context Parallelism parallelizes inference and scales with multiple GPUs. The ParaAttention compositional design allows you to combine Context Parallelism with First Block Cache and dynamic quantization.

> [!TIP]
> Refer to the [ParaAttention](https://github.com/chengzeyi/ParaAttention/tree/main) repository for detailed instructions and examples of how to scale inference with multiple GPUs.

If the inference process needs to be persistent and serviceable, it is suggested to use [torch.multiprocessing](https://pytorch.org/docs/stable/multiprocessing.html) to write your own inference processor. This eliminates the overhead of repeatedly launching a process and loading and recompiling the model.
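
A minimal sketch of such a persistent worker is shown below. The `load_and_optimize_pipeline()` helper is hypothetical; it stands in for whatever pipeline setup you choose from the examples that follow.

```python
import torch.multiprocessing as mp

def worker(request_queue, result_queue):
    # Load, parallelize, and compile the pipeline once, then serve prompts until shutdown.
    pipe = load_and_optimize_pipeline()  # hypothetical setup helper
    while True:
        prompt = request_queue.get()
        if prompt is None:  # sentinel value to stop the worker
            break
        image = pipe(prompt, num_inference_steps=28).images[0]
        result_queue.put(image)

if __name__ == "__main__":
    mp.set_start_method("spawn")
    requests, results = mp.Queue(), mp.Queue()
    proc = mp.Process(target=worker, args=(requests, results))
    proc.start()

    requests.put("A cat holding a sign that says hello world")
    results.get().save("flux.png")

    requests.put(None)  # shut the worker down
    proc.join()
```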

<hfoptions id="context-parallelism">
<hfoption id="FLUX-1.dev">

The code sample below combines First Block Cache, fp8 dynamic quantization, torch.compile, and Context Parallelism for the fastest inference speed.

```python
import time
import torch
import torch.distributed as dist
from diffusers import FluxPipeline

dist.init_process_group()

torch.cuda.set_device(dist.get_rank())

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

from para_attn.context_parallel import init_context_parallel_mesh
from para_attn.context_parallel.diffusers_adapters import parallelize_pipe
from para_attn.parallel_vae.diffusers_adapters import parallelize_vae

mesh = init_context_parallel_mesh(
    pipe.device.type,
    max_ring_dim_size=2,
)
parallelize_pipe(
    pipe,
    mesh=mesh,
)
parallelize_vae(pipe.vae, mesh=mesh._flatten())

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(
    pipe,
    residual_diff_threshold=0.12,  # Use a larger value to make the cache take effect
)

from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only

quantize_(pipe.text_encoder, float8_weight_only())
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight())
torch._inductor.config.reorder_for_compute_comm_overlap = True
pipe.transformer = torch.compile(
   pipe.transformer, mode="max-autotune-no-cudagraphs",
)

# Enable memory savings
# pipe.enable_model_cpu_offload(gpu_id=dist.get_rank())
# pipe.enable_sequential_cpu_offload(gpu_id=dist.get_rank())

for i in range(2):
    begin = time.time()
    image = pipe(
        "A cat holding a sign that says hello world",
        num_inference_steps=28,
        output_type="pil" if dist.get_rank() == 0 else "pt",
    ).images[0]
    end = time.time()
    if dist.get_rank() == 0:
        if i == 0:
            print(f"Warm up time: {end - begin:.2f}s")
        else:
            print(f"Time: {end - begin:.2f}s")

if dist.get_rank() == 0:
    print("Saving image to flux.png")
    image.save("flux.png")

dist.destroy_process_group()
```

Save to `run_flux.py` and launch it with [torchrun](https://pytorch.org/docs/stable/elastic/run.html).

```bash
# Use --nproc_per_node to specify the number of GPUs
torchrun --nproc_per_node=2 run_flux.py
```

Inference time is reduced to 8.20 seconds compared to the baseline, or 3.21x faster, with 2 NVIDIA L20 GPUs. On 4 L20s, inference time is 3.90 seconds, or 6.75x faster.

</hfoption>
<hfoption id="HunyuanVideo">

The code sample below combines First Block Cache and Context Parallelism for the fastest inference speed.

```python
import time
import torch
import torch.distributed as dist
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

dist.init_process_group()

torch.cuda.set_device(dist.get_rank())

model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

from para_attn.context_parallel import init_context_parallel_mesh
from para_attn.context_parallel.diffusers_adapters import parallelize_pipe
from para_attn.parallel_vae.diffusers_adapters import parallelize_vae

mesh = init_context_parallel_mesh(
    pipe.device.type,
)
parallelize_pipe(
    pipe,
    mesh=mesh,
)
parallelize_vae(pipe.vae, mesh=mesh._flatten())

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe)

# from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only
#
# torch._inductor.config.reorder_for_compute_comm_overlap = True
#
# quantize_(pipe.text_encoder, float8_weight_only())
# quantize_(pipe.transformer, float8_dynamic_activation_float8_weight())
# pipe.transformer = torch.compile(
#    pipe.transformer, mode="max-autotune-no-cudagraphs",
# )

# Enable memory savings
pipe.vae.enable_tiling()
# pipe.enable_model_cpu_offload(gpu_id=dist.get_rank())
# pipe.enable_sequential_cpu_offload(gpu_id=dist.get_rank())

for i in range(2):
    begin = time.time()
    output = pipe(
        prompt="A cat walks on the grass, realistic",
        height=720,
        width=1280,
        num_frames=129,
        num_inference_steps=1 if i == 0 else 30,
        output_type="pil" if dist.get_rank() == 0 else "pt",
    ).frames[0]
    end = time.time()
    if dist.get_rank() == 0:
        if i == 0:
            print(f"Warm up time: {end - begin:.2f}s")
        else:
            print(f"Time: {end - begin:.2f}s")

if dist.get_rank() == 0:
    print("Saving video to hunyuan_video.mp4")
    export_to_video(output, "hunyuan_video.mp4", fps=15)

dist.destroy_process_group()
```

Save to `run_hunyuan_video.py` and launch it with [torchrun](https://pytorch.org/docs/stable/elastic/run.html).

```bash
# Use --nproc_per_node to specify the number of GPUs
torchrun --nproc_per_node=8 run_hunyuan_video.py
```

Inference time is reduced to 649.23 seconds compared to the baseline, or 5.66x faster, with 8 NVIDIA L20 GPUs.

</hfoption>
</hfoptions>

## Benchmarks

<hfoptions id="conclusion">
<hfoption id="FLUX-1.dev">

| GPU Type | Number of GPUs | Optimizations | Wall Time (s) | Speedup |
| - | - | - | - | - |
| NVIDIA L20 | 1 | Baseline | 26.36 | 1.00x |
| NVIDIA L20 | 1 | FBCache (rdt=0.08) | 17.01 | 1.55x |
| NVIDIA L20 | 1 | FP8 DQ | 13.40 | 1.96x |
| NVIDIA L20 | 1 | FBCache (rdt=0.12) + FP8 DQ | 7.56 | 3.48x |
| NVIDIA L20 | 2 | FBCache (rdt=0.12) + FP8 DQ + CP | 4.92 | 5.35x |
| NVIDIA L20 | 4 | FBCache (rdt=0.12) + FP8 DQ + CP | 3.90 | 6.75x |

</hfoption>
<hfoption id="HunyuanVideo">

| GPU Type | Number of GPUs | Optimizations | Wall Time (s) | Speedup |
| - | - | - | - | - |
| NVIDIA L20 | 1 | Baseline | 3675.71 | 1.00x |
| NVIDIA L20 | 1 | FBCache | 2271.06 | 1.62x |
| NVIDIA L20 | 2 | FBCache + CP | 1132.90 | 3.24x |
| NVIDIA L20 | 4 | FBCache + CP | 718.15 | 5.12x |
| NVIDIA L20 | 8 | FBCache + CP | 649.23 | 5.66x |

</hfoption>
</hfoptions>


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/para_attn.md" />

### Metal Performance Shaders (MPS)
https://huggingface.co/docs/diffusers/main/optimization/mps.md

# Metal Performance Shaders (MPS)

> [!TIP]
> Pipelines with a <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white"> badge indicate a model can take advantage of the MPS backend on Apple silicon devices for faster inference. Feel free to open a [Pull Request](https://github.com/huggingface/diffusers/compare) to add this badge to pipelines that are missing it.

🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch [`mps`](https://pytorch.org/docs/stable/notes/mps.html) device, which uses the Metal framework to leverage the GPU on macOS devices. You'll need to have:

- macOS computer with Apple silicon (M1/M2) hardware
- macOS 12.6 or later (13.0 or later recommended)
- arm64 version of Python
- [PyTorch 2.0](https://pytorch.org/get-started/locally/) (recommended) or 1.13 (minimum version supported for `mps`)

The `mps` backend uses PyTorch's `.to()` interface to move the Stable Diffusion pipeline on to your M1 or M2 device:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image
```

> [!WARNING]
> The PyTorch [mps](https://pytorch.org/docs/stable/notes/mps.html) backend does not support NDArray sizes greater than `2**32`. Please open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose) if you encounter this problem so we can investigate.

If you're using **PyTorch 1.13**, you need to "prime" the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result.

```diff
  from diffusers import DiffusionPipeline

  pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5").to("mps")
  pipe.enable_attention_slicing()

  prompt = "a photo of an astronaut riding a horse on mars"
  # First-time "warmup" pass if PyTorch version is 1.13
+ _ = pipe(prompt, num_inference_steps=1)

  # Results match those from the CPU device after the warmup pass.
  image = pipe(prompt).images[0]
```

## Troubleshoot

This section lists some common issues with using the `mps` backend and how to solve them.

### Attention slicing

M1/M2 performance is very sensitive to memory pressure. When memory pressure is high, the system automatically swaps, which significantly degrades performance.

To prevent this from happening, we recommend *attention slicing* to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the [enable_attention_slicing()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_attention_slicing) function on your pipeline:

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps")
pipeline.enable_attention_slicing()
```

Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% in computers without universal memory, but we've observed *better performance* in most Apple silicon computers unless you have 64GB of RAM or more.

### Batch inference

Generating multiple prompts in a batch can crash or fail to work reliably. If this is the case, try iterating instead of batching.
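
For example, a minimal sketch of iterating over prompts one at a time instead of passing them as a single batch, reusing a pipeline like the one created above (the second prompt is only an illustrative placeholder):

```py
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a fox in a forest",
]

# Run each prompt separately instead of batching them together
images = [pipeline(prompt).images[0] for prompt in prompts]
```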

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/mps.md" />

### AWS Neuron
https://huggingface.co/docs/diffusers/main/optimization/neuron.md

# AWS Neuron

Diffusers functionalities are available on [AWS Inf2 instances](https://aws.amazon.com/ec2/instance-types/inf2/), which are EC2 instances powered by [Neuron machine learning accelerators](https://aws.amazon.com/machine-learning/inferentia/). These instances aim to provide better compute performance (higher throughput, lower latency) with good cost-efficiency, making them good candidates for AWS users to deploy diffusion models to production.

[Optimum Neuron](https://huggingface.co/docs/optimum-neuron/en/index) is the interface between Hugging Face libraries and AWS Accelerators, including AWS [Trainium](https://aws.amazon.com/machine-learning/trainium/) and AWS [Inferentia](https://aws.amazon.com/machine-learning/inferentia/). It supports many of the features in Diffusers with similar APIs, so it is easier to learn if you're already familiar with Diffusers. Once you have created an AWS Inf2 instance, install Optimum Neuron.

```bash
python -m pip install --upgrade-strategy eager optimum[neuronx]
```

> [!TIP]
> We provide pre-built [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) (DLAMI) and Optimum Neuron containers for Amazon SageMaker, which are recommended for correctly setting up your environment.

The example below demonstrates how to generate images with the Stable Diffusion XL model on an inf2.8xlarge instance (you can switch to cheaper inf2.xlarge instances once the model is compiled). To generate some images, use the `NeuronStableDiffusionXLPipeline` class, which is similar to the [StableDiffusionXLPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline) class in Diffusers.

Unlike in Diffusers, the models in the pipeline first need to be compiled to the Neuron format, `.neuron`. Launch the following command to export the model to the `.neuron` format.

```bash
optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \
  --batch_size 1 \
  --height 1024 `# height in pixels of generated image, eg. 768, 1024` \
  --width 1024 `# width in pixels of generated image, eg. 768, 1024` \
  --num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron_xl/
```

Now generate some images with the pre-compiled SDXL model.

```python
>>> from optimum.neuron import NeuronStableDiffusionXLPipeline

>>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
>>> prompt = "a pig with wings flying in floating US dollar banknotes in the air, skyscrapers behind, warm color palette, muted colors, detailed, 8k"
>>> image = stable_diffusion_xl(prompt).images[0]
```

<img
  src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/sdxl_pig.png"
  width="256"
  height="256"
  alt="peggy generated by sdxl on inf2"
/>

Feel free to check out more guides and examples on different use cases from the Optimum Neuron [documentation](https://huggingface.co/docs/optimum-neuron/en/inference_tutorials/stable_diffusion#generate-images-with-stable-diffusion-models-on-aws-inferentia)!


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/neuron.md" />

### ONNX Runtime
https://huggingface.co/docs/diffusers/main/optimization/onnx.md

# ONNX Runtime

🤗 [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with ONNX Runtime. You'll need to install 🤗 Optimum with the following command for ONNX Runtime support:

```bash
pip install -q optimum["onnxruntime"]
```

This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime.

## Stable Diffusion

To load and run inference, use the `ORTStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set `export=True`:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```

> [!WARNING]
> Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.

To export the pipeline in the ONNX format offline and use it later for inference,
use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:

```bash
optimum-cli export onnx --model stable-diffusion-v1-5/stable-diffusion-v1-5 sd_v15_onnx/
```

Then to perform inference (you don't have to specify `export=True` again):

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "sd_v15_onnx"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/stable_diffusion_v1_5_ort_sail_boat.png">
</div>

You can find more examples in 🤗 Optimum [documentation](https://huggingface.co/docs/optimum/), and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting.
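
For example, image-to-image uses the `ORTStableDiffusionImg2ImgPipeline` class. The snippet below is a minimal sketch that reuses the `sd_v15_onnx` export from above; the init image URL (borrowed from another example in these docs) is only a placeholder input:

```python
from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained("sd_v15_onnx")
# Placeholder init image; substitute your own
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt, image=init_image, strength=0.75).images[0]
```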

## Stable Diffusion XL

To load and run inference with SDXL, use the `ORTStableDiffusionXLPipeline`:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

To export the pipeline in the ONNX format and use it later for inference, use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:

```bash
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/
```
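
Then you can load the exported model for inference without specifying `export=True` again. A minimal sketch using the `sd_xl_onnx/` directory from the command above:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Load the locally exported ONNX SDXL pipeline
pipeline = ORTStableDiffusionXLPipeline.from_pretrained("sd_xl_onnx/")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```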

SDXL in the ONNX format is supported for text-to-image and image-to-image.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/onnx.md" />

### Caching
https://huggingface.co/docs/diffusers/main/optimization/cache.md

# Caching

Caching accelerates inference by storing and reusing intermediate outputs of different layers, such as attention and feedforward layers, instead of performing the entire computation at each inference step. It significantly improves generation speed at the expense of more memory and doesn't require additional training.

This guide shows you how to use the caching methods supported in Diffusers.

## Pyramid Attention Broadcast

[Pyramid Attention Broadcast (PAB)](https://huggingface.co/papers/2408.12588) is based on the observation that attention outputs aren't that different between successive timesteps of the generation process. The attention differences are smallest in the cross attention layers and are generally cached over a longer timestep range. This is followed by temporal attention and spatial attention layers.

> [!TIP]
> Not all video models have three types of attention (cross, temporal, and spatial)!

PAB can be combined with other techniques like sequence parallelism and classifier-free guidance parallelism (data parallelism) for near real-time video generation.

Set up and pass a [PyramidAttentionBroadcastConfig](/docs/diffusers/main/en/api/cache#diffusers.PyramidAttentionBroadcastConfig) to a pipeline's transformer to enable it. The `spatial_attention_block_skip_range` controls how often to skip attention calculations in the spatial attention blocks and the `spatial_attention_timestep_skip_range` is the range of timesteps to skip. Take care to choose an appropriate range because a smaller interval can lead to slower inference speeds and a larger interval can result in lower generation quality.

```python
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipeline.to("cuda")

config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipeline.current_timestep,
)
pipeline.transformer.enable_cache(config)
```

## FasterCache

[FasterCache](https://huggingface.co/papers/2410.19355) caches and reuses attention features similar to [PAB](#pyramid-attention-broadcast) since output differences are small for each successive timestep.

This method may also choose to skip the unconditional branch prediction, when using classifier-free guidance for sampling (common in most base models), and estimate it from the conditional branch prediction if there is significant redundancy in the predicted latent outputs between successive timesteps.

Set up and pass a [FasterCacheConfig](/docs/diffusers/main/en/api/cache#diffusers.FasterCacheConfig) to a pipeline's transformer to enable it.

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipeline.to("cuda")

config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(-1, 681),
    current_timestep_callback=lambda: pipeline.current_timestep,
    attention_weight_callback=lambda _: 0.3,
    unconditional_batch_skip_range=5,
    unconditional_batch_timestep_skip_range=(-1, 781),
    tensor_format="BFCHW",
)
pipeline.transformer.enable_cache(config)
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/cache.md" />

### Reduce memory usage
https://huggingface.co/docs/diffusers/main/optimization/memory.md

# Reduce memory usage

Modern diffusion models like [Flux](../api/pipelines/flux) and [Wan](../api/pipelines/wan) have billions of parameters that take up a lot of memory on your hardware for inference. This is challenging because common GPUs often don't have sufficient memory. To overcome the memory limitations, you can use more than one GPU (if available), offload some of the pipeline components to the CPU, and more.

This guide will show you how to reduce your memory usage. 

> [!TIP]
> Keep in mind these techniques may need to be adjusted depending on the model. For example, a transformer-based diffusion model may not benefit from these memory optimizations to the same degree as a UNet-based model.

## Multiple GPUs

If you have access to more than one GPU, there are a few options for efficiently loading and distributing a large model across your hardware. These features are supported by the [Accelerate](https://huggingface.co/docs/accelerate/index) library, so make sure it is installed first.

```bash
pip install -U accelerate
```

### Sharded checkpoints

Loading large checkpoints in several shards is useful because the shards are loaded one at a time. This keeps memory usage low, only requiring enough memory for the model size and the largest shard size. We recommend sharding when the fp32 checkpoint is greater than 5GB. The default shard size is 5GB.

Shard a checkpoint in [save_pretrained()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained) with the `max_shard_size` parameter.

```py
from diffusers import AutoModel

unet = AutoModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
unet.save_pretrained("sdxl-unet-sharded", max_shard_size="5GB")
```

Now you can use the sharded checkpoint, instead of the regular checkpoint, to save memory.

```py
import torch
from diffusers import AutoModel, StableDiffusionXLPipeline

unet = AutoModel.from_pretrained(
    "username/sdxl-unet-sharded", torch_dtype=torch.float16
)
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    torch_dtype=torch.float16
).to("cuda")
```

### Device placement

> [!WARNING]
> Device placement is an experimental feature and the API may change. Only the `balanced` strategy is supported at the moment. We plan to support additional mapping strategies in the future.

The `device_map` parameter controls how the model components in a pipeline or the layers in an individual model are distributed across devices. 

<hfoptions id="device-map">
<hfoption id="pipeline level">

The `balanced` device placement strategy evenly splits the pipeline across all available devices.

```py
import torch
from diffusers import AutoModel, StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="balanced"
)
```

You can inspect a pipeline's device map with `hf_device_map`.

```py
print(pipeline.hf_device_map)
{'unet': 1, 'vae': 1, 'safety_checker': 0, 'text_encoder': 0}
```

</hfoption>
<hfoption id="model level">

The `device_map` is useful for loading large models, such as the Flux diffusion transformer which has 12.5B parameters. Set it to `"auto"` to automatically distribute a model across the fastest device first before moving to slower devices. Refer to the [Model sharding](../training/distributed_inference#model-sharding) docs for more details.

```py
import torch
from diffusers import AutoModel

transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", 
    subfolder="transformer",
    device_map="auto",
    torch_dtype=torch.bfloat16
)
```

You can inspect a model's device map with `hf_device_map`.

```py
print(transformer.hf_device_map)
```

</hfoption>
</hfoptions>

When designing your own `device_map`, it should be a dictionary of a model's specific module name or layer and a device identifier (an integer for GPUs, `cpu` for CPUs, and `disk` for disk).

Call `hf_device_map` on a model to see how model layers are distributed and then design your own.

```py
print(transformer.hf_device_map)
{'pos_embed': 0, 'time_text_embed': 0, 'context_embedder': 0, 'x_embedder': 0, 'transformer_blocks': 0, 'single_transformer_blocks.0': 0, 'single_transformer_blocks.1': 0, 'single_transformer_blocks.2': 0, 'single_transformer_blocks.3': 0, 'single_transformer_blocks.4': 0, 'single_transformer_blocks.5': 0, 'single_transformer_blocks.6': 0, 'single_transformer_blocks.7': 0, 'single_transformer_blocks.8': 0, 'single_transformer_blocks.9': 0, 'single_transformer_blocks.10': 'cpu', 'single_transformer_blocks.11': 'cpu', 'single_transformer_blocks.12': 'cpu', 'single_transformer_blocks.13': 'cpu', 'single_transformer_blocks.14': 'cpu', 'single_transformer_blocks.15': 'cpu', 'single_transformer_blocks.16': 'cpu', 'single_transformer_blocks.17': 'cpu', 'single_transformer_blocks.18': 'cpu', 'single_transformer_blocks.19': 'cpu', 'single_transformer_blocks.20': 'cpu', 'single_transformer_blocks.21': 'cpu', 'single_transformer_blocks.22': 'cpu', 'single_transformer_blocks.23': 'cpu', 'single_transformer_blocks.24': 'cpu', 'single_transformer_blocks.25': 'cpu', 'single_transformer_blocks.26': 'cpu', 'single_transformer_blocks.27': 'cpu', 'single_transformer_blocks.28': 'cpu', 'single_transformer_blocks.29': 'cpu', 'single_transformer_blocks.30': 'cpu', 'single_transformer_blocks.31': 'cpu', 'single_transformer_blocks.32': 'cpu', 'single_transformer_blocks.33': 'cpu', 'single_transformer_blocks.34': 'cpu', 'single_transformer_blocks.35': 'cpu', 'single_transformer_blocks.36': 'cpu', 'single_transformer_blocks.37': 'cpu', 'norm_out': 'cpu', 'proj_out': 'cpu'}
```

For example, the `device_map` below places `single_transformer_blocks.10` through `single_transformer_blocks.20` on a second GPU (`1`).

```py
import torch
from diffusers import AutoModel

device_map = {
    'pos_embed': 0, 'time_text_embed': 0, 'context_embedder': 0, 'x_embedder': 0, 'transformer_blocks': 0, 'single_transformer_blocks.0': 0, 'single_transformer_blocks.1': 0, 'single_transformer_blocks.2': 0, 'single_transformer_blocks.3': 0, 'single_transformer_blocks.4': 0, 'single_transformer_blocks.5': 0, 'single_transformer_blocks.6': 0, 'single_transformer_blocks.7': 0, 'single_transformer_blocks.8': 0, 'single_transformer_blocks.9': 0, 'single_transformer_blocks.10': 1, 'single_transformer_blocks.11': 1, 'single_transformer_blocks.12': 1, 'single_transformer_blocks.13': 1, 'single_transformer_blocks.14': 1, 'single_transformer_blocks.15': 1, 'single_transformer_blocks.16': 1, 'single_transformer_blocks.17': 1, 'single_transformer_blocks.18': 1, 'single_transformer_blocks.19': 1, 'single_transformer_blocks.20': 1, 'single_transformer_blocks.21': 'cpu', 'single_transformer_blocks.22': 'cpu', 'single_transformer_blocks.23': 'cpu', 'single_transformer_blocks.24': 'cpu', 'single_transformer_blocks.25': 'cpu', 'single_transformer_blocks.26': 'cpu', 'single_transformer_blocks.27': 'cpu', 'single_transformer_blocks.28': 'cpu', 'single_transformer_blocks.29': 'cpu', 'single_transformer_blocks.30': 'cpu', 'single_transformer_blocks.31': 'cpu', 'single_transformer_blocks.32': 'cpu', 'single_transformer_blocks.33': 'cpu', 'single_transformer_blocks.34': 'cpu', 'single_transformer_blocks.35': 'cpu', 'single_transformer_blocks.36': 'cpu', 'single_transformer_blocks.37': 'cpu', 'norm_out': 'cpu', 'proj_out': 'cpu'
}

transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", 
    subfolder="transformer",
    device_map=device_map,
    torch_dtype=torch.bfloat16
)
```

Pass a dictionary mapping maximum memory usage to each device to enforce a limit. If a device is not in `max_memory`, it is ignored and pipeline components won't be distributed to it.

```py
import torch
from diffusers import AutoModel, StableDiffusionXLPipeline

max_memory = {0:"1GB", 1:"1GB"}
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="balanced",
    max_memory=max_memory
)
```

Diffusers uses the maximum memory of all devices by default, but if the models don't fit on the GPUs, then you'll need to use a single GPU and offload to the CPU with the methods below.

- [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) only works on a single GPU but a very large model may not fit on it
- [enable_sequential_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_sequential_cpu_offload) may work but it is extremely slow and also limited to a single GPU

Use the [reset_device_map()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.reset_device_map) method to reset the `device_map`. This is necessary if you want to use methods like `.to()`, [enable_sequential_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_sequential_cpu_offload), and [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) on a pipeline that was device-mapped.

```py
pipeline.reset_device_map()
```

## VAE slicing

VAE slicing saves memory by splitting a large batch of inputs into single slices of data and processing them one at a time. This method works best when generating more than one image at a time.

For example, if you're generating 4 images at once, decoding would increase peak activation memory by 4x. VAE slicing reduces this by only decoding 1 image at a time instead of all 4 images at once.

Call [enable_vae_slicing()](/docs/diffusers/main/en/api/pipelines/controlnet#diffusers.StableDiffusionControlNetPipeline.enable_vae_slicing) to enable sliced VAE. You can expect a small increase in performance when decoding multi-image batches and no performance impact for single-image batches.

```py
import torch
from diffusers import AutoModel, StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.enable_vae_slicing()
pipeline(["An astronaut riding a horse on Mars"]*32).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```

> [!WARNING]
> The [AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan) and [AsymmetricAutoencoderKL](/docs/diffusers/main/en/api/models/asymmetricautoencoderkl#diffusers.AsymmetricAutoencoderKL) classes don't support slicing.

## VAE tiling

VAE tiling saves memory by dividing an image into smaller overlapping tiles instead of processing the entire image at once. This also reduces peak memory usage because the GPU is only processing a tile at a time.

Call [enable_vae_tiling()](/docs/diffusers/main/en/api/pipelines/latent_consistency_models#diffusers.LatentConsistencyModelPipeline.enable_vae_tiling) to enable VAE tiling. The generated image may have some tone variation from tile-to-tile because they're decoded separately, but there shouldn't be any obvious seams between the tiles. Tiling is disabled for resolutions lower than a pre-specified (but configurable) limit. For example, this limit is 512x512 for the VAE in [StableDiffusionPipeline](/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline).

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.enable_vae_tiling()

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
pipeline(prompt, image=init_image, strength=0.5).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```

> [!WARNING]
> [AutoencoderKLWan](/docs/diffusers/main/en/api/models/autoencoder_kl_wan#diffusers.AutoencoderKLWan) and [AsymmetricAutoencoderKL](/docs/diffusers/main/en/api/models/asymmetricautoencoderkl#diffusers.AsymmetricAutoencoderKL) don't support tiling.

## Offloading

Offloading strategies move layers or models that aren't currently active to the CPU to avoid increasing GPU memory. These strategies can be combined with quantization and torch.compile to balance inference speed and memory usage.

Refer to the [Compile and offloading quantized models](./speed-memory-optims) guide for more details.

### CPU offloading

CPU offloading selectively moves weights from the GPU to the CPU. When a component is required, it is transferred to the GPU and when it isn't required, it is moved to the CPU. This method works on submodules rather than whole models. It saves memory by avoiding storing the entire model on the GPU.

CPU offloading dramatically reduces memory usage, but it is also **extremely slow** because submodules are passed back and forth multiple times between devices. It can often be impractical due to how slow it is.

> [!WARNING]
> Don't move the pipeline to CUDA before calling [enable_sequential_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_sequential_cpu_offload), otherwise the amount of memory saved is only minimal (refer to this [issue](https://github.com/huggingface/diffusers/issues/1934) for more details). This is a stateful operation that installs hooks on the model.

Call [enable_sequential_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_sequential_cpu_offload) to enable it on a pipeline.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipeline.enable_sequential_cpu_offload()

pipeline(
    prompt="An astronaut riding a horse on Mars",
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```

### Model offloading

Model offloading moves entire models to the GPU instead of selectively moving *some* layers or model components. One of the main pipeline models, usually the text encoder, UNet, and VAE, is placed on the GPU while the other components are held on the CPU. Components like the UNet that run multiple times stay on the GPU until they're completely finished and no longer needed. This eliminates the communication overhead of [CPU offloading](#cpu-offloading) and makes model offloading a faster alternative. The tradeoff is memory savings won't be as large.

> [!WARNING]
> Keep in mind that if models are reused outside the pipeline after hooks have been installed (see [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module) for more details), you need to run the entire pipeline and models in the expected order to properly offload them. This is a stateful operation that installs hooks on the model.

Call [enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) to enable it on a pipeline.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()

pipeline(
    prompt="An astronaut riding a horse on Mars",
    guidance_scale=0.,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
```

[enable_model_cpu_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_model_cpu_offload) also helps when you're using the [encode_prompt()](/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLPipeline.encode_prompt) method on its own to generate the text encoders hidden state.
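
For example, a minimal sketch of encoding a prompt on its own with model offloading enabled (the prompt text is just an example):

```py
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()

# The text encoders are moved to the GPU for encoding and offloaded back to the CPU afterwards
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipeline.encode_prompt(prompt="An astronaut riding a horse on Mars")
```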

### Group offloading

Group offloading moves groups of internal layers ([torch.nn.ModuleList](https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html) or [torch.nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html)) to the CPU. It uses less memory than [model offloading](#model-offloading) and it is faster than [CPU offloading](#cpu-offloading) because it reduces communication overhead.

> [!WARNING]
> Group offloading may not work with all models if the forward implementation contains weight-dependent device casting of inputs because it may clash with group offloading's device casting mechanism.

Enable group offloading by configuring the `offload_type` parameter to `block_level` or `leaf_level`.

- `block_level` offloads groups of layers based on the `num_blocks_per_group` parameter. For example, if `num_blocks_per_group=2` on a model with 40 layers, 2 layers are onloaded and offloaded at a time (20 total onloads/offloads). This drastically reduces memory requirements.
- `leaf_level` offloads individual layers at the lowest level and is equivalent to [CPU offloading](#cpu-offloading), but it can be made faster by using streams without giving up inference speed.

Group offloading is supported for entire pipelines or individual models. Applying group offloading to the entire pipeline is the easiest option while selectively applying it to individual models gives users more flexibility to use different offloading techniques for different models.

<hfoptions id="group-offloading">
<hfoption id="pipeline">

Call [enable_group_offload()](/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.enable_group_offload) on a pipeline.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipeline.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)

prompt = (
    "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
    "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
    "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
    "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
    "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
    "atmosphere of this unique musical performance."
)
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
export_to_video(video, "output.mp4", fps=8)
```

</hfoption>
<hfoption id="model">

Call [enable_group_offload()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.enable_group_offload) on standard Diffusers model components that inherit from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin). For other model components that don't inherit from [ModelMixin](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin), such as a generic [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), use [apply_group_offloading()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_group_offloading) instead.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

onload_device = torch.device("cuda")
offload_device = torch.device("cpu")
pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Use the enable_group_offload method for Diffusers model implementations
pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level")
pipeline.vae.enable_group_offload(onload_device=onload_device, offload_type="leaf_level")

# Use the apply_group_offloading method for other model components
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2)

prompt = (
    "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
    "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
    "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
    "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
    "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
    "atmosphere of this unique musical performance."
)
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
export_to_video(video, "output.mp4", fps=8)
```

</hfoption>
</hfoptions>

#### CUDA stream

The `use_stream` parameter can be activated for CUDA devices that support asynchronous data transfer streams to reduce overall execution time compared to [CPU offloading](#cpu-offloading). It overlaps data transfer and computation by using layer prefetching. The next layer to be executed is loaded onto the GPU while the current layer is still being executed. It can increase CPU memory significantly, so make sure you have at least 2x the model size in CPU memory.

Set `record_stream=True` for more of a speedup at the cost of slightly increased memory usage. Refer to the [torch.Tensor.record_stream](https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html) docs to learn more.

> [!TIP]
> When `use_stream=True` on VAEs with tiling enabled, make sure to do a dummy forward pass (possible with dummy inputs as well) before inference to avoid device mismatch errors. This may not work on all implementations, so feel free to open an issue if you encounter any problems.

If you're using `block_level` group offloading with `use_stream` enabled, the `num_blocks_per_group` parameter should be set to `1`, otherwise a warning will be raised.

```py
pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", use_stream=True, record_stream=True)
```
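
For the `block_level` case with streams, a minimal sketch (reusing the `onload_device` and `offload_device` from the examples above) looks like this:

```py
# block_level offloading with streams requires num_blocks_per_group=1
pipeline.transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=1,
    use_stream=True,
)
```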

The `low_cpu_mem_usage` parameter can be set to `True` to reduce CPU memory usage when using streams during group offloading. It is best for `leaf_level` offloading and when CPU memory is bottlenecked. Memory is saved by creating pinned tensors on the fly instead of pre-pinning them. However, this may increase overall execution time.
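
For example, a minimal sketch enabling it alongside streams, assuming `low_cpu_mem_usage` is accepted by [enable_group_offload()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.enable_group_offload) like the other group offloading options shown above:

```py
# Create pinned tensors on the fly instead of pre-pinning them to save CPU memory
pipeline.transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True,
    low_cpu_mem_usage=True,
)
```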

#### Offloading to disk

Group offloading can consume significant system memory depending on the model size. On systems with limited memory, try group offloading onto the disk as a secondary memory.

Set the `offload_to_disk_path` argument in either [enable_group_offload()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.enable_group_offload) or [apply_group_offloading()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_group_offloading) to offload the model to the disk.

```py
pipeline.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type="leaf_level", offload_to_disk_path="path/to/disk")

apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2, offload_to_disk_path="path/to/disk")
```

Refer to these [two](https://github.com/huggingface/diffusers/pull/11682#issue-3129365363) [tables](https://github.com/huggingface/diffusers/pull/11682#issuecomment-2955715126) to compare the speed and memory trade-offs.

## Layerwise casting

> [!TIP]
> Combine layerwise casting with [group offloading](#group-offloading) for even more memory savings.

Layerwise casting stores weights in a smaller data format (for example, `torch.float8_e4m3fn` and `torch.float8_e5m2`) to use less memory and upcasts those weights to a higher precision like `torch.float16` or `torch.bfloat16` for computation. Certain layers (normalization and modulation related weights) are skipped because storing them in fp8 can degrade generation quality.

> [!WARNING]
> Layerwise casting may not work with all models if the forward implementation contains internal typecasting of weights. The current implementation of layerwise casting assumes the forward pass is independent of the weight precision and the input datatypes are always specified in `compute_dtype` (see [here](https://github.com/huggingface/transformers/blob/7f5077e53682ca855afc826162b204ebf809f1f9/src/transformers/models/t5/modeling_t5.py#L294-L299) for an incompatible implementation).
>
> Layerwise casting may also fail on custom modeling implementations with [PEFT](https://huggingface.co/docs/peft/index) layers. There are some checks available but they are not extensively tested or guaranteed to work in all cases.

Call [enable_layerwise_casting()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.enable_layerwise_casting) to set the storage and computation datatypes.

```py
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from diffusers.utils import export_to_video

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b",
    subfolder="transformer",
    torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16)

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b",
    transformer=transformer,
    torch_dtype=torch.bfloat16
).to("cuda")
prompt = (
    "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
    "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
    "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
    "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
    "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
    "atmosphere of this unique musical performance."
)
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
export_to_video(video, "output.mp4", fps=8)
```

The [apply_layerwise_casting()](/docs/diffusers/main/en/api/utilities#diffusers.hooks.apply_layerwise_casting) method can also be used if you need more control and flexibility. It can be partially applied to model layers by calling it on specific internal modules. Use the `skip_modules_pattern` or `skip_modules_classes` parameters to specify modules to avoid, such as the normalization and modulation layers.

```python
import torch
from diffusers import CogVideoXTransformer3DModel
from diffusers.hooks import apply_layerwise_casting

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b",
    subfolder="transformer",
    torch_dtype=torch.bfloat16
)

# skip the normalization layer
apply_layerwise_casting(
    transformer,
    storage_dtype=torch.float8_e4m3fn,
    compute_dtype=torch.bfloat16,
    skip_modules_classes=["norm"],
    non_blocking=True,
)
```

## torch.channels_last

[torch.channels_last](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html) flips how tensors are stored from `(batch size, channels, height, width)` to `(batch size, height, width, channels)`. This aligns the tensors with how the hardware sequentially accesses the tensors stored in memory and avoids skipping around in memory to access the pixel values.

Not all operators currently support the channels-last format and may result in worse performance, but it is still worth trying.

```py
print(pipeline.unet.conv_out.state_dict()["weight"].stride())  # (2880, 9, 3, 1)
pipeline.unet.to(memory_format=torch.channels_last)  # in-place operation
print(
    pipeline.unet.conv_out.state_dict()["weight"].stride()
)  # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works
```

## torch.jit.trace

[torch.jit.trace](https://pytorch.org/docs/stable/generated/torch.jit.trace.html) records the operations a model performs on a sample input and creates a new, optimized representation of the model based on the recorded execution path. During tracing, the model is optimized to reduce overhead from Python and dynamic control flows and operations are fused together for more efficiency. The returned executable or [ScriptFunction](https://pytorch.org/docs/stable/generated/torch.jit.ScriptFunction.html) can be compiled.

```py
import time
import torch
from diffusers import StableDiffusionPipeline
import functools

# torch disable grad
torch.set_grad_enabled(False)

# set variables
n_experiments = 2
unet_runs_per_experiment = 50

# load sample inputs
def generate_inputs():
    sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16)
    timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999
    encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16)
    return sample, timestep, encoder_hidden_states


pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
unet = pipeline.unet
unet.eval()
unet.to(memory_format=torch.channels_last)  # use channels_last memory format
unet.forward = functools.partial(unet.forward, return_dict=False)  # set return_dict=False as default

# warmup
for _ in range(3):
    with torch.inference_mode():
        inputs = generate_inputs()
        orig_output = unet(*inputs)

# trace
print("tracing..")
unet_traced = torch.jit.trace(unet, inputs)
unet_traced.eval()
print("done tracing")

# warmup and optimize graph
for _ in range(5):
    with torch.inference_mode():
        inputs = generate_inputs()
        orig_output = unet_traced(*inputs)

# benchmarking
with torch.inference_mode():
    for _ in range(n_experiments):
        torch.cuda.synchronize()
        start_time = time.time()
        for _ in range(unet_runs_per_experiment):
            orig_output = unet_traced(*inputs)
        torch.cuda.synchronize()
        print(f"unet traced inference took {time.time() - start_time:.2f} seconds")
    for _ in range(n_experiments):
        torch.cuda.synchronize()
        start_time = time.time()
        for _ in range(unet_runs_per_experiment):
            orig_output = unet(*inputs)
        torch.cuda.synchronize()
        print(f"unet inference took {time.time() - start_time:.2f} seconds")

# save the model
unet_traced.save("unet_traced.pt")
```

Replace the pipeline's UNet with the traced version.

```py
import torch
from diffusers import StableDiffusionPipeline
from dataclasses import dataclass

@dataclass
class UNet2DConditionOutput:
    sample: torch.Tensor

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# use jitted unet
unet_traced = torch.jit.load("unet_traced.pt")

# del pipeline.unet
class TracedUNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.in_channels = pipeline.unet.config.in_channels
        self.device = pipeline.unet.device

    def forward(self, latent_model_input, t, encoder_hidden_states):
        sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0]
        return UNet2DConditionOutput(sample=sample)

pipeline.unet = TracedUNet()

prompt = "An astronaut riding a horse on Mars"
with torch.inference_mode():
    image = pipeline([prompt] * 1, num_inference_steps=50).images[0]
```

## Memory-efficient attention

> [!TIP]
> Memory-efficient attention optimizes for memory usage *and* [inference speed](./fp16#scaled-dot-product-attention)!

The Transformer attention mechanism is memory-intensive, especially for long sequences, so you can try using a different, more memory-efficient attention type.

By default, if PyTorch >= 2.0 is installed, [scaled dot-product attention (SDPA)](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) is used. You don't need to make any additional changes to your code.

SDPA supports [FlashAttention](https://github.com/Dao-AILab/flash-attention) and [xFormers](https://github.com/facebookresearch/xformers) as well as a native C++ PyTorch implementation. It automatically selects the optimal implementation based on your input.
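
If you want to restrict SDPA to a specific backend, PyTorch exposes a context manager for it. The sketch below assumes a recent PyTorch release (2.3+) that provides `torch.nn.attention.sdpa_kernel`:

```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Only allow the memory-efficient SDPA backend for this call
with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
    image = pipeline("An astronaut riding a horse on Mars").images[0]
```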

You can explicitly use xFormers with the [enable_xformers_memory_efficient_attention()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.enable_xformers_memory_efficient_attention) method.

```py
# pip install xformers
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.enable_xformers_memory_efficient_attention()
```

Call [disable_xformers_memory_efficient_attention()](/docs/diffusers/main/en/api/models/overview#diffusers.ModelMixin.disable_xformers_memory_efficient_attention) to disable it.

```py
pipeline.disable_xformers_memory_efficient_attention()
```

<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/memory.md" />

### OpenVINO
https://huggingface.co/docs/diffusers/main/optimization/open_vino.md

# OpenVINO

🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the [full list](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) of supported devices).

You'll need to install 🤗 Optimum Intel with the `--upgrade-strategy eager` option to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is using the latest version:

```bash
pip install --upgrade-strategy eager optimum["openvino"]
```

This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO.

## Stable Diffusion

To load and run inference, use the `OVStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set `export=True`:

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
```

To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again.

```python
# Define the shapes related to the inputs and desired outputs
batch_size, num_images, height, width = 1, 1, 512, 512

# Statically reshape the model
pipeline.reshape(batch_size, height, width, num_images)
# Compile the model before inference
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/stable_diffusion_v1_5_sail_boat_rembrandt.png">
</div>

You can find more examples in the 🤗 Optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion), and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting.

## Stable Diffusion XL

To load and run inference with SDXL, use the `OVStableDiffusionXLPipeline`:

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```

To further speed-up inference, [statically reshape](#stable-diffusion) the model as shown in the Stable Diffusion section.

You can find more examples in the 🤗 Optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl), and running SDXL in OpenVINO is supported for text-to-image and image-to-image.


<EditOnGithub source="https://github.com/huggingface/diffusers/blob/main/docs/source/en/optimization/open_vino.md" />
