---
base_model: Alpha-VLLM/Lumina-Image-2.0
library_name: diffusers
license: apache-2.0
instance_prompt: gh1b11 style
widget:
- text: howl phogh1b11 style
  output:
    url: image_0.png
- text: howl phogh1b11 style
  output:
    url: image_1.png
- text: howl phogh1b11 style
  output:
    url: image_2.png
- text: howl phogh1b11 style
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- lumina2
- lumina2-diffusers
- template:sd-lora
---

# Lumina2 DreamBooth LoRA - Dino-LeeTaeHun/lumina2-ghibli-lora

## Model description

These are Dino-LeeTaeHun/lumina2-ghibli-lora DreamBooth LoRA weights for Alpha-VLLM/Lumina-Image-2.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Lumina2 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_lumina2.md).

## Trigger words

You should use `gh1b11 style` to trigger the image generation.

The following `system_prompt` was also used during training (ignore if `None`): None.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/Dino-LeeTaeHun/lumina2-ghibli-lora/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

The snippet below is a minimal sketch: it assumes the `Lumina2Pipeline` class from a recent diffusers release, a CUDA device, and the trainer's default LoRA weight filename in this repo. Adjust to your environment as needed.

```py
import torch
from diffusers import Lumina2Pipeline

# Load the base model, then attach the DreamBooth LoRA weights on top of it.
pipe = Lumina2Pipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Dino-LeeTaeHun/lumina2-ghibli-lora")

# Include the trigger words `gh1b11 style` in the prompt.
image = pipe("howl phogh1b11 style").images[0]
image.save("output.png")
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Intended uses & limitations

#### How to use

```python
# See the diffusers snippet above. With the pipeline loaded, vary the prompt
# while keeping the trigger words, e.g.:
# image = pipe("a quiet seaside town, gh1b11 style").images[0]
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]