FLUX.1-dev LoRA Collection
A curated collection of Low-Rank Adaptation (LoRA) models for FLUX.1-dev, enabling lightweight fine-tuning and style adaptation for text-to-image generation.
Model Description
This repository serves as organized storage for FLUX.1-dev LoRA adapters. LoRAs are lightweight model adaptations that modify the behavior of the base FLUX.1-dev model without requiring full retraining. They enable:
- Style Transfer: Apply artistic styles and aesthetic transformations
- Concept Learning: Teach the model specific subjects, characters, or objects
- Quality Enhancement: Improve specific aspects like detail, lighting, or composition
- Domain Adaptation: Specialize the model for specific use cases (e.g., architecture, portraits, landscapes)
LoRAs are significantly smaller than full models (typically 10-500MB vs 20GB+), making them efficient for storage, sharing, and experimentation.
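The small file sizes follow from how LoRA works: instead of updating a frozen weight matrix `W`, an adapter trains two low-rank factors `A` and `B` and computes `W x + (alpha/r) * B A x`. A minimal PyTorch sketch of this forward pass (the dimensions below are illustrative, not FLUX's actual layer shapes):

```python
import torch

# Hypothetical layer dimensions for illustration (not FLUX's real shapes).
d_in, d_out, rank, alpha = 3072, 3072, 16, 16.0

W = torch.randn(d_out, d_in)        # frozen base weight
A = torch.randn(rank, d_in) * 0.01  # trained low-rank factor
B = torch.zeros(d_out, rank)        # trained low-rank factor (zero-initialized)

x = torch.randn(d_in)

# LoRA forward pass: base output plus a scaled low-rank correction.
y = W @ x + (alpha / rank) * (B @ (A @ x))

# Because B starts at zero, a fresh adapter initially leaves the base model unchanged.
print(y.shape)  # torch.Size([3072])
```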
Repository Contents
```
flux-dev-loras/
├── README.md (10.7KB)
└── loras/
    └── flux/
        └── (LoRA .safetensors files will be stored here)
```
Current Status: Repository structure initialized, ready for LoRA model storage.
Typical LoRA File Sizes:
- Small LoRAs (rank 4-16): 10-50 MB
- Medium LoRAs (rank 32-64): 50-200 MB
- Large LoRAs (rank 128+): 200-500 MB
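These ranges follow directly from the factorization: each adapted `d_out × d_in` projection stores roughly `r * (d_in + d_out)` parameters. A back-of-the-envelope size calculator (the layer count and width below are illustrative assumptions, not FLUX's exact architecture):

```python
# Rough LoRA size estimate: r * (d_in + d_out) parameters per adapted
# projection, times the number of adapted projections, at 2 bytes per
# FP16/BF16 parameter. Layer count and width are assumptions for illustration.
def lora_size_mb(rank: int, d_model: int = 3072,
                 num_layers: int = 57, projections_per_layer: int = 4) -> float:
    params_per_projection = rank * (d_model + d_model)
    total_params = params_per_projection * projections_per_layer * num_layers
    return total_params * 2 / 1024**2  # FP16 bytes -> MiB

for rank in (16, 32, 128):
    print(f"rank {rank:3d}: ~{lora_size_mb(rank):.0f} MB")
```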
Total Repository Size: ~14 KB (structure initialized, ready for LoRA population)
Hardware Requirements
LoRA models add minimal overhead to base FLUX.1-dev requirements:
Minimum Requirements
- VRAM: 12GB (base FLUX.1-dev requirement)
- RAM: 16GB system memory
- Disk Space: variable, depending on LoRA collection size
  - Base model: ~24GB (FP16) or ~12GB (FP8)
  - Per LoRA: typically 10-500MB
- GPU: NVIDIA RTX 3060 (12GB) or better
Recommended Requirements
- VRAM: 24GB (RTX 4090, RTX A5000)
- RAM: 32GB system memory
- Disk Space: 50-100GB for extensive LoRA collection
- GPU: NVIDIA RTX 4090 or RTX 5090 for fastest inference
Performance Notes
- LoRAs add minimal computational overhead (<5% typically)
- Multiple LoRAs can be stacked (with performance trade-offs)
- FP8 base models are compatible with FP16 LoRAs
Usage Examples
Basic LoRA Loading with Diffusers
```python
from diffusers import FluxPipeline
import torch

# Load base FLUX.1-dev model
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# Load LoRA adapter (example path - adjust to your actual LoRA file)
pipe.load_lora_weights("E:/huggingface/flux-dev-loras/loras/flux/your-lora-name.safetensors")

# Generate image with LoRA applied
prompt = "a beautiful landscape in the style of the LoRA"
image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    guidance_scale=3.5,  # FLUX.1-dev uses distilled guidance; ~3.5 is the usual default
    height=1024,
    width=1024,
).images[0]
image.save("output.png")
```
Multiple LoRA Stacking
```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# Load multiple LoRAs with different strengths
pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/style-lora.safetensors",
    adapter_name="style"
)
pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/detail-lora.safetensors",
    adapter_name="detail"
)

# Set adapter weights
pipe.set_adapters(["style", "detail"], adapter_weights=[0.8, 0.5])

# Generate with combined LoRA effects
image = pipe(
    prompt="a detailed portrait with artistic style",
    num_inference_steps=50
).images[0]
image.save("combined_output.png")
```
Dynamic LoRA Weight Adjustment
```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights(
    "E:/huggingface/flux-dev-loras/loras/flux/artistic-style.safetensors"
)

# Generate with different LoRA strengths
for strength in [0.3, 0.6, 1.0]:
    pipe.fuse_lora(lora_scale=strength)
    image = pipe(
        prompt="a mountain landscape",
        num_inference_steps=50
    ).images[0]
    image.save(f"output_strength_{strength}.png")
    # Unfuse before changing strength
    pipe.unfuse_lora()
```
ComfyUI Integration
LoRAs in this directory can be used directly in ComfyUI:
1. Automatic Detection: Place LoRAs in ComfyUI's `models/loras/` directory, or create a symlink:

   ```
   mklink /D "ComfyUI\models\loras\flux-dev-loras" "E:\huggingface\flux-dev-loras\loras\flux"
   ```

2. Load in Workflow: Use the "Load LoRA" node with the FLUX.1-dev checkpoint
3. Adjust Strength: Use the strength parameter (0.0-1.0) to control LoRA influence
Model Specifications
Base Model Compatibility
- Model: FLUX.1-dev by Black Forest Labs
- Architecture: Latent diffusion transformer
- Compatible Precisions: FP16, BF16, FP8 (E4M3)
LoRA Format
- Format: SafeTensors (.safetensors)
- Typical Ranks: 4, 8, 16, 32, 64, 128
- Training Method: Low-Rank Adaptation (LoRA)
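To check an adapter's rank before loading it, you can inspect the tensor shapes stored in the `.safetensors` file: in most FLUX LoRAs, the down-projection tensors (keys containing `lora_A` or `lora_down`, depending on the trainer) have the rank as their first dimension. A small sketch using the safetensors library; treat the substring match as a heuristic, since key naming varies between tools:

```python
from safetensors import safe_open

# Heuristic rank inspection: LoRA down-projections are (rank, d_in),
# so the first dimension of a "lora_A"/"lora_down" tensor is the rank.
path = "loras/flux/your-lora-name.safetensors"  # adjust to an actual file

with safe_open(path, framework="pt") as f:
    for key in f.keys():
        if "lora_A" in key or "lora_down" in key:
            rank = f.get_slice(key).get_shape()[0]
            print(f"{key}: rank {rank}")
            break
```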
Supported Libraries
- diffusers (≥0.30.0 recommended)
- ComfyUI
- InvokeAI
- Automatic1111 (with FLUX support)
Finding and Adding LoRAs
Recommended Sources
- Hugging Face Hub: https://huggingface.co/models?pipeline_tag=text-to-image&other=flux&other=lora
- CivitAI: https://civitai.com/ (filter for FLUX.1-dev LoRAs)
- Replicate: Community-trained FLUX LoRAs
Download Process
```bash
# Example: Download LoRA from Hugging Face
cd E:\huggingface\flux-dev-loras\loras\flux
huggingface-cli download username/lora-repo --local-dir .
```
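Single files can also be fetched from Python via huggingface_hub; the repository ID and filename below are placeholders for whichever LoRA you are downloading:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id/filename: substitute the actual LoRA you want.
local_path = hf_hub_download(
    repo_id="username/lora-repo",
    filename="your-lora-name.safetensors",
    local_dir="E:/huggingface/flux-dev-loras/loras/flux",
)
print(local_path)
```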
Organization Tips
- Use descriptive filenames: `style-artistic-painting.safetensors`
- Group by category: `style/`, `character/`, `concept/`, `quality/`
- Include metadata files (`.json`) with training details when available
Performance Tips and Optimization
Memory Optimization
- Use FP8 Base Model: Load FLUX.1-dev in FP8 to save ~12GB VRAM
- Sequential Loading: Load/unload LoRAs as needed instead of keeping all loaded (see the sketch after this list)
- CPU Offload: Use `enable_model_cpu_offload()` for VRAM-constrained systems:

  ```python
  pipe.enable_model_cpu_offload()
  ```
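A minimal sketch of the sequential-loading pattern, assuming diffusers' standard LoRA API (`load_lora_weights` / `unload_lora_weights`); the file names are placeholders:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload instead of .to("cuda") on tight VRAM

# Swap LoRAs in and out instead of keeping several resident at once.
# File names below are placeholders.
for lora_file in ["style-lora.safetensors", "detail-lora.safetensors"]:
    pipe.load_lora_weights(f"E:/huggingface/flux-dev-loras/loras/flux/{lora_file}")
    image = pipe("a mountain landscape", num_inference_steps=50).images[0]
    image.save(lora_file.replace(".safetensors", ".png"))
    pipe.unload_lora_weights()  # free the adapter before loading the next one
```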
Quality Optimization
- LoRA Strength Tuning: Start with 0.7-0.8 strength, adjust based on results
- Inference Steps: LoRAs work well with 30-50 steps (same as base model)
- Guidance Scale: FLUX.1-dev uses distilled guidance; values around 3.0-4.0 (default 3.5) are typical, and some LoRAs benefit from slight adjustment
Training Your Own LoRAs
- Recommended Tools: Kohya_ss, SimpleTuner, ai-toolkit
- Dataset Size: 10-50 high-quality images for concept learning
- Rank Selection: Rank 16-32 for most use cases, higher for complex styles
- Training Steps: 1000-5000 depending on complexity and dataset size
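The tools above handle training end to end; for intuition, the core mechanic they implement looks roughly like the following toy sketch, which trains only the low-rank factors of a single linear layer (the shapes, data, and loss are illustrative; real trainers optimize the diffusion objective over the transformer's attention projections):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Toy LoRA wrapper: freezes the base weight, trains only A and B."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Illustrative training loop on random data with a dummy target.
layer = LoRALinear(nn.Linear(512, 512))
opt = torch.optim.AdamW([layer.lora_A, layer.lora_B], lr=1e-4)
for step in range(100):
    x = torch.randn(8, 512)
    loss = ((layer(x) - x) ** 2).mean()  # dummy target: identity mapping
    opt.zero_grad()
    loss.backward()
    opt.step()
```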
License
LoRA Models: Individual LoRAs may have different licenses. Check each LoRA's source repository for specific licensing terms.
Base Model License: FLUX.1-dev is released under the Black Forest Labs FLUX.1 [dev] Non-Commercial License
- The model itself is licensed for non-commercial use; review the license for terms on generated outputs and commercial licensing
- See: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
Repository Structure: Apache 2.0 (this organizational structure)
Citation
If you use FLUX.1-dev LoRAs in your work, please cite the base model:
```bibtex
@software{flux1_dev,
  author = {Black Forest Labs},
  title  = {FLUX.1-dev},
  year   = {2024},
  url    = {https://huggingface.co/black-forest-labs/FLUX.1-dev}
}
```
For specific LoRAs, cite the original creators from their respective repositories.
Resources and Links
Official FLUX Resources
- Base Model: https://huggingface.co/black-forest-labs/FLUX.1-dev
- Black Forest Labs: https://blackforestlabs.ai/
- FLUX Documentation: https://github.com/black-forest-labs/flux
LoRA Training Resources
- Kohya_ss Trainer: https://github.com/bmaltais/kohya_ss
- SimpleTuner: https://github.com/bghira/SimpleTuner
- ai-toolkit: https://github.com/ostris/ai-toolkit
Community and Support
- Hugging Face Diffusers Docs: https://huggingface.co/docs/diffusers
- FLUX Discord Communities
- r/StableDiffusion (Reddit)
Model Discovery
- Hugging Face FLUX LoRAs: https://huggingface.co/models?other=flux&other=lora
- CivitAI FLUX Section: https://civitai.com/models?modelType=LORA&baseModel=FLUX.1%20D
Changelog
v1.4 (2025-10-28)
- Updated hardware recommendations with RTX 5090 reference
- Refreshed repository size information (14 KB)
- Updated last modified date to current (2025-10-28)
- Verified all YAML frontmatter compliance with HuggingFace standards
- Confirmed repository structure and organization remain current
v1.3 (2024-10-14)
- CRITICAL FIX: Moved version header AFTER YAML frontmatter (HuggingFace requirement)
- Verified YAML frontmatter is first content in file
- Confirmed proper YAML structure with three-dash delimiters
- All metadata fields validated against HuggingFace standards
v1.2 (2024-10-14)
- Updated version metadata to v1.2
- Verified repository structure and file organization
- Updated repository size information
- Confirmed YAML frontmatter compliance with HuggingFace standards
v1.1 (2024-10-13)
- Updated version metadata to v1.1
- Enhanced tag metadata with
low-rank-adaptation - Improved hardware requirements formatting with subsections
- Added changelog section for version tracking
v1.0 (Initial Release)
- Initial repository structure and documentation
- Comprehensive usage examples for diffusers and ComfyUI
- Performance optimization guidelines
- LoRA training and discovery resources
Repository Status: Initialized and ready for LoRA collection
Last Updated: 2025-10-28
Maintained By: Local collection for FLUX.1-dev experimentation