Hybrid Inference
Empowering local AI builders with Hybrid Inference
Hybrid Inference is an experimental feature.
Feedback can be provided here.
Why use Hybrid Inference?
Hybrid Inference offers a fast and simple way to offload local generation requirements.
- Reduced Requirements: Access powerful models without expensive hardware.
- Without Compromise: Achieve the highest quality without sacrificing performance.
- Cost Effective: It's free!
- Diverse Use Cases: Fully compatible with Diffusers 🧨 and the wider community.
- Developer-Friendly: Simple requests, fast responses.
Available Models
- VAE Decode: Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
- VAE Encode: Efficiently encode images into latent representations for generation and training (see the sketch after this list).
- Text Encoders (coming soon): Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.
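The following is a minimal sketch of a remote encode/decode round trip, assuming the `remote_encode` and `remote_decode` helpers from `diffusers.utils.remote_utils`. The endpoint URLs and the input image path are placeholders (the real endpoints are listed in the VAE Decode and VAE Encode pages), and `0.18215` is the Stable Diffusion v1 VAE scaling factor.

```python
# Minimal sketch of a remote VAE round trip with Hybrid Inference.
# The endpoint URLs below are placeholders; substitute the endpoints
# documented in the VAE Decode and VAE Encode pages.
from diffusers.utils import load_image
from diffusers.utils.remote_utils import remote_decode, remote_encode

ENCODE_ENDPOINT = "https://<your-vae-encode-endpoint>/"  # placeholder
DECODE_ENDPOINT = "https://<your-vae-decode-endpoint>/"  # placeholder

# Any RGB image works here; load_image accepts a local path or a URL.
image = load_image("input.png")

# Encode the image into a latent tensor on the remote endpoint.
# 0.18215 is the Stable Diffusion v1 VAE scaling factor.
latent = remote_encode(
    endpoint=ENCODE_ENDPOINT,
    image=image,
    scaling_factor=0.18215,
)

# Decode the latent back into a PIL image on the remote endpoint.
decoded = remote_decode(
    endpoint=DECODE_ENDPOINT,
    tensor=latent,
    scaling_factor=0.18215,
    output_type="pil",
)
decoded.save("round_trip.png")
```

In a typical pipeline the latent would come from the diffusion model's denoising loop rather than a re-encoded image; offloading only the VAE keeps the memory-heavy decode step off local hardware.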
Integrations
- SD.Next: All-in-one UI with direct support for Hybrid Inference.
- ComfyUI-HFRemoteVae: A ComfyUI node for Hybrid Inference.
Changelog
- March 10, 2025: Added VAE encode
- March 2, 2025: Initial release with VAE decoding
Contents
The documentation is organized into three sections:
- VAE Decode: Learn the basics of how to use VAE Decode with Hybrid Inference.
- VAE Encode: Learn the basics of how to use VAE Encode with Hybrid Inference.
- API Reference: Dive into task-specific settings and parameters.