ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs • arXiv:2510.04767 • Published Oct 2025
XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization • arXiv:2508.10395 • Published Aug 14, 2025
UNCAGE: Contrastive Attention Guidance for Masked Generative Transformers in Text-to-Image Generation • arXiv:2508.05399 • Published Aug 7, 2025
State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models • arXiv:2503.03499 • Published Mar 5, 2025