librarian-bot committed on
Commit 151a632 · verified · 1 parent: 6c3371f

Scheduled Commit
data/2503.09368.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09368", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [HDCompression: Hybrid-Diffusion Image Compression for Ultra-Low Bitrates](https://huggingface.co/papers/2502.07160) (2025)\n* [Taming Large Multimodal Agents for Ultra-low Bitrate Semantically Disentangled Image Compression](https://huggingface.co/papers/2503.00399) (2025)\n* [DLF: Extreme Image Compression with Dual-generative Latent Fusion](https://huggingface.co/papers/2503.01428) (2025)\n* [Hierarchical Semantic Compression for Consistent Image Semantic Restoration](https://huggingface.co/papers/2502.16799) (2025)\n* [Compact Latent Representation for Image Compression (CLRIC)](https://huggingface.co/papers/2502.14937) (2025)\n* [Compressed Image Generation with Denoising Diffusion Codebook Models](https://huggingface.co/papers/2502.01189) (2025)\n* [Layton: Latent Consistency Tokenizer for 1024-pixel Image Reconstruction and Generation by 256 Tokens](https://huggingface.co/papers/2503.08377) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09642.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09642", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [TinyLLaVA-Video: A Simple Framework of Small-scale Large Multimodal Models for Video Understanding](https://huggingface.co/papers/2501.15513) (2025)\n* [Magic 1-For-1: Generating One Minute Video Clips within One Minute](https://huggingface.co/papers/2502.07701) (2025)\n* [What Are You Doing? A Closer Look at Controllable Human Video Generation](https://huggingface.co/papers/2503.04666) (2025)\n* [GVMGen: A General Video-to-Music Generation Model with Hierarchical Attentions](https://huggingface.co/papers/2501.09972) (2025)\n* [GenVidBench: A Challenging Benchmark for Detecting AI-Generated Video](https://huggingface.co/papers/2501.11340) (2025)\n* [CascadeV: An Implementation of Wurstchen Architecture for Video Generation](https://huggingface.co/papers/2501.16612) (2025)\n* [Content-Rich AIGC Video Quality Assessment via Intricate Text Alignment and Motion-Aware Consistency](https://huggingface.co/papers/2502.04076) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09669.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09669", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models](https://huggingface.co/papers/2502.16167) (2025)\n* [Jailbreaking Safeguarded Text-to-Image Models via Large Language Models](https://huggingface.co/papers/2503.01839) (2025)\n* [A Comprehensive Survey on Concept Erasure in Text-to-Image Diffusion Models](https://huggingface.co/papers/2502.14896) (2025)\n* [GuardDoor: Safeguarding Against Malicious Diffusion Editing via Protective Backdoors](https://huggingface.co/papers/2503.03944) (2025)\n* [Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on Retrieval-Augmented Diffusion Models](https://huggingface.co/papers/2501.13340) (2025)\n* [TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models](https://huggingface.co/papers/2503.07389) (2025)\n* [Gungnir: Exploiting Stylistic Features in Images for Backdoor Attacks on Diffusion Models](https://huggingface.co/papers/2502.20650) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09799.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.09799", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Predictable Scale: Part I -- Optimal Hyperparameter Scaling Law in Large Language Model Pretraining](https://huggingface.co/papers/2503.04715) (2025)\n* [Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models](https://huggingface.co/papers/2501.12370) (2025)\n* [The Journey Matters: Average Parameter Count over Pre-training Unifies Sparse and Dense Scaling Laws](https://huggingface.co/papers/2501.12486) (2025)\n* [Scaling Inference-Efficient Language Models](https://huggingface.co/papers/2501.18107) (2025)\n* [Muon is Scalable for LLM Training](https://huggingface.co/papers/2502.16982) (2025)\n* [(Mis)Fitting: A Survey of Scaling Laws](https://huggingface.co/papers/2502.18969) (2025)\n* [LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws](https://huggingface.co/papers/2502.12120) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10365.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10365", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Personalized Image Generation with Deep Generative Models: A Decade Survey](https://huggingface.co/papers/2502.13081) (2025)\n* [IP-Composer: Semantic Composition of Visual Concepts](https://huggingface.co/papers/2502.13951) (2025)\n* [CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance](https://huggingface.co/papers/2503.10391) (2025)\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space](https://huggingface.co/papers/2501.12224) (2025)\n* [RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models](https://huggingface.co/papers/2503.10406) (2025)\n* [Towards More Accurate Personalized Image Generation: Addressing Overfitting and Evaluation Bias](https://huggingface.co/papers/2503.06632) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10391.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10391", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers](https://huggingface.co/papers/2502.06527) (2025)\n* [EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion](https://huggingface.co/papers/2501.13452) (2025)\n* [Goku: Flow Based Video Generative Foundation Models](https://huggingface.co/papers/2502.04896) (2025)\n* [Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating Videos](https://huggingface.co/papers/2502.21314) (2025)\n* [HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation](https://huggingface.co/papers/2502.04847) (2025)\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [Dynamic Concepts Personalization from Single Videos](https://huggingface.co/papers/2502.14844) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10437.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10437", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Online Language Splatting](https://huggingface.co/papers/2503.09447) (2025)\n* [SplatTalk: 3D VQA with Gaussian Splatting](https://huggingface.co/papers/2503.06271) (2025)\n* [DIV-FF: Dynamic Image-Video Feature Fields For Environment Understanding in Egocentric Videos](https://huggingface.co/papers/2503.08344) (2025)\n* [DynamicGSG: Dynamic 3D Gaussian Scene Graphs for Environment Adaptation](https://huggingface.co/papers/2502.15309) (2025)\n* [DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation](https://huggingface.co/papers/2503.04006) (2025)\n* [Open-Vocabulary Semantic Part Segmentation of 3D Human](https://huggingface.co/papers/2502.19782) (2025)\n* [LIFT-GS: Cross-Scene Render-Supervised Distillation for 3D Language Grounding](https://huggingface.co/papers/2502.20389) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10460.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10460", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2501.12948) (2025)\n* [Challenges in Ensuring AI Safety in DeepSeek-R1 Models: The Shortcomings of Reinforcement Learning Strategies](https://huggingface.co/papers/2501.17030) (2025)\n* [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://huggingface.co/papers/2503.06749) (2025)\n* [Demystifying Long Chain-of-Thought Reasoning in LLMs](https://huggingface.co/papers/2502.03373) (2025)\n* [LLMs Can Easily Learn to Reason from Demonstrations Structure, not content, is what matters!](https://huggingface.co/papers/2502.07374) (2025)\n* [R1-Zero's\"Aha Moment\"in Visual Reasoning on a 2B Non-SFT Model](https://huggingface.co/papers/2503.05132) (2025)\n* [LIMR: Less is More for RL Scaling](https://huggingface.co/papers/2502.11886) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10568.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10568", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Taming Teacher Forcing for Masked Autoregressive Video Generation](https://huggingface.co/papers/2501.12389) (2025)\n* [FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction](https://huggingface.co/papers/2502.20313) (2025)\n* [RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models](https://huggingface.co/papers/2503.10406) (2025)\n* [Layton: Latent Consistency Tokenizer for 1024-pixel Image Reconstruction and Generation by 256 Tokens](https://huggingface.co/papers/2503.08377) (2025)\n* [Autoregressive Image Generation with Vision Full-view Prompt](https://huggingface.co/papers/2502.16965) (2025)\n* [Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment](https://huggingface.co/papers/2503.07334) (2025)\n* [LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization](https://huggingface.co/papers/2503.08619) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10596.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10596", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Cross-Domain Semantic Segmentation with Large Language Model-Assisted Descriptor Generation](https://huggingface.co/papers/2501.16467) (2025)\n* [SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories](https://huggingface.co/papers/2503.08625) (2025)\n* [PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models?](https://huggingface.co/papers/2502.04192) (2025)\n* [YOLOE: Real-Time Seeing Anything](https://huggingface.co/papers/2503.07465) (2025)\n* [Segment Anything, Even Occluded](https://huggingface.co/papers/2503.06261) (2025)\n* [DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation](https://huggingface.co/papers/2503.04006) (2025)\n* [COCONut-PanCap: Joint Panoptic Segmentation and Grounded Captions for Fine-Grained Understanding and Generation](https://huggingface.co/papers/2502.02589) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10615.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10615", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://huggingface.co/papers/2503.06749) (2025)\n* [LMM-R1: Empowering 3B LMMs with Strong Reasoning Abilities Through Two-Stage Rule-Based RL](https://huggingface.co/papers/2503.07536) (2025)\n* [VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search](https://huggingface.co/papers/2503.10582) (2025)\n* [MindGYM: Enhancing Vision-Language Models via Synthetic Self-Challenging Questions](https://huggingface.co/papers/2503.09499) (2025)\n* [Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models?](https://huggingface.co/papers/2503.06252) (2025)\n* [Boosting the Generalization and Reasoning of Vision Language Models with Curriculum Reinforcement Learning](https://huggingface.co/papers/2503.07065) (2025)\n* [MV-MATH: Evaluating Multimodal Math Reasoning in Multi-Visual Contexts](https://huggingface.co/papers/2502.20808) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10633.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10633", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Evaluating and Predicting Distorted Human Body Parts for Generated Images](https://huggingface.co/papers/2503.00811) (2025)\n* [CADDreamer: CAD object Generation from Single-view Images](https://huggingface.co/papers/2502.20732) (2025)\n* [MagicArticulate: Make Your 3D Models Articulation-Ready](https://huggingface.co/papers/2502.12135) (2025)\n* [Joint Learning of Depth and Appearance for Portrait Image Animation](https://huggingface.co/papers/2501.08649) (2025)\n* [Towards Consistent and Controllable Image Synthesis for Face Editing](https://huggingface.co/papers/2502.02465) (2025)\n* [AnyTop: Character Animation Diffusion with Any Topology](https://huggingface.co/papers/2502.17327) (2025)\n* [Efficient Portrait Matte Creation With Layer Diffusion and Connectivity Priors](https://huggingface.co/papers/2501.16147) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10635.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10635", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Adversarial Training for Multimodal Large Language Models against Jailbreak Attacks](https://huggingface.co/papers/2503.04833) (2025)\n* [MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained Models](https://huggingface.co/papers/2502.08079) (2025)\n* [SAP-DIFF: Semantic Adversarial Patch Generation for Black-Box Face Recognition Models via Diffusion Models](https://huggingface.co/papers/2502.19710) (2025)\n* [Improving Adversarial Transferability in MLLMs via Dynamic Vision-Language Alignment Attack](https://huggingface.co/papers/2502.19672) (2025)\n* [Breaking the Limits of Quantization-Aware Defenses: QADT-R for Robustness Against Patch-Based Adversarial Attacks in QNNs](https://huggingface.co/papers/2503.07058) (2025)\n* [Adversary-Aware DPO: Enhancing Safety Alignment in Vision Language Models via Adversarial Training](https://huggingface.co/papers/2502.11455) (2025)\n* [Universal Adversarial Attack on Aligned Multimodal LLMs](https://huggingface.co/papers/2502.07987) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10636.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10636", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Designing a Conditional Prior Distribution for Flow-Based Generative Models](https://huggingface.co/papers/2502.09611) (2025)\n* [Block Flow: Learning Straight Flow on Data Blocks](https://huggingface.co/papers/2501.11361) (2025)\n* [Efficient Distillation of Classifier-Free Guidance using Adapters](https://huggingface.co/papers/2503.07274) (2025)\n* [Towards Hierarchical Rectified Flow](https://huggingface.co/papers/2502.17436) (2025)\n* [Probabilistic Forecasting via Autoregressive Flow Matching](https://huggingface.co/papers/2503.10375) (2025)\n* [SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations](https://huggingface.co/papers/2502.02472) (2025)\n* [Training Consistency Models with Variational Noise Coupling](https://huggingface.co/papers/2502.18197) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10637.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10637", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SliderSpace: Decomposing the Visual Capabilities of Diffusion Models](https://huggingface.co/papers/2502.01639) (2025)\n* [Adding Additional Control to One-Step Diffusion with Joint Distribution Matching](https://huggingface.co/papers/2503.06652) (2025)\n* [EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer](https://huggingface.co/papers/2503.07027) (2025)\n* [SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation](https://huggingface.co/papers/2503.09641) (2025)\n* [Underlying Semantic Diffusion for Effective and Efficient In-Context Learning](https://huggingface.co/papers/2503.04050) (2025)\n* [Accelerate High-Quality Diffusion Models with Inner Loop Feedback](https://huggingface.co/papers/2501.13107) (2025)\n* [Investigating and Improving Counter-Stereotypical Action Relation in Text-to-Image Diffusion Models](https://huggingface.co/papers/2503.10037) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10638.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10638", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Diffusion Models without Classifier-free Guidance](https://huggingface.co/papers/2502.12154) (2025)\n* [Efficient Distillation of Classifier-Free Guidance using Adapters](https://huggingface.co/papers/2503.07274) (2025)\n* [Visual Generation Without Guidance](https://huggingface.co/papers/2501.15420) (2025)\n* [Classifier-free Guidance with Adaptive Scaling](https://huggingface.co/papers/2502.10574) (2025)\n* [Beyond and Free from Diffusion: Invertible Guided Consistency Training](https://huggingface.co/papers/2502.05391) (2025)\n* [History-Guided Video Diffusion](https://huggingface.co/papers/2502.06764) (2025)\n* [Understanding Classifier-Free Guidance: High-Dimensional Theory and Non-Linear Generalizations](https://huggingface.co/papers/2502.07849) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.10639.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2503.10639", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing](https://huggingface.co/papers/2502.21291) (2025)\n* [WeGen: A Unified Model for Interactive Multimodal Generation as We Chat](https://huggingface.co/papers/2503.01115) (2025)\n* [MINT: Multi-modal Chain of Thought in Unified Generative Models for Enhanced Image Generation](https://huggingface.co/papers/2503.01298) (2025)\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [ComposeAnyone: Controllable Layout-to-Human Generation with Decoupled Multimodal Conditions](https://huggingface.co/papers/2501.12173) (2025)\n* [RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models](https://huggingface.co/papers/2503.10406) (2025)\n* [I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models](https://huggingface.co/papers/2502.10458) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}