Collections including paper arxiv:2508.17445

- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization
  Paper • 2508.07629 • Published • 41
- Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning
  Paper • 2508.07101 • Published • 13
- Compressing Chain-of-Thought in LLMs via Step Entropy
  Paper • 2508.03346 • Published • 7
- Train Long, Think Short: Curriculum Learning for Efficient Reasoning
  Paper • 2508.08940 • Published • 26

- TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling
  Paper • 2508.17445 • Published • 80
- m-a-p/TreePO-Qwen2.5-7B
  Text Generation • 8B • Updated • 81 • 2
- m-a-p/TreePO_data
  Viewer • Updated • 3.12k • 264
- m-a-p/TreePO-Qwen2.5-7B_fixed-div
  8B • Updated • 66

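The TreePO artifacts above are ordinary Hub repositories, so they should be loadable with the standard transformers and datasets APIs. A minimal sketch, assuming the checkpoint follows the stock Qwen2.5 causal-LM format and that TreePO_data loads with a default configuration (neither verified here):

```python
# Minimal loading sketch, not taken from the TreePO repos themselves.
# Assumptions: the checkpoint uses the standard Qwen2.5 causal-LM format,
# and m-a-p/TreePO_data has a default config loadable without extra arguments.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-a-p/TreePO-Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Quick sanity check: generate a short completion from the checkpoint.
prompt = "Question: What is the sum of the first 10 positive integers? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Inspect the accompanying dataset.
data = load_dataset("m-a-p/TreePO_data")
print(data)
```
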
- lusxvr/nanoVLM-222M
  Image-Text-to-Text • 0.2B • Updated • 260 • 97
- Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
  Paper • 2503.09516 • Published • 36
- AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time
  Paper • 2505.24863 • Published • 97
- QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning
  Paper • 2505.17667 • Published • 88

- RL + Transformer = A General-Purpose Problem Solver
  Paper • 2501.14176 • Published • 28
- Towards General-Purpose Model-Free Reinforcement Learning
  Paper • 2501.16142 • Published • 30
- SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
  Paper • 2501.17161 • Published • 123
- MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization
  Paper • 2412.12098 • Published • 4

- Diffusion Augmented Agents: A Framework for Efficient Exploration and Transfer Learning
  Paper • 2407.20798 • Published • 24
- Offline Reinforcement Learning for LLM Multi-Step Reasoning
  Paper • 2412.16145 • Published • 38
- REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
  Paper • 2501.03262 • Published • 102
- SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution
  Paper • 2502.18449 • Published • 75

- Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning
  Paper • 2508.20751 • Published • 89
- TreePO: Bridging the Gap of Policy Optimization and Efficacy and Inference Efficiency with Heuristic Tree-based Modeling
  Paper • 2508.17445 • Published • 80
- VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space
  Paper • 2508.19247 • Published • 41
- VibeVoice Technical Report
  Paper • 2508.19205 • Published • 123

- Seed-Coder: Let the Code Model Curate Data for Itself
  Paper • 2506.03524 • Published • 6
- Seed1.5-Thinking: Advancing Superb Reasoning Models with Reinforcement Learning
  Paper • 2504.13914 • Published • 4
- FlowTok: Flowing Seamlessly Across Text and Image Tokens
  Paper • 2503.10772 • Published • 19
- UVE: Are MLLMs Unified Evaluators for AI-Generated Videos?
  Paper • 2503.09949 • Published • 5

- Nuclear Norm Regularization for Deep Learning
  Paper • 2405.14544 • Published • 1
- Token embeddings violate the manifold hypothesis
  Paper • 2504.01002 • Published • 1
- Approximate Nullspace Augmented Finetuning for Robust Vision Transformers
  Paper • 2403.10476 • Published • 1
- ElaLoRA: Elastic & Learnable Low-Rank Adaptation for Efficient Model Fine-Tuning
  Paper • 2504.00254 • Published • 1

- PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment
  Paper • 2410.13785 • Published • 19
- Aligning Large Language Models via Self-Steering Optimization
  Paper • 2410.17131 • Published • 24
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 51
- SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation
  Paper • 2410.14745 • Published • 47

- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 23
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 84
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 151
- SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity
  Paper • 2401.17072 • Published • 25

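A listing like the one above can also be queried programmatically. A minimal sketch using huggingface_hub, assuming a recent release that exposes list_collections and accepts item filters of the form "papers/<arxiv-id>":

```python
# Minimal sketch: query community collections that include the TreePO paper.
# Assumption: a recent huggingface_hub release exposing list_collections
# with the "papers/<arxiv-id>" item-filter syntax.
from huggingface_hub import list_collections

for col in list_collections(item="papers/2508.17445", limit=50):
    print(f"{col.title} ({col.slug}) • upvotes: {col.upvotes}")
```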