RLBFF: Binary Flexible Feedback to bridge between Human Feedback & Verifiable Rewards Paper • 2509.21319 • Published Sep 25, 2025
CoT-Self-Instruct: Building high-quality synthetic prompts for reasoning and non-reasoning tasks Paper • 2507.23751 • Published Jul 31, 2025
Evolving LLMs' Self-Refinement Capability via Iterative Preference Optimization Paper • 2502.05605 • Published Feb 8, 2025
WarriorCoder: Learning from Expert Battles to Augment Code Large Language Models Paper • 2412.17395 • Published Dec 23, 2024