---
license: mit
language:
  - en
base_model:
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---


# 🚀 Can we cast reward modeling as a reasoning task?

RM-R1 is a training framework for Reasoning Reward Models (ReasRMs) that judge two candidate answers by first thinking out loud, generating rubrics or reasoning traces, and then emitting a preference.
Compared with prior scalar or vanilla generative reward models, RM-R1 delivers up to +13.8% absolute accuracy gains on public reward-model benchmarks while providing fully interpretable critiques.
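
A minimal inference sketch of this judge-style usage, assuming the checkpoint loads with Hugging Face `transformers` and inherits its base model's chat template. The prompt wording, the `[[A]]`/`[[B]]` verdict tag, and the `MODEL_ID` placeholder are illustrative assumptions, not the exact Chain-of-Rubrics template used in training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-repo-id>"  # placeholder: replace with this model's Hub ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

question = "What is the capital of Australia?"
answer_a = "The capital of Australia is Sydney."
answer_b = "The capital of Australia is Canberra."

# Ask the model to reason out loud first, then commit to a verdict tag.
messages = [{
    "role": "user",
    "content": (
        f"Question: {question}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        "Evaluate both answers step by step, then give your final preference as [[A]] or [[B]]."
    ),
}]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Print only the newly generated critique and verdict.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```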

## TL;DR

- Two-stage training:
  1. Distillation of ~8.7K high-quality reasoning traces (Chain-of-Rubrics).
  2. Reinforcement Learning with Verifiable Rewards (RLVR) on ~64K preference pairs (a toy reward check is sketched after this list).
- Backbones released: 7B / 14B / 32B Qwen-2.5-Instruct variants + DeepSeek-distilled checkpoints.
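
To make the RLVR step concrete, here is a toy sketch of a verifiable reward, assuming the model ends its critique with a parseable verdict tag; the `[[A]]`/`[[B]]` format and the exact reward values are illustrative assumptions, not the released training recipe:

```python
import re

def verifiable_reward(judgment: str, gold_choice: str) -> float:
    """Return +1 if the judgment's final [[A]]/[[B]] verdict matches the annotated
    preference, -1 otherwise (including unparseable outputs). The tag format and
    reward values are assumptions for illustration only."""
    verdicts = re.findall(r"\[\[([AB])\]\]", judgment)
    if not verdicts:
        return -1.0  # no parseable verdict gets the lowest reward
    return 1.0 if verdicts[-1] == gold_choice else -1.0

# Example: a trace ending "...so B is factually correct. Final verdict: [[B]]"
print(verifiable_reward("reasoning ... Final verdict: [[B]]", gold_choice="B"))  # 1.0
```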

## Intended uses

- RLHF / RLAIF: plug-and-play reward function for policy optimisation (a best-of-n selection sketch follows this list).
- Automated evaluation: LLM-as-a-judge for open-domain QA, chat, and reasoning.
- Research: study process supervision, chain-of-thought verification, or rubric generation.
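
As one hedged illustration of the plug-and-play judging use case, the sketch below runs tournament-style best-of-n selection; `judge(question, answer_a, answer_b)` is a hypothetical wrapper around the generation call shown earlier that returns "A" or "B":

```python
from typing import Callable, List

def best_of_n(question: str, candidates: List[str],
              judge: Callable[[str, str, str], str]) -> str:
    """Tournament-style selection: keep whichever candidate the judge prefers
    in each pairwise round. `judge` is a hypothetical wrapper that returns
    "A" (current best wins) or "B" (challenger wins)."""
    best = candidates[0]
    for challenger in candidates[1:]:
        if judge(question, best, challenger) == "B":
            best = challenger
    return best
```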