PursuitOfDataScience/qwen2.5-0.5b-open-r1-mot-cot-sft

This repository contains a two-stage fine-tuned version of Qwen 2.5 0.5B aimed at improved reasoning and chain-of-thought generation:

  1. Supervised fine-tuning (SFT) on HuggingFaceH4/ultrachat_200k to improve dialogue quality and instruction following.
  2. Chain-of-Thought (CoT) finetuning on the open-r1/Mixture-of-Thoughts dataset to encourage step-by-step reasoning.
    • During CoT finetuning, assistant replies are prefixed with a short CoT sentinel (e.g. "<think>\n") to teach the model to produce internal reasoning before a concise final answer.

Model details

  • Base model: Qwen/Qwen2.5-0.5B
  • Weights: BF16, stored in safetensors format.
  • Stage 1 objective: Supervised fine-tuning on Ultrachat-style multi-turn conversations.
  • Stage 2 objective: Supervised Chain-of-Thought style finetuning on open-r1/Mixture-of-Thoughts. The model learns to place an explicit internal reasoning prefix and then continue the reply.
  • Context length: Inferred from base config; tokenization was performed with consistent chat template usage.
  • Training data:
    • SFT: multi-turn dialogues from HuggingFaceH4/ultrachat_200k.
    • CoT: Mixture-of-Thoughts style data from open-r1/Mixture-of-Thoughts.
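The sentinel-prefixing step in Stage 2 can be sketched as below. This is an illustrative helper, not the actual training code; the sentinel string matches the one described above, but the function name and example conversation are made up here.

```python
# Sketch: prefix assistant replies with the CoT sentinel before tokenization.
# Helper name and example conversation are illustrative, not from the training code.
COT_SENTINEL = "<think>\n"

def add_cot_sentinel(messages):
    """Return a copy of the conversation with each assistant reply prefixed."""
    out = []
    for msg in messages:
        if msg["role"] == "assistant" and not msg["content"].startswith(COT_SENTINEL):
            msg = {"role": "assistant", "content": COT_SENTINEL + msg["content"]}
        out.append(msg)
    return out

example = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "2 + 2 = 4."},
]
print(add_cot_sentinel(example)[1]["content"])
```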

Inference usage

This model supports both normal conversational behavior and chain-of-thought (CoT) reasoning. Use tokenizer.apply_chat_template to build prompts that match the training format (especially important for CoT prompts).

Example: normal chat usage:

from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "PursuitOfDataScience/qwen2.5-0.5b-open-r1-mot-cot-sft"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful, concise assistant. "
            "Write clear, well-structured answers that follow the user's constraints."
        ),
    },
    {
        "role": "user",
        "content": "Explain how someone can build a consistent daily learning habit.",
    },
]

prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)

generated_tokens = outputs[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)

Example: encourage Chain-of-Thought reasoning (the CoT-friendly usage mirrors the CoT finetuning format):

messages = [
    {
        "role": "system",
        "content": (
            "You are a thoughtful assistant. Provide step-by-step reasoning when relevant, "
            "followed by a concise summary."
        ),
    },
    {
        "role": "user",
        "content": "Explain how someone can build a consistent daily learning habit.",
    },
    # Add the CoT sentinel as an assistant prefix so the model continues with chain-of-thought.
    {
        "role": "assistant",
        "content": "<think>\n",
    },
]

prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    # Continue the partial assistant message instead of opening a new turn.
    continue_final_message=True,
)

inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)

generated_tokens = outputs[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)
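Because generation continues from the "<think>\n" prefix, the decoded response typically contains the reasoning followed by the final answer. A small post-processing helper like the sketch below can separate the two; it assumes the model closes its reasoning with a literal "</think>" tag (if your tokenizer registers the tags as special tokens, decode with skip_special_tokens=False first so the tag survives).

```python
# Sketch: split a CoT response into reasoning and final answer.
# Assumes a literal "</think>" closing tag; if it is absent, the whole
# response is treated as the final answer.
def split_cot(response, close_tag="</think>"):
    reasoning, sep, answer = response.partition(close_tag)
    if not sep:
        return "", response.strip()
    return reasoning.strip(), answer.strip()

reasoning, answer = split_cot(
    "First, consider habits...</think>\nStart small and be consistent."
)
print(answer)  # -> Start small and be consistent.
```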

Multi-turn CoT example

messages = [
    {
        "role": "system",
        "content": (
            "You are a thoughtful assistant with a preference for process-focused answers. "
            "When helpful, show your chain-of-thought reasoning and finish with a short conclusion."
        ),
    },
    {
        "role": "user",
        "content": "Describe the main trade-offs between using small and large language models.",
    },
    {
        "role": "user",
        "content": "Give me a bullet-point summary from the perspective of a startup.",
    },
    # CoT sentinel as a partial assistant message so the model continues with reasoning.
    {
        "role": "assistant",
        "content": "<think>\n",
    },
]

prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    # Continue the partial assistant message instead of opening a new turn.
    continue_final_message=True,
)

inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
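With longer multi-turn transcripts it is worth checking that the prompt leaves room for generation. A minimal sketch, assuming the window is read from model.config.max_position_embeddings (the 32768 value below is only an illustrative stand-in):

```python
# Sketch: guard against prompts that leave no room for generation.
# The 32768 window below is an illustrative stand-in; in practice read it
# from model.config.max_position_embeddings.
def fits_context(prompt_len, max_new_tokens, context_window):
    """True if the prompt plus the requested generation fits the model window."""
    return prompt_len + max_new_tokens <= context_window

# In practice:
# fits_context(inputs["input_ids"].shape[1], 256, model.config.max_position_embeddings)
print(fits_context(1000, 256, 32768))  # -> True
```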

Training pipeline (summary)

  1. Instruction SFT (Ultrachat):
    • Convert multi-turn conversations into message lists that match the chat template.
    • For each assistant reply, build a single training example with tokenizer.apply_chat_template.
    • Mask token-level loss to train only on assistant tokens (system/user context is not supervised).
  2. Chain-of-Thought finetuning (open-r1 Mixture-of-Thoughts):
    • Convert the CoT training data into the same chat template.
    • Each assistant reply is prefixed with a CoT sentinel (e.g. "<think>\n") so the model learns to produce internal reasoning followed by a concise answer.
    • See cot-finetuning.py for exact hyperparameters and training details.
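The token-level loss masking from step 1 can be sketched with plain lists. This is a simplified illustration (the real pipeline works on tokenizer output); the token ids and helper name are made up here. Prompt positions get the label -100, which cross-entropy loss in PyTorch ignores, so only the assistant's tokens are supervised.

```python
# Sketch of assistant-only loss masking: label prompt tokens with -100 so
# the cross-entropy loss ignores them. Token ids are illustrative.
IGNORE_INDEX = -100

def mask_prompt_tokens(input_ids, prompt_len):
    """Supervise only the assistant's tokens; mask everything before them."""
    labels = list(input_ids)
    labels[:prompt_len] = [IGNORE_INDEX] * prompt_len
    return labels

full_ids = [101, 7, 8, 9, 42, 43, 44, 102]  # prompt (4 tokens) + assistant reply
print(mask_prompt_tokens(full_ids, 4))  # -> [-100, -100, -100, -100, 42, 43, 44, 102]
```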

Limitations

  • This model can hallucinate or provide incorrect reasoning, particularly for complex or long-horizon reasoning tasks.
  • Chain-of-Thought outputs may disclose intermediate reasoning that could contain errors; validate important claims.
  • Add verification and guardrails for sensitive or safety-critical use cases before production deployment.