gpt-oss-20b-speculator.eagle3

Model Overview

  • Verifier: openai/gpt-oss-20b
  • Speculative Decoding Algorithm: EAGLE-3
  • Model Architecture: Eagle3Speculator
  • Release Date: 11/21/2025
  • Version: 2.0
  • Model Developers: Red Hat

This is a speculator model designed for use with openai/gpt-oss-20b, based on the EAGLE-3 speculative decoding algorithm. It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered dataset and the train_sft split of the HuggingFaceH4/ultrachat_200k dataset. This model should be used with the openai/gpt-oss-20b chat template, specifically through the /chat/completions endpoint.
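As an illustration, a minimal request body for the /chat/completions endpoint might look like the sketch below. The prompt and sampling parameters are illustrative, not taken from this card; note that the speculator is applied server-side, so the client only names the verifier model.

```python
import json

# Illustrative request body for the OpenAI-compatible /chat/completions
# endpoint served by vLLM. The client references only the verifier model;
# the EAGLE-3 speculator configured on the server is invisible here.
request_body = {
    "model": "openai/gpt-oss-20b",
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string."}
    ],
    "temperature": 0.0,   # example value; tune for your workload
    "max_tokens": 256,
}

# Serialized payload that would be POSTed to http://<host>:8000/v1/chat/completions
payload = json.dumps(request_body)
print(payload)
```

Any OpenAI-compatible client (e.g. the `openai` Python package pointed at the vLLM base URL) can send this payload unchanged.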

Use with vLLM

vllm serve openai/gpt-oss-20b \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/gpt-oss-20b-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
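When scripting launches, the speculative config can be assembled programmatically instead of hand-writing the inline JSON. This sketch builds the same command shown above (the script structure is our own, not part of the card):

```python
import json
import shlex

# Speculative-decoding config matching the vLLM serve example above.
spec_config = {
    "model": "RedHatAI/gpt-oss-20b-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3",
}

# Assemble the argv for `vllm serve`; json.dumps produces the value
# expected by --speculative-config.
cmd = [
    "vllm", "serve", "openai/gpt-oss-20b",
    "-tp", "1",
    "--speculative-config", json.dumps(spec_config),
]

# shlex.join quotes the JSON argument safely for a shell.
print(shlex.join(cmd))
```

This is convenient when sweeping over `num_speculative_tokens`, as in the evaluation below.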

Evaluations

Use cases

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | GSM8K | 80 |
| Text Summarization | CNN/Daily Mail | 80 |

Acceptance lengths

| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 |
|---|---|---|---|---|---|---|---|
| Coding | 1.67 | 2.06 | 2.38 | 2.41 | 2.52 | 2.78 | 2.61 |
| Math Reasoning | 1.80 | 2.38 | 2.90 | 2.89 | 1.96 | 3.48 | 3.20 |
| Text Summarization | 1.63 | 2.05 | 2.18 | 2.31 | 2.33 | 2.38 | 2.35 |
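Reading the acceptance lengths above programmatically, a small sketch can pick the k with the highest acceptance length per use case. Keep in mind that acceptance length alone does not determine wall-clock speedup, since drafting cost also grows with k.

```python
# Acceptance lengths transcribed from the table above
# (mean tokens accepted per verifier step, keyed by k).
acceptance = {
    "Coding":             {1: 1.67, 2: 2.06, 3: 2.38, 4: 2.41, 5: 2.52, 6: 2.78, 7: 2.61},
    "Math Reasoning":     {1: 1.80, 2: 2.38, 3: 2.90, 4: 2.89, 5: 1.96, 6: 3.48, 7: 3.20},
    "Text Summarization": {1: 1.63, 2: 2.05, 3: 2.18, 4: 2.31, 5: 2.33, 6: 2.38, 7: 2.35},
}

# For each use case, report the k that maximizes acceptance length.
for use_case, by_k in acceptance.items():
    best_k = max(by_k, key=by_k.get)
    print(f"{use_case}: best k={best_k} (acceptance length {by_k[best_k]})")
```

In this table all three use cases peak at k=6, while the serving example above uses `num_speculative_tokens: 3`; the right choice depends on hardware and latency targets.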

Configuration
  • temperature: 0.6
  • top_p: 0.95
  • top_k: 20
  • repetitions: 3
  • time per experiment: 10min
  • hardware: 2xA100
  • vLLM version: 0.11.0
  • GuideLLM version: 0.3.0

Command

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/SpeculativeDecoding" \
  --rate-type sweep \
  --max-seconds 600 \
  --output-path "gpt-oss-20b-HumanEval.json" \
  --backend-args '{"extra_body": {"chat_completions": {"temperature":0.0}}}'
