Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model

This repository presents Ring-1T, an open-source, state-of-the-art thinking model with one trillion parameters, as detailed in the paper Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model. For the full codebase, please refer to the GitHub repository.

🤗 Hugging Face   |   🤖 ModelScope   |   🙏 Experience Now

Today, we officially launch the trillion-parameter thinking model, Ring-1T. It is open-source upon release: developers can download the model weights from Hugging Face and ModelScope, or chat with the model directly and call it via API through the Ling Chat page and ZenMux (links provided at the end of the article).

Building upon the preview version released at the end of last month, Ring-1T has undergone continued large-scale training with reinforcement learning from verifiable rewards (RLVR), further unlocking the natural language reasoning capabilities of the trillion-parameter foundation model. RLHF training has also refined the model's general abilities, making this release of Ring-1T more balanced in performance across various tasks.

Ring-1T adopts the Ling 2.0 architecture and is trained on the Ling-1T-base foundation model, which contains 1 trillion total parameters with 50 billion activated parameters and supports a context window of up to 128K tokens. Leveraging our self-developed icepop reinforcement learning stabilization method and the efficient reinforcement learning system ASystem (whose AReaL framework is already open-source), we have achieved smooth scaling of reinforcement learning on MoE architectures, from tens of billions (Ring-mini-2.0) to hundreds of billions (Ring-flash-2.0) to a trillion (Ring-1T) parameters, significantly enhancing the model's deep reasoning and natural language inference capabilities.

Model Downloads

You can download Ring-1T from the following table. If you are located in mainland China, we also provide the model on ModelScope to speed up the download process.

| Model | Context Length | Download |
|-------|----------------|----------|
| Ring-1T | 64K -> 128K (YaRN) | 🤗 HuggingFace · 🤖 ModelScope |
| Ring-1T-FP8 | 64K -> 128K (YaRN) | 🤗 HuggingFace · 🤖 ModelScope |

Note: If you are interested in previous versions, please visit the past model collections on Hugging Face or ModelScope.

Continuously Evolving Deep Reasoning Capabilities

To evaluate the deep reasoning capabilities of Ring-1T, we selected representative open-source thinking models (Ring-1T-preview, DeepSeek-V3.1-Terminus-Thinking, Qwen-235B-A22B-Thinking-2507) and closed-source APIs (Gemini-2.5-Pro and GPT-5-Thinking (High)) as benchmarks. First, compared to the previously open-sourced preview version, Ring-1T demonstrates more balanced performance across various tasks. Furthermore, Ring-1T achieves leading performance among open-source models on challenging reasoning benchmarks such as math competitions (AIME 25, HMMT 25), code generation (LiveCodeBench, Codeforces), and logical reasoning (ARC-AGI-v1). It also exhibits strong competitiveness in comprehensive tasks (Arena-Hard-v2.0), healthcare (HealthBench), and creative writing (Creative Writing v3).

Although we have implemented string-level and semantic-level contamination filtering against benchmark tasks across all training stages (pre-training corpora, instruction fine-tuning data, and reinforcement learning prompts), rigorous decontamination for earlier published benchmarks remains a significant challenge in the industry. To analyze Ring-1T's deep reasoning capabilities more objectively, we conducted tests on the IMO 2025 (International Mathematical Olympiad), held in July this year, and the recently concluded ICPC World Finals 2025 (International Collegiate Programming Contest World Finals).

For the IMO 2025 test, as with the earlier preview version, we integrated Ring-1T into the multi-agent framework AWorld (https://github.com/inclusionAI/AWorld) and used pure natural language reasoning to solve the problems. The results show that Ring-1T solved Problems 1, 3, 4, and 5 in a single attempt (silver medal level at the IMO). On the third attempt, it also produced a nearly perfect proof for Problem 2, a geometry proof. For the most challenging Problem 6 (which no AI contestant solved correctly at IMO 2025), Ring-1T converged to the same answer as Gemini 2.5 Pro, 4048 (the correct answer is 2112). We believe that with ongoing optimization, Ring-1T has the potential to reach gold medal level at the IMO in a single attempt in the future.

At the ICPC World Finals 2025, we compared GPT-5-Thinking, Gemini-2.5-Pro, and Ring-1T. In a test allowing three attempts for direct problem-solving by the models, they solved 6 (problems CDEFKL), 3 (problems DFK), and 5 (problems DFJKL) problems, respectively. The results demonstrate that Ring-1T also delivers outstanding performance in top-tier international programming competitions. Further testing is ongoing, and we will also open-source the solution traces of the models for the aforementioned competitions (IMO traces are provided at the end of the article). We look forward to collaborating with the community to further optimize the reasoning potential of this trillion-parameter thinking model.

Icepop: Ensuring Stable Reinforcement Learning Through Long-Term Training

In the reinforcement learning training of MoE models, discrepancies between the operator implementations of the training and inference engines are more pronounced than for dense models. This divergence grows as sequence length and training steps accumulate, particularly during long-sequence generation and extended training cycles. As illustrated in the experiments below, the original GRPO algorithm begins to collapse after relatively few training steps. In contrast, our proposed Icepop algorithm mitigates this issue by correcting the distribution through masked bidirectional truncation, effectively narrowing the gap between the training and inference phases and thereby "cooling down" the rapidly escalating training-inference discrepancy.

Figure 1: The training-inference discrepancy of GRPO increases exponentially with training, while Icepop remains relatively stable.

Figure 2: Maximum training-inference discrepancy: GRPO shows a significant rise with training, whereas Icepop maintains a low level.
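As a rough illustration of the idea (the function names, thresholds, and loss form below are our own assumptions, not the exact Icepop implementation), bidirectional truncation can be pictured as masking out tokens whose training/inference probability ratio has drifted too far in either direction, so that they contribute no gradient:

import torch

def icepop_mask(logp_train: torch.Tensor, logp_infer: torch.Tensor,
                low: float = 0.5, high: float = 2.0):
    """Hypothetical sketch of masked bidirectional truncation.
    logp_train: per-token log-probs recomputed by the training engine
    logp_infer: per-token log-probs recorded by the inference (rollout) engine
    Tokens whose probability ratio falls outside [low, high] are masked out
    of the update rather than merely clipped."""
    ratio = torch.exp(logp_train - logp_infer)
    mask = ((ratio >= low) & (ratio <= high)).to(ratio.dtype)
    return ratio, mask

def masked_token_loss(logp_train, logp_infer, advantages):
    # Token-level policy-gradient loss with the bidirectional mask applied:
    # far-off-policy tokens contribute zero gradient, which "cools down"
    # the accumulating training-inference discrepancy.
    ratio, mask = icepop_mask(logp_train, logp_infer)
    return -(ratio * advantages * mask).sum() / mask.sum().clamp(min=1.0)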

ASystem: In-House RL Framework "Mastering" Trillion-Scale Training

To ensure stable and efficient reinforcement learning training for trillion-parameter foundation models, we independently developed ASystem, a high-performance reinforcement learning system. ASystem adopts a SingleController + SPMD architecture. Its training and inference engines have been meticulously optimized for the memory management and weight exchange challenges specific to trillion-parameter models. Leveraging our self-developed unified memory pool for training and inference, it achieves transparent memory offloading, efficiently reclaims fragmented memory, and reduces the risk of out-of-memory failures. Through techniques such as direct peer-to-peer (P2P) communication between GPUs and in-place updates, it enables second-level, zero-redundancy model weight exchange.
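A heavily simplified sketch of the weight-exchange idea is shown below. It is purely illustrative (the helper is a hypothetical stand-in, not ASystem's actual code): updated training parameters are pushed directly into the inference engine's pre-allocated GPU buffers, so no extra full copy of the model is ever materialized.

import torch
import torch.distributed as dist

def push_weights_in_place(train_params, infer_params, src_rank: int = 0):
    """Hypothetical sketch: broadcast each updated weight from the training
    ranks and write it into the inference engine's existing GPU buffer in
    place (no re-allocation and no staging copy of the full model)."""
    for p_train, p_infer in zip(train_params, infer_params):
        dist.broadcast(p_train.data, src=src_rank)            # direct GPU-to-GPU transfer (e.g. NCCL P2P)
        p_infer.data.copy_(p_train.data, non_blocking=True)   # in-place update of the serving weights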

For the RL training framework, we built a hybrid reward system based on large-scale Serverless Sandbox technology. This system can start up in milliseconds, supports execution environments for over 10 programming languages, and handles request throughput of up to 10K/s. We have open-sourced AReaL and hope to accelerate RL training and research in the open-source community through technological openness.
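For intuition, a verifiable code reward built on such a sandbox might look like the sketch below. The endpoint, request schema, and response fields are invented placeholders rather than the actual ASystem/AReaL API:

import requests

SANDBOX_URL = "https://sandbox.example.com/run"  # placeholder endpoint, not a real service

def code_reward(solution_code: str, unit_tests: str, language: str = "python") -> float:
    """Hypothetical verifiable reward: execute the model's solution against
    unit tests inside an isolated sandbox and map the result to a scalar."""
    resp = requests.post(
        SANDBOX_URL,
        json={"language": language, "code": solution_code, "tests": unit_tests, "timeout_s": 10},
        timeout=15,
    )
    result = resp.json()
    return 1.0 if result.get("all_tests_passed") else 0.0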

Quickstart

🚀 Try Online

You can experience Ring-1T online at: ZenMux

🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-flash-2.0" # Note: This example uses Ring-flash-2.0, replace with inclusionAI/Ring-1T if desired.

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
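# Print the decoded reply
print(response)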

🔌 API Usage

You can also use Ring-1T through API calls:

from openai import OpenAI

# 1. Initialize the OpenAI client
client = OpenAI(
    # 2. Point the base URL to the ZenMux endpoint
    base_url="https://zenmux.ai/api/v1",
    # 3. Replace with the API Key from your ZenMux user console
    api_key="<your ZENMUX_API_KEY>",
)

# 4. Make a request
completion = client.chat.completions.create(
    # 5. Specify the model to use in the format "provider/model-name"
    model="inclusionai/ring-1t",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)

print(completion.choices[0].message.content)

Deployment

SGLang

Environment Preparation

We will submit our model to the official SGLang release later. For now, prepare the environment as follows:

pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1

You can also use the Docker image:

docker pull lmsysorg/sglang:v0.5.2rc0-cu126

Then apply our patch to the SGLang installation:

# the `patch` command is required; install it with `yum install -y patch` if missing
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch

Run Inference

SGLang now supports both BF16 and FP8 models; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same launch command:

  • Start server:
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3

MTP is supported for the base model but not yet for the chat model. You can add the parameter --speculative-algorithm NEXTN to the start command.
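For the base model, the launch command above would then look like this:

python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3 \
    --speculative-algorithm NEXTN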

  • Client:
curl -s http://localhost:${PORT}/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'

More usage can be found here

vLLM

vLLM supports offline batched inference or launching an OpenAI-Compatible API Service for online inference.

Environment Preparation

Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:

git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ring-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .

Offline Inference:

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-1T")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ring-1T", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
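# Print only the generated continuation for the single prompt
print(outputs[0].outputs[0].text)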

Online Inference:

vllm serve inclusionAI/Ring-1T \
              --tensor-parallel-size 2 \
              --pipeline-parallel-size 1 \
              --use-v2-block-manager \
              --gpu-memory-utilization 0.90

To handle long context in vLLM using YaRN, we need to follow these two steps:

  1. Add a rope_scaling field to the model's config.json file, for example:
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
  2. Use the additional parameter --max-model-len to specify the desired maximum context length when starting the vLLM service, as in the example below.
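For example, with the factor-4.0 setting above (32768 × 4 = 131072 tokens), the service could be started with the earlier command plus --max-model-len:

vllm serve inclusionAI/Ring-1T \
              --tensor-parallel-size 2 \
              --pipeline-parallel-size 1 \
              --use-v2-block-manager \
              --gpu-memory-utilization 0.90 \
              --max-model-len 131072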

For detailed guidance, please refer to the vLLM instructions.

Finetuning

We recommend using Llama-Factory to fine-tune Ring.

Limitations and Future Plans

Ring-1T represents the Bailing team's first attempt at developing a trillion-scale deep thinking model. The current version may occasionally exhibit issues such as identity recognition bias, language mixing, and repetitive generation. Additionally, since its attention architecture still adopts the GQA approach from Ling 2.0, there remains room for improvement in inference efficiency under long-context scenarios.

We will continue to optimize these aspects in future releases and highly welcome feedback from the community. Furthermore, training for Ring-1T is still ongoing. We are committed to further unlocking the reasoning potential of this trillion-parameter foundation model and look forward to sharing more mature upgraded versions with everyone as soon as possible.

You are welcome to visit our open-source repositories and demo pages below to download and use the model.

Hugging Face: https://huggingface.co/inclusionAI/Ring-1T

ModelScope: https://modelscope.cn/models/inclusionAI/Ring-1T

Ling Chat (for Chinese users): https://ling.tbox.cn/chat

ZenMux (for overseas developers, offering Chat testing and API capabilities): https://zenmux.ai/inclusionai/ring-1t?utm_source=hf_inclusionAI

Ring-1T@Aworld IMO test trajectory: https://github.com/inclusionAI/AWorld/tree/main/examples/imo/samples/samples%20from%20Ring-1T

License

This code repository is licensed under the MIT License.

Citation

If you find our work helpful, feel free to cite it.

@inproceedings{lingteam2025ring1t,
      title={Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model},
      author={Ling Team and Anqi Shen and Baihui Li and Bin Hu and Bin Jing and Cai Chen and Chao Huang and Chao Zhang and Chaokun Yang and Cheng Lin and Chengyao Wen and Congqi Li and Deng Zhao and Dingbo Yuan and Donghai You and Fagui Mao and Fanzhuang Meng and Feng Xu and Guojie Li and Guowei Wang and Hao Dai and Haonan Zheng and Hong Liu and Jia Guo and Jiaming Liu and Jian Liu and Jianhao Fu and Jiannan Shi and Jianwen Wang and Jianxin Lai and Jin Yang and Jun Mei and Jun Zhou and Junbo Zhao and Junping Zhao and Kuan Xu and Le Su and Lei Chen and Li Tang and Liang Jiang and Liangcheng Fu and Lianhao Xu and Linfeng Shi and Lisha Liao and Longfei Zheng and Meng Li and Mingchun Chen and Qi Zuo and Qiang Cheng and Qianggang Cao and Qitao Shi and Quanrui Guo and Senlin Zhu and Shaofei Wang and Shaomian Zheng and Shuaicheng Li and Shuwei Gu and Siba Chen and Tao Wu and Tao Zhang and Tianyu Zhang and Tianyu Zhou and Tiwei Bie and Tongkai Yang and Wang Hong and Wang Ren and Weihua Chen and Wenbo Yu and Wengang Zheng and Xiangchun Wang and Xiaodong Yan and Xiaopei Wan and Xin Zhao and Xinyu Kong and Xinyu Tang and Xudong Han and Xudong Wang and Xuemin Yang and Xueyu Hu and Yalin Zhang and Yan Sun and Yicheng Shan and Yilong Wang and Yingying Xu and Yongkang Liu and Yongzhen Guo and Yuanyuan Wang and Yuchen Yan and Yuefan Wang and Yuhong Guo and Zehuan Li and Zhankai Xu and Zhe Li and Zhenduo Zhang and Zhengke Gui and Zhenxuan Pan and Zhenyu Huang and Zhenzhong Lan and Zhiqiang Ding and Zhiqiang Zhang and Zhixun Li and Zhizhen Liu and Zihao Wang and Zujie Wen},
      year={2025},
      eprint={2510.18855},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}