KMasaki committed
Commit 25d1bab · verified · 1 parent: e75b199

End of training
Files changed (2):
  1. README.md (+3 −1)
  2. config.json (+1 −1)
README.md CHANGED

```diff
@@ -1,8 +1,10 @@
 ---
+datasets: open-r1/OpenR1-Math-220k
 library_name: transformers
 model_name: 8expert_2granularity_0shared_top2_0.52b-Distill
 tags:
 - generated_from_trainer
+- open-r1
 - trl
 - sft
 licence: license
@@ -10,7 +12,7 @@ licence: license
 
 # Model Card for 8expert_2granularity_0shared_top2_0.52b-Distill
 
-This model is a fine-tuned version of [None](https://huggingface.co/None).
+This model is a fine-tuned version of [None](https://huggingface.co/None) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
```
config.json CHANGED

```diff
@@ -29,6 +29,6 @@
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
   "transformers_version": "4.49.0",
-  "use_cache": false,
+  "use_cache": true,
   "vocab_size": 99584
 }
```
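The `use_cache` flip above is a common end-of-training step: the key/value cache is typically disabled during fine-tuning (it is incompatible with gradient checkpointing) and re-enabled for inference, where it speeds up autoregressive generation. A minimal sketch of applying the same override to a config dict (the dict below mirrors the fields shown in this diff; the edit itself is an illustration, not part of the commit):

```python
import json

# Config fields as shown in the diff above.
config = {
    "tie_word_embeddings": False,
    "torch_dtype": "bfloat16",
    "transformers_version": "4.49.0",
    "use_cache": False,  # disabled during training
    "vocab_size": 99584,
}

# Re-enable the KV cache for inference, matching the commit's change.
config["use_cache"] = True

print(json.dumps(config, indent=2))
```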