ekurtic committed
Commit 12a44c0 · verified · 1 Parent(s): c1bedff

Update README.md

Files changed (1): README.md (+6 -0)
README.md CHANGED
@@ -63,6 +63,12 @@ print(generated_text)
 vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
 
 
+## Creation
+
+We created this model using **MoE-Quant**, a library developed jointly with **ISTA** and tailored for the quantization of very large Mixture-of-Experts (MoE) models.
+
+For more details, please refer to the [MoE-Quant repository](https://github.com/IST-DASLab/MoE-Quant).
+
 ## Evaluation
 
 The model was evaluated on the OpenLLM leaderboard task (v1) via [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), and on popular reasoning tasks (AIME 2024, MATH-500, GPQA-Diamond) via [LightEval](https://github.com/huggingface/open-r1).
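
The context line in this hunk only points at the vLLM docs for OpenAI-compatible serving. As a minimal sketch (not part of the README itself), querying such a server from Python could look like the following; the model ID, port, and prompt are placeholders, and the server is assumed to have been launched separately (e.g. with `vllm serve <model-id>`):

```python
from openai import OpenAI

# Placeholder values: point the client at wherever the vLLM server is running
# (vLLM's OpenAI-compatible server listens on port 8000 by default) and use the
# model name the server was launched with.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="<model-id>",  # hypothetical placeholder for this repository's model name
    messages=[
        {"role": "user", "content": "Give a one-paragraph overview of Mixture-of-Experts models."}
    ],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```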