Model Stock: All we need is just a few fine-tuned models
Paper: arXiv:2403.19522
Update: This merge turned out to be mediocre compared to its parent models, even on the IFEval task. Consider using those models instead. 😕
This model was created with mergekit using the Model Stock merge method, with Meta-Llama-3.1-8B-Instruct as the base model.
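For intuition, Model Stock (arXiv:2403.19522) averages the fine-tuned weights and then interpolates that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tuned weight deltas. The snippet below is a minimal per-tensor sketch of that idea, assuming the interpolation ratio reported in the paper; the function name and structure are illustrative and are not mergekit's actual implementation:

```python
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Per-tensor sketch of Model Stock: pull the plain average of the
    fine-tuned weights back toward the base, with a ratio t computed from
    the average pairwise cosine similarity of the fine-tuned deltas.
    Assumes at least two fine-tuned tensors of the same shape as `base`."""
    n = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]
    # Average pairwise cosine similarity between the fine-tuned deltas.
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(cos_vals).mean()
    # Interpolation ratio from the paper: t = N*cos(theta) / ((N-1)*cos(theta) + 1).
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base

# Toy example on a single weight matrix.
base = torch.randn(4, 4)
finetuned = [base + 0.1 * torch.randn(4, 4) for _ in range(4)]
merged = model_stock_merge(base, finetuned)
```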
The following models were included in the merge:
- Configurable-Llama-3.1-8B-Instruct
- Llama-3.1-Tulu-3-8B
- Llama-3.1-8B-Ultra-Instruct
- Llama-3.1-Storm-8B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Configurable-Llama-3.1-8B-Instruct
  - model: Llama-3.1-Tulu-3-8B
  - model: Llama-3.1-8B-Ultra-Instruct
  - model: Llama-3.1-Storm-8B
merge_method: model_stock
base_model: Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: true
  weight: 1.0
dtype: bfloat16
```
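The merged checkpoint is a standard Llama-3.1-style causal LM, so it can be loaded with transformers as usual. A minimal usage sketch, assuming the merge output path below (placeholder) points at the merged model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"  # placeholder: local merge output or a Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me three facts about llamas."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```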
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 26.62 |
| IFEval (0-shot) | 76.52 |
| BBH (3-shot) | 28.79 |
| MATH Lvl 5 (4-shot) | 16.54 |
| GPQA (0-shot) | 2.35 |
| MuSR (0-shot) | 4.70 |
| MMLU-PRO (5-shot) | 30.84 |