# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the DELLA merge method, with Qwen/Qwen2.5-7B as the base model.
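In DELLA-style merging, each fine-tuned model contributes a delta (its difference from the base); low-magnitude delta entries are preferentially dropped, the survivors are rescaled, and the weighted result is added back to the base. The sketch below is a toy, single-tensor illustration of that drop-and-rescale idea, not mergekit's implementation: it uses a deterministic top-k mask where DELLA proper samples drops stochastically by magnitude, and the `density` / `lam` names simply mirror my reading of the configuration parameters shown further down.

```python
# Toy, single-tensor illustration of the drop-and-rescale idea behind
# DELLA-style merging. NOT mergekit's implementation: real DELLA drops
# delta entries stochastically with magnitude-based probabilities, while
# this sketch uses a deterministic top-k mask for simplicity.
import torch

def della_like_merge(base: torch.Tensor, tuned: torch.Tensor,
                     density: float = 1.0, lam: float = 0.9) -> torch.Tensor:
    delta = tuned - base                      # "task vector" of the fine-tune
    if density < 1.0:
        k = max(1, int(density * delta.numel()))
        # Keep the k largest-magnitude entries, zero the rest, and rescale
        # the survivors by 1/density to roughly preserve the expected scale.
        thresh = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        delta = torch.where(delta.abs() >= thresh, delta / density,
                            torch.zeros_like(delta))
    return base + lam * delta                 # lambda scales the merged delta

# With density=1 and lambda=0.9 (as in the config below), nothing is dropped:
# the instruct-model deltas are scaled by 0.9 and added onto the base weights.
```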
### Models Merged

The following models were included in the merge:
* Qwen/Qwen2.5-7B-Instruct
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      weight: 1
      density: 1
      lambda: 0.9
merge_method: della
base_model: Qwen/Qwen2.5-7B
parameters:
  weight: 1
  density: 1
  lambda: 0.9
  int8_mask: true
dtype: bfloat16
```
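To reproduce the merge, save the YAML above (e.g. as `config.yaml`) and pass it to mergekit's `mergekit-yaml` command along with an output directory. The snippet below is a minimal inference sketch using transformers; the prompt is illustrative, and it assumes the merged repository's tokenizer ships a chat template.

```python
# Minimal inference sketch for the merged model (illustrative prompt;
# assumes the saved tokenizer provides a chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Etherll/Qwen2.5-7B-della-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```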
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value | 
|---|---|
| Avg. | 31.83 | 
| IFEval (0-Shot) | 62.95 | 
| BBH (3-Shot) | 36.85 | 
| MATH Lvl 5 (4-Shot) | 30.59 | 
| GPQA (0-shot) | 8.95 | 
| MuSR (0-shot) | 12.89 | 
| MMLU-PRO (5-shot) | 38.75 | 