This is a merge of pre-trained language models created using mergekit.

This model was merged using the SLERP merge method, with deepseek-ai/DeepSeek-R1-Distill-Llama-8B as the base model.

The following models were included in the merge:

* deepseek-ai/DeepSeek-R1-Distill-Llama-8B
* huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
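For intuition, SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along a great-circle arc rather than a straight line, which preserves the scale and geometry of the weights better than plain linear averaging. Below is a minimal sketch of the idea in Python; it is a simplification, and mergekit's actual implementation handles normalization and edge cases more carefully:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate values follow the
    great-circle arc between the two (flattened) tensors.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two tensors, computed from unit-normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    w_a = torch.sin((1.0 - t) * omega) / sin_omega
    w_b = torch.sin(t * omega) / sin_omega
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape).to(a.dtype)
```

In the configuration below, the interpolation factor t varies per layer and per module type (self-attention vs. MLP).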
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
        layer_range: [0, 32]
      - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
        layer_range: [0, 32]
merge_method: slerp
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
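The list-valued `t` entries are gradients: mergekit treats each list as evenly spaced anchor points and interpolates between them across the layer range, so each of the 32 layers gets its own interpolation factor. A rough sketch of that expansion (an approximation of mergekit's gradient handling, not its exact code):

```python
import numpy as np

anchors = [0, 0.5, 0.3, 0.7, 1]   # self_attn "value" list from the config
num_layers = 32                    # matches layer_range: [0, 32]

# Place the anchors evenly over [0, 1] and linearly interpolate a t
# value for each layer's relative position in the stack.
anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
layer_pos = np.linspace(0.0, 1.0, num=num_layers)
t_per_layer = np.interp(layer_pos, anchor_pos, anchors)

print(np.round(t_per_layer, 3))
```

With this config, self-attention weights in early layers stay close to the base model (t near 0) and drift toward the abliterated variant in later layers, the MLP gradient runs in the opposite direction, and all unfiltered tensors use a flat t of 0.5.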
Detailed results can be found on the Open LLM Leaderboard; the scores are summarized below.
| Metric | Value (%) |
|---|---:|
| Avg. | 19.57 |
| IFEval (0-shot) | 35.66 |
| BBH (3-shot) | 17.50 |
| MATH Lvl 5 (4-shot) | 33.16 |
| GPQA (0-shot) | 5.59 |
| MuSR (0-shot) | 6.12 |
| MMLU-PRO (5-shot) | 19.37 |