This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with Sao10K/L3-8B-Stheno-v3.2 + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as the base.
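For intuition, the snippet below is a minimal sketch of the per-tensor interpolation rule described in the Model Stock paper (average the fine-tuned weights, then interpolate toward the base with a coefficient derived from the angles between task vectors); it is illustrative only and not mergekit's actual implementation:

```python
# Illustrative sketch of Model Stock's per-tensor rule; not mergekit's actual code.
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor from N >= 2 fine-tuned checkpoints with its base counterpart.

    Model Stock averages the fine-tuned weights, then interpolates toward the base with
    t = N * cos(theta) / (1 + (N - 1) * cos(theta)), where cos(theta) is the mean pairwise
    cosine similarity between the task vectors (finetuned - base).
    """
    n = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]  # task vectors
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(-1.0, 1.0)
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```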
The following models were included in the merge:

* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 + kloodia/lora-8b-bio
* arcee-ai/Llama-3.1-SuperNova-Lite + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
* mlabonne/Hermes-3-Llama-3.1-8B-lorablated + kloodia/lora-8b-physic
* aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored + kloodia/lora-8b-medic
* ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 + Blackroot/Llama-3-8B-Abomination-LORA
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+kloodia/lora-8b-physic
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored+kloodia/lora-8b-medic
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama-3-8B-Abomination-LORA
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
normalize: false
int8_mask: true
dtype: bfloat16
```
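As a usage sketch, the merged weights can be loaded like any other Llama-3.1 checkpoint with transformers; the local path below is a placeholder for wherever the merge was written (for example, an output directory produced by mergekit's `mergekit-yaml` CLI), not this model's actual repository id:

```python
# Usage sketch; "./merged-model" is a hypothetical output path, not this model's repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Summarize the Model Stock merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```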
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 28.28 |
| IFEval (0-shot) | 71.41 |
| BBH (3-shot) | 32.53 |
| MATH Lvl 5 (4-shot) | 12.99 |
| GPQA (0-shot) | 8.61 |
| MuSR (0-shot) | 13.46 |
| MMLU-PRO (5-shot) | 30.70 |