Paper: [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099)
This is a merge of pre-trained language models created with [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the DARE TIES merge method, with mlabonne/NeuralBeagle14-7B as the base model.
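DARE (the drop-and-rescale step behind this method, introduced in the paper above) works on delta parameters (fine-tuned weights minus base weights): each delta is kept with probability equal to the density and the survivors are rescaled by the inverse of that density, so the expected delta is unchanged. A minimal PyTorch sketch of that step, assuming `base` and `finetuned` are matching weight tensors; the function name `dare_delta` is illustrative, not mergekit's API:

```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor,
               density: float = 0.53) -> torch.Tensor:
    """Drop-And-REscale: randomly keep a `density` fraction of the
    delta parameters and rescale them so the expected delta is preserved."""
    delta = finetuned - base
    # Bernoulli mask: each delta parameter survives with probability `density`.
    mask = torch.bernoulli(torch.full_like(delta, density))
    # Rescale survivors by 1/density so E[returned delta] == delta.
    return delta * mask / density
```

In the full DARE TIES merge, the sparsified deltas from each model are combined via TIES-style sign consensus, weighted by the per-model `weight` values, and added back to the base model; the `density: 0.53` values in the configuration below are the fraction of deltas kept.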
The following models were included in the merge:

* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralBeagle14-7B
dtype: bfloat16
merge_method: dare_ties
models:
  - model: mlabonne/NeuralBeagle14-7B
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: Intel/neural-chat-7b-v3-1
    parameters:
      density: 0.53
      weight: 0.3
  - model: HuggingFaceH4/zephyr-7b-beta
    parameters:
      density: 0.53
      weight: 0.3
parameters:
  int8_mask: true
```
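With this configuration saved as `config.yaml`, the merge can typically be reproduced with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./merged-model`). The output is a standard Transformers checkpoint; a minimal loading sketch, assuming the hypothetical output path `./merged-model`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local path: wherever mergekit wrote the merged weights.
model_path = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype in the config above
    device_map="auto",
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```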