---
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-Math-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Benchmarks

|      Tasks      |Version|     Filter     |n-shot|   Metric  |   |Value |   |Stderr|
|-----------------|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|tinyBenchmarks   |    N/A|                |      |           |   |      |   |      |
| - tinyArc       |      0|none            |    25|acc_norm   |↑  |0.2407|±  |   N/A|
| - tinyGSM8k     |      0|flexible-extract|     5|exact_match|↑  |0.0055|±  |   N/A|
|                 |       |strict-match    |     5|exact_match|↑  |0.0055|±  |   N/A|
| - tinyHellaswag |      0|none            |    10|acc_norm   |↑  |0.3322|±  |   N/A|
| - tinyMMLU      |      0|none            |     0|acc_norm   |↑  |0.2839|±  |   N/A|
| - tinyTruthfulQA|      0|none            |     0|acc        |↑  |   NaN|±  |   N/A|
| - tinyWinogrande|      0|none            |     5|acc_norm   |↑  |0.4491|±  |   N/A|

### Merge Method

This model was merged using the NuSLERP merge method. The per-slice weights shift from favoring Qwen2.5-7B-Instruct in the early layers to favoring Qwen2.5-Math-7B-Instruct in the later layers.

### Models Merged

The following models were included in the merge:

* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Qwen/Qwen2.5-7B-Instruct
    layer_range: [0, 7]
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    layer_range: [0, 7]
  parameters:
    weight: [0.85, 0.15]
    nuslerp_flatten: false
- sources:
  - model: Qwen/Qwen2.5-7B-Instruct
    layer_range: [7, 14]
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    layer_range: [7, 14]
  parameters:
    weight: [0.6, 0.4]
    nuslerp_flatten: false
- sources:
  - model: Qwen/Qwen2.5-7B-Instruct
    layer_range: [14, 21]
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    layer_range: [14, 21]
  parameters:
    weight: [0.4, 0.6]
    nuslerp_flatten: false
- sources:
  - model: Qwen/Qwen2.5-7B-Instruct
    layer_range: [21, 28]
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    layer_range: [21, 28]
  parameters:
    weight: [0.3, 0.7]
    nuslerp_flatten: false
merge_method: nuslerp
dtype: float16
```
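Assuming mergekit is installed from PyPI, a configuration like the one above is typically applied with the `mergekit-yaml` CLI. The file and directory names below are illustrative:

```shell
# Install mergekit (assumes the PyPI package name "mergekit")
pip install mergekit

# Run the merge described by the YAML above.
# config.yaml  -> the configuration file; ./merged -> output directory (both names are placeholders)
# --cuda offloads tensor math to the GPU when one is available
mergekit-yaml config.yaml ./merged --cuda
```

The resulting directory can then be loaded with `transformers` like any other checkpoint, or uploaded to the Hugging Face Hub.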
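For intuition about the merge method: SLERP-style merging interpolates along the arc between two weight tensors rather than along the straight line a plain weighted average follows. The NumPy sketch below shows ordinary SLERP on flattened vectors; it is an illustration only, not mergekit's actual NuSLERP implementation (which works per-tensor and exposes extra options such as `nuslerp_flatten`), and the mapping from the config's `weight: [0.85, 0.15]` pairs to a single interpolation factor `t` is an assumption for the example:

```python
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight vectors.

    t = 0 returns w0, t = 1 returns w1; intermediate t follows the arc
    between the two directions instead of the straight chord.
    """
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two weight directions
    if theta < eps:                 # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * w0 + t * w1
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * w0 + s1 * w1

# Toy example: two orthogonal "tensors"; t = 0.15 would loosely correspond
# to a [0.85, 0.15] weighting toward the first model (an assumption, not
# mergekit's exact semantics).
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(a, b, 0.5)  # halfway along the arc between a and b
```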