# merged-model
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with unsloth/DeepSeek-R1-Distill-Qwen-7B as the base.
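For intuition, the sketch below walks through the core TIES steps (compute task vectors against the base, trim each to the top-`density` fraction of entries by magnitude, elect a per-parameter sign, then average only the agreeing entries) on toy tensors. It is an illustrative simplification, not mergekit's implementation; the `ties_merge` helper and its arguments are hypothetical, though `density` and `weight` mirror the parameters in the config below.

```python
import torch

def ties_merge(base, finetuned, density=0.5, weights=None):
    """Illustrative TIES merge for a single weight tensor."""
    if weights is None:
        weights = [1.0] * len(finetuned)
    # 1. Task vectors: each fine-tuned tensor minus the base.
    task_vectors = [ft - base for ft in finetuned]
    # 2. Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(density * tv.numel()))
        threshold = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        trimmed.append(torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv)))
    # 3. Elect sign: per-parameter majority sign of the weighted task vectors.
    stacked = torch.stack([w * tv for w, tv in zip(weights, trimmed)])
    elected_sign = torch.sign(stacked.sum(dim=0))
    # 4. Disjoint merge: average only the entries that agree with the elected sign.
    agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
    merged_tv = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_tv

# Toy example: two "fine-tuned" tensors standing in for one layer's weights.
base = torch.randn(4, 4)
tuned_a = base + 0.1 * torch.randn(4, 4)
tuned_b = base + 0.1 * torch.randn(4, 4)
merged = ties_merge(base, [tuned_a, tuned_b], density=0.5, weights=[1.0, 1.0])
```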
### Models Merged

The following models were included in the merge:

- nvidia/AceMath-7B-Instruct
- Qwen/Qwen2.5-Math-7B-Instruct
### Configuration

The following YAML configuration was used to produce this model:

```yaml
# merge_ties.yml
# 1. Overall merge method: TIES (sign-elect sparse task arithmetic)
merge_method: ties                                      
# 2. Base model (all task vectors are computed relative to this checkpoint)
base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B   
# 3. Full models to merge (base first, then others)
models:
  - model: unsloth/DeepSeek-R1-Distill-Qwen-7B       # base has no extra params
  - model: nvidia/AceMath-7B-Instruct
    parameters:
      weight: 1.0
      density: 0.5
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      weight: 1.0
      density: 0.5
# 4. Global merge parameters
parameters:
  normalize: true        # normalize weights across models
  int8_mask: true        # mask small values when using int8 backing
# 5. Data type for merged tensors
dtype: bfloat16
```
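A config like this is typically run with mergekit's `mergekit-yaml` command-line entry point, which writes the merged checkpoint to an output directory. The snippet below is a minimal usage sketch for loading the result with the `transformers` library; the repo id `CK0607/Tie-Merged-Qwen-7B` is taken from this card's model tree, and the example prompt is arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "CK0607/Tie-Merged-Qwen-7B"  # assumed published repo id for this merge

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",           # requires the accelerate package
)

prompt = "Solve step by step: what is the integral of x^2 from 0 to 3?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```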