# prototype-0.4x330
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Multi-SLERP merge method, with /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as the base.
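Multi-SLERP extends spherical linear interpolation (SLERP), which treats two weight tensors as points on a hypersphere and blends them along the geodesic between their directions rather than along a straight line, to a weighted average over more than two models. As a rough illustration only, here is a minimal sketch of plain two-tensor SLERP; the function name `slerp`, the flattening assumption, and the magnitude handling are illustrative choices, not mergekit's actual `multislerp` implementation (which handles multiple models and the base-model geometry):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_norm = np.linalg.norm(a)
    b_norm = np.linalg.norm(b)
    a_unit = a / (a_norm + eps)
    b_unit = b / (b_norm + eps)
    # angle between the two weight directions
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # nearly parallel directions: ordinary linear interpolation is stable
        return (1.0 - t) * a + t * b
    # interpolate along the geodesic on the unit hypersphere...
    direction = (np.sin((1.0 - t) * theta) * a_unit
                 + np.sin(t * theta) * b_unit) / np.sin(theta)
    # ...then restore an interpolated magnitude, since real weight tensors
    # are not unit vectors
    return direction * ((1.0 - t) * a_norm + t * b_norm)
```

In the two-model, equal-weight case configured below, this kind of interpolation reduces to taking the geodesic midpoint of the two fine-tunes' weight directions.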
### Models Merged
The following models were included in the merge:
- /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Diamond/snapshots/a7dfb66b4469a4c9ca07ff28bccc73a44797e76c
- /workspace/cache/models--TheDrummer--Anubis-70B-v1.1/snapshots/47ea1a3368e8d161b09acbc8c211ba4212e4b466
### Configuration
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Diamond/snapshots/a7dfb66b4469a4c9ca07ff28bccc73a44797e76c
    parameters:
      weight: [0.5]
  - model: /workspace/cache/models--TheDrummer--Anubis-70B-v1.1/snapshots/47ea1a3368e8d161b09acbc8c211ba4212e4b466
    parameters:
      weight: [0.5]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
  source: base
chat_template: llama3
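# Global merge options. With normalize_weights false, the two 0.5 weights
# above are used as given (they already sum to 1); eps is presumably a small
# numerical-stability constant for the spherical interpolation.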
parameters:
  normalize_weights: false
  eps: 1e-9
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
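Given this configuration saved locally (for example as `config.yaml`; the filename is illustrative), a merge of this shape is normally reproduced with mergekit's `mergekit-yaml` entry point. Note that the model paths above point into a local Hugging Face cache and would need to exist, or be replaced with Hub repo IDs, on your machine:

```sh
# standard mergekit CLI: config in, output directory out
# (--cuda offloads the merge computation to GPU and is optional)
mergekit-yaml config.yaml ./prototype-0.4x330 --cuda
```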