# prototype-0.4x325
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Multi-SLERP merge method, with /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as the base model.
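Multi-SLERP generalizes spherical linear interpolation (SLERP) to more than two endpoints: each fine-tuned model is reduced to a task vector (its weights minus the base model's), and those vectors are averaged on a hypersphere rather than linearly, which preserves their magnitude better than a plain weighted sum. The sketch below illustrates the geometry only; it is a simplified assumption, not mergekit's actual implementation, and the function name `multislerp_sketch` is hypothetical.

```python
import torch

def multislerp_sketch(deltas, weights, normalize_weights=False, eps=1e-9):
    """Spherically average task vectors (fine-tune minus base).

    Hypothetical simplification for illustration -- not mergekit's code.
    """
    w = torch.tensor(weights, dtype=torch.float32)
    if normalize_weights:  # the config below sets normalize_weights: false
        w = w / w.sum().clamp_min(eps)
    flat = torch.stack([d.flatten().float() for d in deltas])  # (n, d)
    norms = flat.norm(dim=1, keepdim=True).clamp_min(eps)      # (n, 1)
    units = flat / norms                         # directions on the unit sphere
    mean = (w.unsqueeze(1) * units).sum(dim=0)   # weighted mean of directions
    mean = mean / mean.norm().clamp_min(eps)     # project back onto the sphere
    radius = (w * norms.squeeze(1)).sum()        # weighted delta magnitude
    return (radius * mean).view(deltas[0].shape)

# Per-tensor merge: add the spherical average of the deltas back to the base.
base, a, b = (torch.randn(4, 4) for _ in range(3))
merged = base + multislerp_sketch([a - base, b - base], [0.4, 0.4])
```

Under this sketch, two unnormalized weights of 0.4 keep roughly 0.8 of the average task-vector magnitude, pulling the result back toward the cogito base.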
### Models Merged
The following models were included in the merge:
- /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
- /workspace/cache/models--Ppoyaa--MythoNemo-L3.1-70B-v1.0/snapshots/faaa4e992764eb4667b8f541dcf75ce8b7aaadcc
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--Ppoyaa--MythoNemo-L3.1-70B-v1.0/snapshots/faaa4e992764eb4667b8f541dcf75ce8b7aaadcc
    parameters:
      weight: [0.4]
  - model: /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
    parameters:
      weight: [0.4]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
  source: base
chat_template: llama3
parameters:
  normalize_weights: false
  eps: 1e-9
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
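To reproduce the merge, save the YAML above to a file and run it through mergekit's `mergekit-yaml` entry point. The snippet below shows how the finished merge might be loaded for inference with transformers; `prototype-0.4x325` is assumed here to stand for the local merge output directory or a Hub repo id, and the generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prototype-0.4x325"  # assumption: local output dir or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # merge math ran in float32; bf16 suffices for inference
    device_map="auto",
)

# The config pins `chat_template: llama3`, so the standard Llama 3 chat format applies.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```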