---
license: apache-2.0
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- Qwen/Qwen3-8B
pipeline_tag: text-generation
tags:
- merge
---
> [!TIP]
> The Karcher merge method does not require the use of a base model. Click [here](https://github.com/arcee-ai/mergekit/blob/main/docs/merge_methods.md#karcher-mean-karcher) for details.
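For intuition, the Karcher (Fréchet) mean generalizes averaging to curved spaces: it repeatedly maps the inputs into the tangent space at the current estimate, averages there, and maps back. A minimal sketch on the unit sphere with NumPy (illustrative only; the function name is ours, and mergekit's actual implementation operates on model weight tensors, not 2-D points):

```python
import numpy as np

def karcher_mean(points, max_iter=1000, tol=1e-10):
    """Iterative Karcher/Fréchet mean of unit vectors on the sphere."""
    p = points[0] / np.linalg.norm(points[0])
    for _ in range(max_iter):
        # Log map: project each point into the tangent space at p.
        tangents = []
        for q in points:
            q = q / np.linalg.norm(q)
            cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
            theta = np.arccos(cos_t)
            if theta < 1e-12:
                tangents.append(np.zeros_like(p))
            else:
                tangents.append((q - cos_t * p) * theta / np.sin(theta))
        v = np.mean(tangents, axis=0)
        norm_v = np.linalg.norm(v)
        if norm_v < tol:          # converged: mean tangent is (near) zero
            return p
        # Exp map: step back onto the sphere along the mean tangent.
        p = np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v
    return p
```

The `max_iter: 1000` in the configuration below plays the same role as `max_iter` here: an upper bound on the fixed-point iterations.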
# *Model Highlights:*
- ***Merge method**: `karcher`*
- ***Highest precision**: `dtype: float32` + `out_dtype: bfloat16`*
- ***Brand-new chat template**: ensures correct operation in LM Studio*
- ***Context length**: `131072`*
## *Model Selection Table:*
|Model|Context|Uses Base Model|
|---|---|---|
|[Qwen3-8B-YOYO-karcher](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-karcher)|32K|NO|
|[Qwen3-8B-YOYO-karcher-128K](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO-karcher-128K)|128K|NO|
|[Qwen3-EZO-8B-YOYO-karcher](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-karcher)|32K|NO|
|[Qwen3-EZO-8B-YOYO-karcher-128K](https://huggingface.co/YOYO-AI/Qwen3-EZO-8B-YOYO-karcher-128K)|128K|NO|
> [!WARNING]
> *Models with `128K` context may show slight quality degradation. In most cases, prefer the native `32K` context!*
# *Parameter Settings*:
## *Thinking Mode:*
> [!NOTE]
> *`Temperature=0.6`, `TopP=0.95`, `TopK=20`, `MinP=0`.*
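As a rough illustration of how these four settings interact (a sketch only, not the exact filtering order used by LM Studio or any particular inference engine): temperature reshapes the distribution, then top-k, top-p, and min-p each prune the candidate set before renormalization.

```python
import numpy as np

def filter_logits(logits, temperature=0.6, top_k=20, top_p=0.95, min_p=0.0):
    """Apply temperature, then top-k / top-p / min-p pruning to raw logits."""
    z = logits / temperature
    probs = np.exp(z - np.max(z))       # numerically stable softmax
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]     # tokens sorted by probability, descending
    keep = np.zeros_like(probs, dtype=bool)
    keep[order[:top_k]] = True          # top-k: keep the k most likely tokens

    # top-p: additionally cut the tail once cumulative mass exceeds top_p
    cum = np.cumsum(probs[order])
    cut = np.searchsorted(cum, top_p) + 1
    topp_keep = np.zeros_like(keep)
    topp_keep[order[:cut]] = True
    keep &= topp_keep

    # min-p: drop tokens below min_p times the top token's probability
    keep &= probs >= min_p * probs.max()

    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()    # renormalize the survivors
```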
# *Configuration*:
*The following YAML configuration was used to produce this model:*
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
  - model: Qwen/Qwen3-8B
merge_method: karcher
parameters:
  max_iter: 1000
dtype: float32
out_dtype: bfloat16
tokenizer_source: Qwen/Qwen3-8B
```
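*Assuming mergekit is installed (`pip install mergekit`), a configuration like the one above can typically be applied with the `mergekit-yaml` entry point (output path here is a placeholder):*

```shell
# Save the YAML above as config.yaml, then run:
mergekit-yaml config.yaml ./Qwen3-8B-YOYO-karcher --cuda
```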