Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes. See raw diff.
- README.md +18 -35
- chat_template.jinja +1 -26
- config.json +3 -3
- generation_config.json +4 -5
- model-00001-of-00032.safetensors → model-00001-of-00041.safetensors +2 -2
- model-00002-of-00032.safetensors → model-00002-of-00041.safetensors +2 -2
- model-00003-of-00032.safetensors → model-00003-of-00041.safetensors +2 -2
- model-00004-of-00032.safetensors → model-00004-of-00041.safetensors +2 -2
- model-00005-of-00032.safetensors +0 -3
- model-00005-of-00041.safetensors +3 -0
- model-00006-of-00032.safetensors +0 -3
- model-00006-of-00041.safetensors +3 -0
- model-00007-of-00032.safetensors +0 -3
- model-00007-of-00041.safetensors +3 -0
- model-00008-of-00032.safetensors +0 -3
- model-00008-of-00041.safetensors +3 -0
- model-00009-of-00032.safetensors +0 -3
- model-00009-of-00041.safetensors +3 -0
- model-00010-of-00032.safetensors +0 -3
- model-00010-of-00041.safetensors +3 -0
- model-00011-of-00032.safetensors +0 -3
- model-00011-of-00041.safetensors +3 -0
- model-00012-of-00032.safetensors +0 -3
- model-00012-of-00041.safetensors +3 -0
- model-00013-of-00032.safetensors +0 -3
- model-00013-of-00041.safetensors +3 -0
- model-00014-of-00032.safetensors +0 -3
- model-00014-of-00041.safetensors +3 -0
- model-00015-of-00032.safetensors +0 -3
- model-00015-of-00041.safetensors +3 -0
- model-00016-of-00032.safetensors +0 -3
- model-00016-of-00041.safetensors +3 -0
- model-00017-of-00032.safetensors +0 -3
- model-00017-of-00041.safetensors +3 -0
- model-00018-of-00032.safetensors +0 -3
- model-00018-of-00041.safetensors +3 -0
- model-00019-of-00032.safetensors +0 -3
- model-00019-of-00041.safetensors +3 -0
- model-00020-of-00032.safetensors +0 -3
- model-00020-of-00041.safetensors +3 -0
- model-00021-of-00032.safetensors +0 -3
- model-00021-of-00041.safetensors +3 -0
- model-00022-of-00032.safetensors +0 -3
- model-00022-of-00041.safetensors +3 -0
- model-00023-of-00032.safetensors +0 -3
- model-00023-of-00041.safetensors +3 -0
- model-00024-of-00032.safetensors +0 -3
- model-00024-of-00041.safetensors +3 -0
- model-00025-of-00032.safetensors +0 -3
- model-00025-of-00041.safetensors +3 -0
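
The file list above shows the checkpoint being resharded from 32 to 41 `model-*.safetensors` files, each tracked as a Git LFS pointer carrying an `oid sha256:` and a `size`. Below is a minimal sketch for checking a downloaded shard against those pointer values; the helper name is ours, and the shard name and expected values are copied from the `model-00001-of-00041.safetensors` entry further down this diff.

```python
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Return True if the local file matches the oid/size recorded in its LFS pointer."""
    p = Path(path)
    if p.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values taken from the model-00001-of-00041.safetensors pointer shown below.
print(matches_lfs_pointer(
    "model-00001-of-00041.safetensors",
    "d8908fd48650169854b5ed815cb01bfc1741152cc58795ec99188862c6a29e11",
    3999619256,
))
```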
README.md
CHANGED
@@ -8,23 +8,6 @@ license: apache-2.0
 license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct/blob/main/LICENSE
 pipeline_tag: text-generation
 ---
-<div>
-<p style="margin-top: 0;margin-bottom: 0;">
-<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
-</p>
-<div style="display: flex; gap: 5px; align-items: center; ">
-<a href="https://github.com/unslothai/unsloth/">
-<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
-</a>
-<a href="https://discord.gg/unsloth">
-<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
-</a>
-<a href="https://docs.unsloth.ai/">
-<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
-</a>
-</div>
-</div>
-
 
 # Qwen3-Next-80B-A3B-Instruct
 <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
@@ -49,7 +32,7 @@ We are seeing strong performance in terms of both parameter efficiency and infer
 
 
 
-For more details, please refer to our blog post [Qwen3-Next](https://
+For more details, please refer to our blog post [Qwen3-Next](https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9cb7d27cd&from=research.latest-advancements-list).
 
 ## Model Overview
 
@@ -61,9 +44,9 @@ For more details, please refer to our blog post [Qwen3-Next](https://qwenlm.gith
 - Training Stage: Pretraining (15T tokens) & Post-training
 - Number of Parameters: 80B in total and 3B activated
 - Number of Paramaters (Non-Embedding): 79B
-- Number of Layers: 48
 - Hidden Dimension: 2048
--
+- Number of Layers: 48
+- Hybrid Layout: 12 \* (3 \* (Gated DeltaNet -> MoE) -> 1 \* (Gated Attention -> MoE))
 - Gated Attention:
 - Number of Attention Heads: 16 for Q and 2 for KV
 - Head Dimension: 256
@@ -178,7 +161,7 @@ print("content:", content)
 
 > [!Tip]
 > Depending on the inference settings, you may observe better efficiency with [`flash-linear-attention`](https://github.com/fla-org/flash-linear-attention#installation) and [`causal-conv1d`](https://github.com/Dao-AILab/causal-conv1d).
-> See the
+> See the links for detailed instructions and requirements.
 
 
 ## Deployment
@@ -190,52 +173,52 @@ For deployment, you can use the latest `sglang` or `vllm` to create an OpenAI-co
 [SGLang](https://github.com/sgl-project/sglang) is a fast serving framework for large language models and vision language models.
 SGLang could be used to launch a server with OpenAI-compatible API service.
 
-
+`sglang>=0.5.2` is required for Qwen3-Next, which can be installed using:
 ```shell
-pip install 'sglang[all]
+pip install 'sglang[all]>=0.5.2'
 ```
+See [its documentation](https://docs.sglang.ai/get_started/install.html) for more details.
 
 The following command can be used to create an API endpoint at `http://localhost:30000/v1` with maximum context length 256K tokens using tensor parallel on 4 GPUs.
 ```shell
-
+python -m sglang.launch_server --model-path Qwen/Qwen3-Next-80B-A3B-Instruct --port 30000 --tp-size 4 --context-length 262144 --mem-fraction-static 0.8
 ```
 
 The following command is recommended for MTP with the rest settings the same as above:
 ```shell
-
+python -m sglang.launch_server --model-path Qwen/Qwen3-Next-80B-A3B-Instruct --port 30000 --tp-size 4 --context-length 262144 --mem-fraction-static 0.8 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
 ```
 
 > [!Note]
-> The
+> The default context length is 256K. Consider reducing the context length to a smaller value, e.g., `32768`, if the server fails to start.
 
-
-> The default context length is 256K. Consider reducing the context length to a smaller value, e.g., `32768`, if the server fail to start.
+Please also refer to SGLang's usage guide on [Qwen3-Next](https://docs.sglang.ai/basic_usage/qwen3.html).
 
 ### vLLM
 
 [vLLM](https://github.com/vllm-project/vllm) is a high-throughput and memory-efficient inference and serving engine for LLMs.
 vLLM could be used to launch a server with OpenAI-compatible API service.
 
-
+`vllm>=0.10.2` is required for Qwen3-Next, which can be installed using:
 ```shell
-pip install vllm
+pip install 'vllm>=0.10.2'
 ```
+See [its documentation](https://docs.vllm.ai/en/stable/getting_started/installation/index.html) for more details.
 
 The following command can be used to create an API endpoint at `http://localhost:8000/v1` with maximum context length 256K tokens using tensor parallel on 4 GPUs.
 ```shell
-
+vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --port 8000 --tensor-parallel-size 4 --max-model-len 262144
 ```
 
 The following command is recommended for MTP with the rest settings the same as above:
 ```shell
-
+vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --port 8000 --tensor-parallel-size 4 --max-model-len 262144 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
 ```
 
 > [!Note]
-> The
+> The default context length is 256K. Consider reducing the context length to a smaller value, e.g., `32768`, if the server fails to start.
 
-
-> The default context length is 256K. Consider reducing the context length to a smaller value, e.g., `32768`, if the server fail to start.
+Please also refer to vLLM's usage guide on [Qwen3-Next](https://docs.vllm.ai/projects/recipes/en/latest/Qwen/Qwen3-Next.html).
 
 ## Agentic Use
 
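The README hunks above add concrete `sglang` and `vllm` launch commands, and both expose an OpenAI-compatible endpoint. Below is a minimal client sketch, assuming the vLLM server from the diff is already running on `localhost:8000` (swap the base URL to port 30000 for the SGLang command); the `openai` package and the dummy API key are our assumptions, not part of the README.

```python
from openai import OpenAI

# Talk to the locally served OpenAI-compatible endpoint started by `vllm serve` above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",
    messages=[{"role": "user", "content": "Summarize what gated attention is in one sentence."}],
    temperature=0.7,
    top_p=0.8,  # sampling values mirror the generation_config defaults in this diff
)
print(response.choices[0].message.content)
```
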
chat_template.jinja
CHANGED
@@ -14,14 +14,6 @@
 {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
 {%- endif %}
 {%- endif %}
-{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
-{%- for message in messages[::-1] %}
-{%- set index = (messages|length - 1) - loop.index0 %}
-{%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
-{%- set ns.multi_step_tool = false %}
-{%- set ns.last_query_index = index %}
-{%- endif %}
-{%- endfor %}
 {%- for message in messages %}
 {%- if message.content is string %}
 {%- set content = message.content %}
@@ -31,24 +23,7 @@
 {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
 {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
 {%- elif message.role == "assistant" %}
-{
-{%- if message.reasoning_content is string %}
-{%- set reasoning_content = message.reasoning_content %}
-{%- else %}
-{%- if '</think>' in content %}
-{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
-{%- set content = content.split('</think>')[-1].lstrip('\n') %}
-{%- endif %}
-{%- endif %}
-{%- if loop.index0 > ns.last_query_index %}
-{%- if loop.last or (not loop.last and reasoning_content) %}
-{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
-{%- else %}
-{{- '<|im_start|>' + message.role + '\n' + content }}
-{%- endif %}
-{%- else %}
-{{- '<|im_start|>' + message.role + '\n' + content }}
-{%- endif %}
+{{- '<|im_start|>' + message.role + '\n' + content }}
 {%- if message.tool_calls %}
 {%- for tool_call in message.tool_calls %}
 {%- if (loop.first and content) or (not loop.first) %}
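The template diff above removes the multi-step-tool bookkeeping (`ns.last_query_index`) and the `<think>`/`reasoning_content` branch, so assistant turns are emitted as plain `<|im_start|>assistant` blocks. A quick way to see what the updated template renders is a sketch like the following, assuming the tokenizer is loaded from this repository (or a local checkout containing the new `chat_template.jinja`).

```python
from transformers import AutoTokenizer

# Load the tokenizer bundled with this repo (or point at a local checkout of it).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Next-80B-A3B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
]

# tokenize=False returns the rendered string, making the <|im_start|>/<|im_end|> framing visible.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```
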
config.json
CHANGED
@@ -5,7 +5,7 @@
 "attention_bias": false,
 "attention_dropout": 0.0,
 "decoder_sparse_step": 1,
-"
+"torch_dtype": "bfloat16",
 "eos_token_id": 151645,
 "full_attention_interval": 4,
 "head_dim": 256,
@@ -87,9 +87,9 @@
 "router_aux_loss_coef": 0.001,
 "shared_expert_intermediate_size": 512,
 "tie_word_embeddings": false,
-"transformers_version": "4.57.
+"transformers_version": "4.57.3",
 "unsloth_fixed": true,
 "use_cache": true,
 "use_sliding_window": false,
 "vocab_size": 151936
-}
+}
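The config hunks pin `"torch_dtype": "bfloat16"` and record `transformers_version` 4.57.3. Below is a small sketch to confirm what an updated checkout reports; the local path is a placeholder for wherever this repo is cloned.

```python
from transformers import AutoConfig

# Point at a local directory containing the updated config.json.
config = AutoConfig.from_pretrained("./Qwen3-Next-80B-A3B-Instruct")

print(config.torch_dtype)           # expected: torch.bfloat16, per this diff
print(config.eos_token_id)          # 151645, an unchanged context line in the same hunk
print(config.transformers_version)  # expected: "4.57.3"
```
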
generation_config.json
CHANGED
@@ -2,13 +2,12 @@
 "bos_token_id": 151643,
 "do_sample": true,
 "eos_token_id": [
-
-
+151645,
+151643
 ],
-"
-"pad_token_id": 151654,
+"pad_token_id": 151643,
 "temperature": 0.7,
 "top_k": 20,
 "top_p": 0.8,
 "transformers_version": "4.57.0.dev0"
-}
+}
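The generation_config hunk fills in both EOS ids and moves `pad_token_id` from 151654 to 151643 (the BOS id). A sketch for checking the resolved values, again with a placeholder local path:

```python
from transformers import GenerationConfig

# Point at a local directory containing the updated generation_config.json.
gen_cfg = GenerationConfig.from_pretrained("./Qwen3-Next-80B-A3B-Instruct")

print(gen_cfg.eos_token_id)  # expected: [151645, 151643], per this diff
print(gen_cfg.pad_token_id)  # expected: 151643
print(gen_cfg.temperature, gen_cfg.top_k, gen_cfg.top_p)  # 0.7, 20, 0.8 (unchanged)
```
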
model-00001-of-00032.safetensors → model-00001-of-00041.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d8908fd48650169854b5ed815cb01bfc1741152cc58795ec99188862c6a29e11
+size 3999619256

model-00002-of-00032.safetensors → model-00002-of-00041.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c753c9bfaca220781d4030c3a99e69b4a256434c9d6ec223f5147edc265289df
+size 3999841784

model-00003-of-00032.safetensors → model-00003-of-00041.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:51aaa14dd50c5ab90c363227bfb1ac51182118f588e0141dcaadda012548407c
+size 3999515584

model-00004-of-00032.safetensors → model-00004-of-00041.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:82a33096134fb6e7751a423f142e79b1bbe89f45242b75397c2d7170fffa75bd
+size 3999842000
model-00005-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:cdf08ad65a27ac7a307e3fc6476a346d6339890a4bfd5c5488a23f5a47be0a1e
-size 5000253760

model-00005-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41abda7bf93f27c36cb28382114f5b33defa6dbfd154abb6886a6fec20f9e479
+size 3999842208

model-00006-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f8f6327e6c6e4aefaed45a41225829a89e39f1337ee4b86165fbcdd8be089ce2
-size 5000242936

model-00006-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7794475040ebd62c9a1f9c94c17e7c600873e14fcabc656b32faea09bc7fd2d
+size 3999853216

model-00007-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b8456f00ba6bfda31ff52ea3f9f2e4184fa1cbbc0a33414dc8a391604e60d62f
-size 4998483600

model-00007-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64d7e90d00ce15cc8bbf7677ad07bf3cffc9e90a1b777e3334fa15ffea219d6b
+size 3999841912

model-00008-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:737898dead41220f6f66f579404e6b312bb5bab2a9c186974c3875eb26423226
-size 4999918888

model-00008-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ace9b99e71490d619656956d457744f727647b3f36fa3d801798a5156599d35
+size 3999842000

model-00009-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:401f8c104d0899336b5968a1c8c5cc31a872d0a171f5e78a89e9f48c73dfc564
-size 4998485240

model-00009-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b1b374c9e6100077c446a293566c1f644475c94739ab08b7fa26ba847110216
+size 3999843192
model-00010-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9e2c7a15f319161a098295c6d0db138f98ed0b0b9c4a913faa1574ee3a7c387d
-size 5000245296

model-00010-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38e51dc850a39c325324e6ddd23347e6e3e76cba1341aeef2e1f260ca2cb9f49
+size 3999517808

model-00011-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:cbd1ccc07aa616fb304679fe82ca24ec7477d11824ebc530148b9c045f087588
-size 5000256048

model-00011-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c06719dc79fbbc796b8751ccbb29c8ecbdb686de9f89e3c741b5bbfec203cf0c
+size 4000181296

model-00012-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:21bde7507ccbeca97ecbc8a5ce001c1daae011d6bf0466bb8b7937fc9adf8cae
-size 4998485520

model-00012-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:349036c3e3f6a8645af47906dae7272bd5352f822a3c354a16ef4084b3555679
+size 3999843880

model-00013-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f3fdf4aedd2b00ff14b1605364e411d06033a0d42902eab93bda53dd27af3502
-size 4999918568

model-00013-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfab17d1535ef23e99e54ff3156ce1456aa5f0a4a93b281a7adb9d8b921dc829
+size 3999517472

model-00014-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e30f9097af85d1aeede2d745dbfcf1121fdd926ed625efb1b85e15aafe86c5e3
-size 4998485432

model-00014-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:318d10df78f1647189941884acc01616a6a78d5d45d74ea30955be7f509c580a
+size 3999843984
model-00015-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4ee3eb1637548252afddec7cb29d6f3d6c20f8c32cca30315fed46210c8a9359
-size 5000245056

model-00015-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa3400abde789ecca625b8cea37aa31232bf6696923f10981d028518a2393173
+size 4000181736

model-00016-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:eb9f2026ea0fbb241271e73fe08a441cc704c5048c34e85497ddc7a89d359f48
-size 5000256248

model-00016-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3776700f68222f0174bb14fea8480da41d928a1bfd6b5317c4fc2b14006d506e
+size 3999517256

model-00017-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7a1e9232ffa2e8547a154698cf4fe0b98a2759f1bf49c1e166be6953be3fa6eb
-size 5000245120

model-00017-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:052726cd55224d3d217dac93eaed060215b8029afb44861c0d927d87c6a046ab
+size 3999843880

model-00018-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c52af07c9c62f8329f713217c7483245fde7d94ce5223c5569539e9bafe43b50
-size 5000256184

model-00018-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd1b8d843e0013ed3beb6ba7fc36dc4b1f2df5d71b40520f82f369cd8fce5cba
+size 3999843880

model-00019-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fc666be81fbc0a03aa3f74fd081affddd4243c36d16f7ba3fb6a1118987b1c1f
-size 5000245176

model-00019-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfe5489c61334f9ce9bc6b369c3edf44420d7f585c6438f3afb98a120b26e6f3
+size 3999844096
model-00020-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a03fe3c80a570d0b7753914bd6f6dffef3c25d68cdb68982e4e736640838f8f9
-size 4998485264

model-00020-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eba3113cbd34304751e148502f314be5ceae6e7d830bd33f2c0cf3bb8dfb28c2
+size 3999855040

model-00021-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:623612584f02a299a71496c1f20a27598f6a93b165188a9e259cea45917577c4
-size 4999918824

model-00021-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ddd72e8ada4480e86e4230639210df1039802db9f78537673c19117d2e69b95
+size 3999843792

model-00022-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d24ba3084f39e424b4bbb9ee40ae973cc0c71265b8ebe85073445a63bf2f10c6
-size 4998485208

model-00022-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6fc2bb1762d34b8db83fb11566e462c9166e132bb2bfcef45a33a4bd41c02db
+size 3999843880

model-00023-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3244c319a2a6b1ad11afee98d4c2204932d69aad02278b3d711f0d72ceeaa022
-size 5000245288

model-00023-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:484bf8c327f7f7aaa20fbc4f6800d4aab7870749335b6f6bbfdb83570a1f66e2
+size 3999517464

model-00024-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e0c3dfefd91198834b9408948917bd5c42ea539ae4cbcdc4d028310e77d61c6f
-size 5000256056

model-00024-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfbb94709f5dac71ffe144ee3b9a976a49a2a9b9863caa29620fbc65820c2342
+size 3999844264
model-00025-of-00032.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e71007828ded7863db2b744d1ea3a36f06a773267261da4bdacf9cab76f9782b
-size 5000245296

model-00025-of-00041.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5799d19dcccfb17b108b5349900a4c42eb7bc241149cb7d1b38b48c707070cf
+size 4000181296
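
Every shard change above is just a Git LFS pointer update; the tensors live in the binary payloads the pointers reference. To peek inside one of the new 41-way shards without loading the whole model, a sketch using the `safetensors` package (the shard name is one of the files added in this diff):

```python
from safetensors import safe_open

# Open one of the new shards lazily and list a few of the tensors it stores.
with safe_open("model-00005-of-00041.safetensors", framework="pt", device="cpu") as f:
    print(f.metadata())                      # shard-level metadata, if any
    for name in list(f.keys())[:10]:         # first few tensor names
        print(name, f.get_slice(name).get_shape())
```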