Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*
 
 ## llama.cpp quantization
-Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">
+Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization.
 Original model: https://huggingface.co/Qwen/Qwen3-0.6B
 Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project
 ## Prompt format
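The README lines above say the quantized files can be run with llama.cpp or any llama.cpp based project. As a minimal sketch (not the repository's official instructions), here is how one such project, llama-cpp-python, might load a local quant; the file name `Qwen3-0.6B-Q4_K_M.gguf` is a placeholder for whichever GGUF file you actually download from the repo.

```python
# Minimal sketch using llama-cpp-python, a llama.cpp based project.
# The GGUF file name is a placeholder; substitute the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-0.6B-Q4_K_M.gguf",  # placeholder local path to a GGUF quant
    n_ctx=2048,                           # context window size; adjust as needed
)

# Chat-style generation; llama-cpp-python generally picks up the chat template
# stored in the GGUF metadata.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize llama.cpp in one sentence."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```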