fuzzy-mittenz committed
Commit 1255b3f · verified · 1 parent: 2a5532a

Update README.md

Files changed (1): README.md (+1 -2)
README.md CHANGED
@@ -5,7 +5,6 @@ tags:
  - mergekit
  - merge
  - llama-cpp
- - gguf-my-repo
  license: apache-2.0
  model-index:
  - name: Qwen2.5-Dyanka-7B-Preview
@@ -109,7 +108,7 @@ model-index:
  ---
 
  # fuzzy-mittenz/Qwen2.5-Dyanka-7B-Preview-Q4_K_M-GGUF
- This model was converted to GGUF format from [`Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview`](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ This model was converted to GGUF format from [`Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview`](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) using llama.cpp
  Refer to the [original model card](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) for more details on the model.
 
  ## Use with llama.cpp
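The diff is truncated at the `## Use with llama.cpp` heading, so the usage commands themselves are not shown here. As a minimal sketch of what that section typically covers, assuming the repo ships a quant file named `qwen2.5-dyanka-7b-preview-q4_k_m.gguf` (the exact filename is an assumption, not taken from this diff):

```bash
# Install llama.cpp (provides the llama-cli and llama-server binaries).
brew install llama.cpp

# Run a one-off generation, fetching the GGUF directly from the Hub.
# NOTE: --hf-file points at an assumed filename for the Q4_K_M quant.
llama-cli --hf-repo fuzzy-mittenz/Qwen2.5-Dyanka-7B-Preview-Q4_K_M-GGUF \
  --hf-file qwen2.5-dyanka-7b-preview-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve the model over HTTP instead of running a single prompt.
llama-server --hf-repo fuzzy-mittenz/Qwen2.5-Dyanka-7B-Preview-Q4_K_M-GGUF \
  --hf-file qwen2.5-dyanka-7b-preview-q4_k_m.gguf \
  -c 2048
```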