Triangle104 committed on
Commit 7352ac6 · verified · 1 Parent(s): ea308d8

Update README.md

Files changed (1)
  1. README.md +30 -0
README.md CHANGED
@@ -15,6 +15,36 @@ tags:
  This model was converted to GGUF format from [`THUDM/GLM-Z1-9B-0414`](https://huggingface.co/THUDM/GLM-Z1-9B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-9B-0414) for more details on the model.

+ ---
+ ## Introduction
+ The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414
+ series, featuring 32 billion parameters. Its performance is comparable to OpenAI's
+ GPT series and DeepSeek's V3/R1 series, and it supports user-friendly local
+ deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data,
+ including a large amount of reasoning-type synthetic data, laying the foundation
+ for subsequent reinforcement learning extensions. In the post-training stage, in
+ addition to human preference alignment for dialogue scenarios, we also enhanced
+ the model's performance in instruction following, engineering code, and function
+ calling using techniques such as rejection sampling and reinforcement learning,
+ strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414
+ achieves good results in areas such as engineering code, artifact generation,
+ function calling, search-based Q&A, and report generation, and on some benchmarks
+ it even rivals larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
+
+ GLM-Z1-9B-0414 is a surprise. We employed the aforementioned series of techniques
+ to train a 9B small-sized model that maintains the open-source tradition. Despite
+ its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in
+ mathematical reasoning and general tasks. Its overall performance is already at a
+ leading level among open-source models of the same size. Especially in
+ resource-constrained scenarios, it achieves an excellent balance between
+ efficiency and effectiveness, providing a powerful option for users seeking
+ lightweight deployment.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
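
A minimal sketch of the install-and-run flow introduced above, assuming a hypothetical repo id (`Triangle104/GLM-Z1-9B-0414-GGUF`) and quant file name (`glm-z1-9b-0414-q4_k_m.gguf`); substitute the actual GGUF file shipped in this repo:

```bash
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the model from the CLI; --hf-repo/--hf-file fetch the GGUF directly from the Hub.
# The repo id and file name below are placeholders; replace them with this repo's actual quant.
llama-cli --hf-repo Triangle104/GLM-Z1-9B-0414-GGUF \
  --hf-file glm-z1-9b-0414-q4_k_m.gguf \
  -p "The meaning of life is"

# Or expose the model through llama.cpp's HTTP server with a 2048-token context window:
llama-server --hf-repo Triangle104/GLM-Z1-9B-0414-GGUF \
  --hf-file glm-z1-9b-0414-q4_k_m.gguf \
  -c 2048
```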