AiAsistent committed (verified)
Commit a8ae166 · Parent: a6565dc

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+231 -199)

README.md CHANGED

---
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- heretic
- uncensored
- decensored
- abliterated
---

# This is a decensored version of [AiAsistent/GLM-4.6V-Flash-heretic](https://huggingface.co/AiAsistent/GLM-4.6V-Flash-heretic), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0

## Abliteration parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | per layer |
| **attn.o_proj.max_weight** | 1.47 |
| **attn.o_proj.max_weight_position** | 27.57 |
| **attn.o_proj.min_weight** | 0.13 |
| **attn.o_proj.min_weight_distance** | 21.52 |
| **mlp.down_proj.max_weight** | 1.26 |
| **mlp.down_proj.max_weight_position** | 24.56 |
| **mlp.down_proj.min_weight** | 0.19 |
| **mlp.down_proj.min_weight_distance** | 22.61 |
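
These values parameterize how strongly the refusal direction is removed from each module across layers. As a rough, unofficial illustration (a hand-written sketch of one plausible reading of the parameters, not code taken from Heretic itself), they can be viewed as defining a per-layer ablation weight that peaks at `max_weight_position` and falls off to `min_weight` over `min_weight_distance` layers:

```python
# Illustrative only: one plausible interpretation of the Heretic parameters above.
# Heretic's actual per-layer weighting kernel may differ.
def ablation_weight(layer: int, max_weight: float, max_weight_position: float,
                    min_weight: float, min_weight_distance: float) -> float:
    """Interpolate from max_weight at the peak layer down to min_weight once
    the layer is min_weight_distance (or more) layers away from the peak."""
    distance = abs(layer - max_weight_position)
    if distance >= min_weight_distance:
        return min_weight
    return max_weight + (min_weight - max_weight) * (distance / min_weight_distance)

# Example: the attn.o_proj schedule from the table, evaluated for a 40-layer model.
attn_o_proj_weights = [ablation_weight(l, 1.47, 27.57, 0.13, 21.52) for l in range(40)]
```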

## Performance

| Metric | This model | Original model ([AiAsistent/GLM-4.6V-Flash-heretic](https://huggingface.co/AiAsistent/GLM-4.6V-Flash-heretic)) |
| :----- | :--------: | :---------------------------: |
| **KL divergence** | 0.0001 | 0 *(by definition)* |
| **Refusals** | 52/100 | 68/100 |

-----

# GLM-4.6V-Flash-Heretic Update Notice

This is a custom modification of the GLM-4.6V-Flash model, created personally by **AlexH**. Please note that this customization is **not part of the official release**.

The update includes unique enhancements that improve flexibility, reduce refusals, and expand the model's responsiveness in extreme or complex scenarios.

For a detailed explanation of how this update works, access to the code, and ongoing support, visit:
[Heretic LLM Universal Support for New Models via Dynamic Auto-Registration](https://llmresearch.net/threads/heretic-llm-universal-support-for-new-models-via-dynamic-auto-registration.275/)

This custom version demonstrates how user-driven modifications can safely extend the capabilities of large language models. By applying advanced configuration tweaks and dynamic auto-registration, the model now provides a more robust and versatile experience while remaining fully compatible with the Heretic ecosystem.
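
The linked post covers the details; purely as a generic illustration of what dynamic auto-registration with the `transformers` `Auto*` registries tends to look like (the class names and `model_type` below are placeholders, not the actual code used for this model):

```python
from transformers import AutoConfig, AutoModelForCausalLM, PretrainedConfig, PreTrainedModel

class MyCustomConfig(PretrainedConfig):
    model_type = "my-custom-model"  # placeholder model_type, not this model's real one

class MyCustomModel(PreTrainedModel):
    config_class = MyCustomConfig

    def __init__(self, config):
        super().__init__(config)
        # real layers would be defined here

# Register the new architecture so the generic Auto* classes can resolve it by model_type.
AutoConfig.register("my-custom-model", MyCustomConfig)
AutoModelForCausalLM.register(MyCustomConfig, MyCustomModel)
```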

# This is a decensored version of [zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0

## Abliteration parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 22.89 |
| **attn.o_proj.max_weight** | 1.45 |
| **attn.o_proj.max_weight_position** | 28.07 |
| **attn.o_proj.min_weight** | 1.40 |
| **attn.o_proj.min_weight_distance** | 13.38 |
| **mlp.down_proj.max_weight** | 1.19 |
| **mlp.down_proj.max_weight_position** | 24.88 |
| **mlp.down_proj.min_weight** | 0.82 |
| **mlp.down_proj.min_weight_distance** | 10.68 |

## Performance

| Metric | This model | Original model ([zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash)) |
| :----- | :--------: | :---------------------------: |
| **KL divergence** | 0.0000 | 0 *(by definition)* |
| **Refusals** | 63/100 | 100/100 |

-----
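
For context on how metrics like these are usually obtained (an illustrative sketch, not Heretic's own evaluation code): refusals are typically counted by sending a fixed set of, say, 100 prompts to each model and tallying refusal responses, while the KL divergence compares the two models' next-token distributions on harmless prompts to check that general behavior is preserved. A minimal single-prompt KL comparison might look like this:

```python
import torch
import torch.nn.functional as F

def first_token_kl(model_a, model_b, tokenizer, prompt: str) -> float:
    """KL(P_a || P_b) over the next-token distribution for a single text prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits_a = model_a(**inputs).logits[0, -1]  # last-position logits, model A
        logits_b = model_b(**inputs).logits[0, -1]  # last-position logits, model B
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    # F.kl_div(input, target) expects log-probs as input; log_target=True lets us
    # pass log-probs for the target as well.
    return F.kl_div(log_p_b, log_p_a, log_target=True, reduction="sum").item()
```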

# GLM-4.6V

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="40%"/>
</div>

This model is part of the GLM-V family of models, introduced in the paper [GLM-4.1V-Thinking and GLM-4.5V: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning](https://huggingface.co/papers/2507.01006).

- **GLM-4.6V Blog**: [https://z.ai/blog/glm-4.6v](https://z.ai/blog/glm-4.6v)
- **Paper**: [https://huggingface.co/papers/2507.01006](https://huggingface.co/papers/2507.01006)
- **GitHub Repository**: [https://github.com/zai-org/GLM-V](https://github.com/zai-org/GLM-V)
- **Online Demo**: [https://chat.z.ai/](https://chat.z.ai/)
- **API Access**: [Z.ai Open Platform](https://docs.z.ai/guides/vlm/glm-4.6v)
- **Desktop Assistant App**: [https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App](https://huggingface.co/spaces/zai-org/GLM-4.5V-Demo-App)

## Introduction

The GLM-4.6V series includes two versions: GLM-4.6V (106B), a foundation model designed for cloud and high-performance cluster scenarios, and GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications. GLM-4.6V scales its context window to 128K tokens in training and achieves SoTA performance in visual understanding among models of similar parameter scales. Crucially, we integrate native Function Calling capabilities for the first time, effectively bridging the gap between "visual perception" and "executable action" and providing a unified technical foundation for multimodal agents in real-world business scenarios.

![GLM-4.6V Benchmarks](https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/bench_46v.jpeg)

Beyond achieving SoTA performance across major multimodal benchmarks at comparable model scales, GLM-4.6V introduces several key features:

- **Native Multimodal Function Calling**
Enables native vision-driven tool use. Images, screenshots, and document pages can be passed directly as tool inputs without text conversion, while visual outputs (charts, search images, rendered pages) are interpreted and integrated into the reasoning chain. This closes the loop from perception to understanding to execution (a request sketch follows below).

- **Interleaved Image-Text Content Generation**
Supports high-quality mixed-media creation from complex multimodal inputs. GLM-4.6V takes a multimodal context (spanning documents, user inputs, and tool-retrieved images) and synthesizes coherent, interleaved image-text content tailored to the task. During generation it can actively call search and retrieval tools to gather and curate additional text and visuals, producing rich, visually grounded content.

- **Multimodal Document Understanding**
GLM-4.6V can process up to 128K tokens of multi-document or long-document input, directly interpreting richly formatted pages as images. It understands text, layout, charts, tables, and figures jointly, enabling accurate comprehension of complex, image-heavy documents without requiring prior conversion to plain text.

- **Frontend Replication & Visual Editing**
Reconstructs pixel-accurate HTML/CSS from UI screenshots and supports natural-language-driven edits. It detects layout, components, and styles visually, generates clean code, and applies iterative visual modifications through simple user instructions.

**This Hugging Face repository hosts the `GLM-4.6V-Flash` model, part of the `GLM-V` series.**
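
Purely as an illustration of the Native Multimodal Function Calling capability described above, here is a sketch of a vision-plus-tools request against an OpenAI-compatible endpoint (for example, one started with `vllm serve`); the base URL, the `search_product` tool, and the image URL are hypothetical, and the exact tool-calling behavior for GLM-4.6V-Flash should be checked against the GLM-V repository:

```python
# Hypothetical vision + tool-calling request against an OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "search_product",  # hypothetical tool for illustration
        "description": "Search a product catalog by name.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="zai-org/GLM-4.6V-Flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text", "text": "Find this product in the catalog."},
        ],
    }],
    tools=tools,
)
print(response.choices[0].message)
```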

## Usage

### Environment Installation

For `SGLang`:

```bash
pip install "sglang>=0.5.6.post1"
pip install nvidia-cudnn-cu12==9.16.0.29
sudo apt update
sudo apt install ffmpeg
```

For `vLLM`:

```bash
pip install "vllm>=0.12.0"
pip install "transformers>=5.0.0rc0"
```
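
As a starting point only (the exact flags depend on your hardware and on the serving recommendations in the GLM-V repository), an OpenAI-compatible server for the Flash model might be launched like this:

```bash
# Assumed invocation; check the GLM-V repository for the recommended serving flags.
vllm serve zai-org/GLM-4.6V-Flash \
    --max-model-len 65536 \
    --tensor-parallel-size 1 \
    --port 8000
```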
148
+
149
+ ### Quick Start with Transformers
150
+
151
+ ```python
152
+ from transformers import AutoProcessor, Glm4vForConditionalGeneration
153
+ import torch
154
+
155
+ MODEL_PATH = "zai-org/GLM-4.6V-Flash"
156
+ messages = [
157
+ {
158
+ "role": "user",
159
+ "content": [
160
+ {
161
+ "type": "image",
162
+ "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
163
+ },
164
+ {
165
+ "type": "text",
166
+ "text": "describe this image"
167
+ }
168
+ ],
169
+ }
170
+ ]
171
+ processor = AutoProcessor.from_pretrained(MODEL_PATH)
172
+ model = Glm4vForConditionalGeneration.from_pretrained(
173
+ pretrained_model_name_or_path=MODEL_PATH,
174
+ torch_dtype="auto",
175
+ device_map="auto",
176
+ )
177
+ inputs = processor.apply_chat_template(
178
+ messages,
179
+ tokenize=True,
180
+ add_generation_prompt=True,
181
+ return_dict=True,
182
+ return_tensors="pt"
183
+ ).to(model.device)
184
+ inputs.pop("token_type_ids", None)
185
+ generated_ids = model.generate(**inputs, max_new_tokens=8192)
186
+ output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
187
+ print(output_text)
188
+ ```

## Evaluation Settings

We primarily use vLLM as the backend for model inference. For faster and more reliable performance on video tasks, we employ SGLang. To reproduce our leaderboard results, we recommend the following decoding parameters:

- top_p: 0.6
- top_k: 2
- temperature: 0.8
- repetition_penalty: 1.1
- max_generate_tokens: 16K

For more usage details, please refer to our [GitHub repository](https://github.com/zai-org/GLM-V).
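
A minimal sketch of mapping these recommended decoding parameters onto vLLM's offline `SamplingParams` API (the prompt and the 16384-token budget are placeholders; multimodal inputs require additional setup as described in the GLM-V repository):

```python
# Sketch only: applies the decoding parameters listed above via vLLM's offline API.
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    top_p=0.6,
    top_k=2,
    temperature=0.8,
    repetition_penalty=1.1,
    max_tokens=16384,  # "16K" generation budget
)

llm = LLM(model="zai-org/GLM-4.6V-Flash")
outputs = llm.generate(["Describe the GLM-4.6V model family."], sampling)
print(outputs[0].outputs[0].text)
```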

## Fixed and Remaining Issues

Since the open-sourcing of GLM-4.1V, we have received extensive feedback from the community and are well aware that the model still has many shortcomings. In subsequent iterations, we attempted to address several common issues, such as repetitive thinking outputs and formatting errors, which have been mitigated to some extent in this new version.

However, the model still has several limitations and issues that we will fix as soon as possible:

1. Pure-text QA capabilities still have significant room for improvement. In this development cycle, our primary focus was on visual multimodal scenarios, and we will enhance pure-text abilities in upcoming updates.
2. The model may still overthink or even repeat itself in certain cases, especially when dealing with complex prompts.
3. In some situations, the model may restate the answer at the end.
4. There remain certain perception limitations, such as counting accuracy and identifying specific individuals, which still require improvement.

Thank you for your patience and understanding. We also welcome feedback and suggestions in the issue section; we will respond and improve as much as we can!

## Citation

If you use this model, please cite the following paper:

```bibtex
@misc{vteam2025glm45vglm41vthinkingversatilemultimodal,
      title={GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning},
      author={V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Bin Chen and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiale Zhu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingdao Liu and Mingde Xu and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Tianyu Tong and Wenkai Li and Wei Jia and Xiao Liu and Xiaohan Zhang and Xin Lyu and Xinyue Fan and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yanzi Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuting Wang and Yu Wang and Yuxuan Zhang and Zhao Xue and Zhenyu Hou and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
      year={2025},
      eprint={2507.01006},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.01006},
}
```