---
language:
- en
- zh
library_name: mlx
license: mit
pipeline_tag: text-generation
base_model: zai-org/GLM-4.6
tags:
- mlx
---

**CURRENTLY UPLOADING**

**See GLM-4.6 6.5bit MLX in action - [demonstration video - coming soon](https://www.youtube.com/xcreate)**

*The q6.5bit quant achieves the lowest perplexity in our testing (tied with q8.5)*

| Quantization | Perplexity |
|:------------:|:----------:|
| **q2.5**     | 41.293     |
| **q3.5**     | 1.900      |
| **q4.5**     | 1.168      |
| **q5.5**     | 1.141      |
| **q6.5**     | 1.128      |
| **q8.5**     | 1.128      |

## Usage Notes

* Runs on a single M3 Ultra with 512 GB RAM using the [Inferencer app](https://inferencer.com)
* Memory usage: ~360 GB
* Expect ~16 tokens/s
* Quantized with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.27
* For more details see the [demonstration video - coming soon](https://www.youtube.com/xcreate) or visit [GLM-4.6](https://huggingface.co/zai-org/GLM-4.6)
* For a Python loading sketch, see the example at the end of this card

## Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying any information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.
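
## Example: Loading with mlx-lm

A minimal sketch of loading and prompting this quant with the standard `mlx-lm` Python API. Two assumptions not confirmed by this card: the quant runs outside the Inferencer app demonstrated above, and the repo path below is a hypothetical placeholder for this model's actual Hugging Face path.

```python
# Sketch only: assumes mlx-lm compatibility; this card itself demonstrates
# the Inferencer app, not mlx-lm.
from mlx_lm import load, generate

# Hypothetical repo path -- replace with this model's actual HF path.
model, tokenizer = load("your-org/GLM-4.6-mlx-6.5bit")

prompt = "Briefly explain what model quantization is."

# Wrap the prompt with the model's chat template when one is available.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        tokenize=False,
    )

# Streams tokens to stdout when verbose=True and returns the full text.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```

Per the usage notes above, expect roughly 16 tokens/s and ~360 GB of memory on an M3 Ultra.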