---
base_model: CohereForAI/c4ai-command-r7b-arabic-02-2025
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
tags:
- llama-cpp
- matrixportal
inference: false
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy](https://cohere.com/privacy). You’ll receive email updates about C4AI and Cohere research, events, products and services. You can unsubscribe at any time.
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: country
  I agree to use this model for non-commercial use ONLY: checkbox
---

# matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF

This model was converted to GGUF format from [`CohereForAI/c4ai-command-r7b-arabic-02-2025`](https://huggingface.co/CohereForAI/c4ai-command-r7b-arabic-02-2025) using llama.cpp via ggml.ai's [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.
Refer to the [original model card](https://huggingface.co/CohereForAI/c4ai-command-r7b-arabic-02-2025) for more details on the model.

## ✅ Quantized Models Download List

### 🔍 Recommended Quantizations

- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf) (best balance of speed and quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_0.gguf) (optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q8_0.gguf) (near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
| [Download](https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical.

# GGUF Model Quantization & Usage Guide with llama.cpp

## What is GGUF and Quantization?

**GGUF** (GPT-Generated Unified Format) is an efficient model file format developed by the `llama.cpp` team that:
- Supports multiple quantization levels
- Works cross-platform
- Enables fast loading and inference

**Quantization** converts model weights to lower-precision data types (e.g., 4-bit integers instead of 32-bit floats) to:
- Reduce model size
- Decrease memory usage
- Speed up inference
- (with minor accuracy trade-offs)
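To put rough numbers on those savings, here is a back-of-the-envelope size estimate. This is a minimal sketch: the ~7B parameter count and the effective bits-per-weight values are approximations, since k-quants mix several bit widths internally.

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8 bytes.
# The bits-per-weight values below are approximate effective rates.
PARAMS = 7e9  # assumption: ~7B parameters for this model

for name, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q5_K_M", 5.7),
                  ("Q4_K_M", 4.8), ("Q2_K", 2.6)]:
    size_gb = PARAMS * bpw / 8 / 1e9
    print(f"{name:<7} ~{size_gb:.1f} GB")
```

At roughly 4.8 bits per weight, `Q4_K_M` comes out near 4 GB versus ~14 GB for `F16`, which is the size/quality trade-off the table above reflects.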
## Step-by-Step Guide

### 1. Prerequisites

```bash
# System updates
sudo apt update && sudo apt upgrade -y

# Dependencies
sudo apt install -y build-essential cmake python3-pip

# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j4
```

### 2. Using Quantized Models from Hugging Face

My automated quantization script produces models with download URLs in this format:

```
https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf
```

Download your quantized model directly:

```bash
wget https://huggingface.co/matrixportal/c4ai-command-r7b-arabic-02-2025-GGUF/resolve/main/C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf
```

### 3. Running the Quantized Model

Basic usage:

```bash
./main -m C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf -p "Your prompt here" -n 128
```

Example with a creative writing prompt:

```bash
./main -m C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf \
  -p "[INST] Write a short poem about AI quantization in the style of Shakespeare [/INST]" \
  -n 256 -c 2048 -t 8 --temp 0.7
```

Advanced parameters:

```bash
./main -m C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf \
  -p "Question: What is the GGUF format? Answer:" \
  -n 256 -c 2048 -t 8 --temp 0.7 --top-k 40 --top-p 0.9
```

### 4. Python Integration

Install the Python package:

```bash
pip install llama-cpp-python
```

Example script:

```python
from llama_cpp import Llama

# Initialize the model
llm = Llama(
    model_path="C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf",
    n_ctx=2048,
    n_threads=8
)

# Run inference
response = llm(
    "[INST] Explain GGUF quantization to a beginner [/INST]",
    max_tokens=256,
    temperature=0.7,
    top_p=0.9
)

print(response["choices"][0]["text"])
```
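If you prefer not to hard-code prompt tags like `[INST]` (different model families expect different prompt formats), `llama-cpp-python` also provides an OpenAI-style chat API that applies the chat template stored in the GGUF metadata. A minimal sketch, assuming the `Q4_K_M` file downloaded above:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="C4AI-Command-R7B-Arabic-02-2025-q4_k_m.gguf",
    n_ctx=2048,
    n_threads=8
)

# create_chat_completion() formats the messages with the model's own
# chat template, so no manual prompt tags are required.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain GGUF quantization to a beginner."}
    ],
    max_tokens=256,
    temperature=0.7
)

print(response["choices"][0]["message"]["content"])
```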
## Performance Tips

1. **Hardware Utilization**:
   - Set the thread count with `-t` (typically the number of CPU cores)
   - Compile with CUDA/OpenCL for GPU support
2. **Memory Optimization**:
   - Lower-bit quantizations (such as q4_k_m) use less RAM
   - Adjust the context size with the `-c` parameter
3. **Speed/Accuracy Balance**:
   - Higher-bit quantization is slower but more accurate
   - Reduce randomness with `--temp 0` for consistent results

## FAQ

**Q: What quantization levels are available?**
A: Common options include q4_0, q4_k_m, q5_0, q5_k_m, and q8_0.

**Q: How much performance loss occurs with q4_k_m?**
A: Typically a 2-5% accuracy reduction, in exchange for a file roughly 4x smaller than the full-precision original.

**Q: How do I enable GPU support?**
A: Build with `make LLAMA_CUBLAS=1` for NVIDIA GPUs.

## Useful Resources

1. [llama.cpp GitHub](https://github.com/ggerganov/llama.cpp)
2. [GGUF Format Specs](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)
3. [Hugging Face Model Hub](https://huggingface.co/models)