---
library_name: transformers
pipeline_tag: text-generation
license: mit
tags:
- quantization
- sparsity
- llm
- qwen2
---

# Optimal Brain Restoration for Joint Quantization and Sparsification of LLMs

This repository contains a compressed version of the `Qwen/Qwen2.5-7B-Instruct` model, obtained by applying the **Optimal Brain Restoration (OBR)** framework presented in the paper [Optimal Brain Restoration for Joint Quantization and Sparsification of LLMs](https://huggingface.co/papers/2509.11177). OBR enables aggressive W4A4KV4 quantization with 50% sparsity on existing LLMs, delivering significant speedup and memory reduction.

**Code Repository:** [https://github.com/csguoh/OBR](https://github.com/csguoh/OBR)

## Paper Abstract

Recent advances in Large Language Model (LLM) compression, such as quantization and pruning, have achieved notable success. However, as these techniques gradually approach their respective limits, relying on a single method for further compression has become increasingly challenging. In this work, we explore an alternative solution by combining quantization and sparsity. This joint approach, though promising, introduces new difficulties due to the inherently conflicting requirements on weight distributions: quantization favors compact ranges, while pruning benefits from high variance. To attack this problem, we propose Optimal Brain Restoration (OBR), a general and training-free framework that aligns pruning and quantization by error compensation between both. OBR minimizes performance degradation on downstream tasks by building on a second-order Hessian objective, which is then reformulated into a tractable problem through surrogate approximation and ultimately reaches a closed-form solution via group error compensation. Experiments show that OBR enables aggressive W4A4KV4 quantization with 50% sparsity on existing LLMs, and delivers up to 4.72x speedup and 6.4x memory reduction compared to the FP16-dense baseline.
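
The "second-order Hessian objective" in the abstract is, at its core, the standard layer-wise reconstruction objective used by OBS/GPTQ-style compensation methods. The block below is only that generic reference form; OBR's surrogate approximation and closed-form group error compensation are derived in the paper and are not reproduced here.

```latex
% Generic layer-wise second-order reconstruction objective (OBS/GPTQ-style sketch;
% see the paper for OBR's exact surrogate and group error-compensation solution).
%   W        : original weights of a linear layer
%   \hat{W}  : compressed weights (quantized + pruned, after error compensation)
%   X        : calibration inputs to the layer; H = X X^T serves as a proxy Hessian
\min_{\hat{W}} \;\operatorname{tr}\!\left[(W - \hat{W})\, H\, (W - \hat{W})^{\top}\right],
\qquad H = X X^{\top},
\qquad \text{s.t.}\ \hat{W}\ \text{is 4-bit quantized and } 50\%\ \text{sparse}.
```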

## Model Details

### Model Description

This model is a 4-bit quantized, 50% unstructured-sparse version of `Qwen/Qwen2.5-7B-Instruct`. It leverages the Optimal Brain Restoration (OBR) framework, a training-free method that aligns pruning and quantization via error compensation. OBR aims to minimize performance degradation on downstream tasks by using a second-order Hessian objective, reformulated into a tractable problem via surrogate approximation and group error compensation. This specific model instance uses the FlatQuant rotation scheme.

- **Developed by:** Hang Guo, Yawei Li, Luca Benini
- **Model type:** Qwen2ForCausalLM (Text Generation)
- **Language(s) (NLP):** English (primary for evaluation benchmarks)
- **License:** MIT (see detailed explanation in the License section below)
- **Finetuned from model:** `Qwen/Qwen2.5-7B-Instruct`

### Model Sources

- **Repository:** [https://github.com/csguoh/OBR](https://github.com/csguoh/OBR)
- **Paper:** [https://huggingface.co/papers/2509.11177](https://huggingface.co/papers/2509.11177)
- **Hugging Face Collection (for OBR models):** [https://huggingface.co/collections/HangGuo/optimal-brain-resotration-689863c8687d3aeed27f9a96](https://huggingface.co/collections/HangGuo/optimal-brain-resotration-689863c8687d3aeed27f9a96)

## Uses

### Direct Use

This model is intended for fast and memory-efficient text generation tasks where a standard `Qwen2.5-7B-Instruct` model would typically be used, but with significantly reduced computational overhead and memory footprint. It is particularly suitable for environments with limited memory or computational resources, or for deploying LLMs at scale.

### Out-of-Scope Use

Although the compression aims to retain performance, aggressive compression levels might lead to subtle degradation in certain niche tasks or highly sensitive applications. Users should evaluate the model on their specific use cases. The base model's limitations (e.g., potential biases, factual inaccuracies) also apply.

## Bias, Risks, and Limitations

The base model, `Qwen/Qwen2.5-7B-Instruct`, may carry inherent biases from its training data. Compression techniques like quantization and sparsification, even when carefully applied with OBR, may also introduce minor performance fluctuations compared to the full-precision, dense model.

### Recommendations

Users should be aware of the trade-offs between model size/speed and the potential minor performance shifts introduced by compression. Thorough evaluation on target tasks and datasets is recommended to ensure the model meets specific requirements.

## How to Get Started with the Model

This model is compatible with the Hugging Face `transformers` library. To get started, load the model using `AutoModelForCausalLM` and `AutoTokenizer`.

**IMPORTANT:** For compatibility with Qwen2.5 series models, make sure your `transformers` version is 4.45.0 or newer (for example, `pip install "transformers>=4.45.0"`).

Here's a basic example for text generation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

# This model ID is part of the OBR Hugging Face collection, specifically for FlatQuant on Qwen2.5
model_id = "HangGuo/QWen2.5-7B-FlatQuant-OBR-GPTQ-W4A4KV4S50"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Use bfloat16 as indicated in config.json
    device_map="auto",
    trust_remote_code=True,
)
model.eval()

# Load generation configuration from the model's own generation_config.json
generation_config = GenerationConfig.from_pretrained(model_id)

# Example prompt for an instruct model using Qwen's chat template
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "Explain the concept of quantum entanglement in simple terms."},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        generation_config=generation_config,
        max_new_tokens=512,  # You can override defaults from generation_config.json if needed
        # Other generation parameters can be passed here or set in generation_config
    )

# Decode only the newly generated tokens, skipping the prompt
generated_text = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(f"Prompt: {messages[-1]['content']}\nGenerated: {generated_text}")
```
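
To sanity-check the memory savings on your own hardware, you can use the standard `transformers` utility below. Note that the reported number depends on how this checkpoint stores its compressed weights and will not necessarily match the paper's 6.4x figure, which was measured with the authors' kernels.

```python
# Report the in-memory size of the loaded model (standard transformers utility).
footprint_gb = model.get_memory_footprint() / 1024**3
print(f"Approximate memory footprint: {footprint_gb:.2f} GiB")
```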

For more detailed usage, including how to apply OBR to other base models (like Llama2, Mixtral) and integrate it into evaluation pipelines, please refer to the [official GitHub repository's "Get Started" section](https://github.com/csguoh/OBR#get_started).

## Training Details

### Training Data

The base model, `Qwen/Qwen2.5-7B-Instruct`, was trained on various datasets by its original authors. Optimal Brain Restoration (OBR) is a post-training compression technique and involves no additional training; it only requires a small calibration dataset (e.g., WikiText) to estimate the statistics used to restore performance and optimize quantization/sparsity.
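
As a rough illustration of what such a calibration set looks like, the sketch below pulls fixed-length WikiText-2 segments with the `datasets` library and tokenizes them. The actual calibration pipeline (dataset split, sample count, sequence length) used by the OBR code may differ; `n_samples` and `seq_len` here are illustrative choices, not the authors' settings.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Illustrative settings; the OBR repository defines its own calibration configuration.
n_samples, seq_len = 128, 2048

# Concatenate the raw WikiText-2 training text and cut it into fixed-length calibration chunks.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ids = tokenizer("\n\n".join(raw["text"]), return_tensors="pt").input_ids[0]

calibration_batches = [
    ids[i * seq_len : (i + 1) * seq_len].unsqueeze(0)  # shape (1, seq_len)
    for i in range(n_samples)
]
print(len(calibration_batches), calibration_batches[0].shape)
```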

### Training Procedure

The OBR framework is training-free. The procedure involves applying pruning and quantization and then compensating for the induced errors using a second-order Hessian objective. Specific parameters for quantization (W4A4KV4) and sparsity (50%) are detailed in the paper and the GitHub repository.
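
To make the compression target concrete, the sketch below applies naive per-channel 4-bit round-to-nearest quantization plus 50% unstructured magnitude pruning to a single weight matrix. This is only the uncompensated baseline that OBR improves upon; the Hessian-based group error compensation that actually defines OBR is not shown, and the function name `naive_w4_sparse50` is purely illustrative.

```python
import torch

def naive_w4_sparse50(weight: torch.Tensor) -> torch.Tensor:
    """Naive 4-bit RTN quantization + 50% magnitude pruning (no OBR compensation)."""
    # 50% unstructured sparsity: zero out the half of each row with the smallest magnitude.
    k = weight.shape[1] // 2
    threshold = weight.abs().kthvalue(k, dim=1, keepdim=True).values
    mask = weight.abs() > threshold

    # Symmetric per-output-channel 4-bit round-to-nearest quantization.
    scale = weight.abs().amax(dim=1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(weight / scale), -8, 7)
    return (q * scale) * mask

w = torch.randn(16, 64)
w_c = naive_w4_sparse50(w)
sparsity = (w_c == 0).float().mean().item()
rel_err = ((w - w_c).norm() / w.norm()).item()
print(f"sparsity: {sparsity:.2f}, relative reconstruction error: {rel_err:.3f}")
```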

## Evaluation

### Testing Data, Factors & Metrics

Evaluation was conducted on standard benchmarks for LLMs, including WikiText perplexity and various zero-shot accuracy tasks. The paper provides detailed quantitative results and efficiency comparisons (runtime, FLOPs, TOPS).
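
If you want to reproduce a quick perplexity check yourself, the sketch below is the commonly used chunked WikiText-2 recipe, reusing `model` and `tokenizer` from the getting-started snippet above. It is not necessarily the exact protocol (split, context length, stride) used in the paper, so treat the resulting number as indicative only.

```python
import torch
from datasets import load_dataset

# Reuses `model` and `tokenizer` loaded in the getting-started snippet above.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids.to(model.device)

seq_len, nlls = 2048, []
for i in range(0, ids.shape[1] - seq_len, seq_len):
    chunk = ids[:, i : i + seq_len]
    with torch.no_grad():
        # With labels == input_ids, transformers returns the mean next-token cross-entropy.
        nlls.append(model(chunk, labels=chunk).loss)

ppl = torch.exp(torch.stack(nlls).mean())
print(f"WikiText-2 perplexity: {ppl.item():.2f}")
```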

### Results

Experiments show that OBR enables aggressive W4A4KV4 quantization with 50% sparsity on existing LLMs, delivering up to 4.72x speedup and 6.4x memory reduction compared to the FP16-dense baseline, while maintaining strong performance on downstream tasks.

## Citation

If you find our work useful or helpful for your research, please feel free to cite our paper:

```bibtex
@article{guo2025optimal,
  title={Optimal Brain Restoration for Joint Quantization and Sparsification of LLMs},
  author={Hang Guo and Yawei Li and Luca Benini},
  year={2025},
  journal={arXiv preprint arXiv:2509.11177},
  eprint={2509.11177},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={http://arxiv.org/abs/2509.11177},
}
```

## License

This work is based on previous works including [QuaRot](https://github.com/spcl/QuaRot), [SpinQuant](https://github.com/facebookresearch/SpinQuant), and [FlatQuant](https://github.com/ruikangliu/FlatQuant). Users should follow the license of the corresponding backbone models.

For this specific model (`HangGuo/QWen2.5-7B-FlatQuant-OBR-GPTQ-W4A4KV4S50`), which is compressed using the FlatQuant rotation scheme, please refer to the FlatQuant GitHub repository for full license details.

## Model Card Contact

For any questions, feel free to contact [email protected].