The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models
Paper: https://arxiv.org/abs/2203.07259
This model was obtained with the method from The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models.
It corresponds to the model presented in Table 3 - 12 Layers - 0% Sparsity - QAT, and it represents an upper bound on the performance of the corresponding pruned and quantized models:
neuralmagic/oBERT-12-downstream-pruned-unstructured-80-QAT-squadv1
neuralmagic/oBERT-12-downstream-pruned-block4-80-QAT-squadv1
neuralmagic/oBERT-12-downstream-pruned-unstructured-90-QAT-squadv1
neuralmagic/oBERT-12-downstream-pruned-block4-90-QAT-squadv1

SQuADv1 dev-set:
EM = 81.99
F1 = 89.06
Code: https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT
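Below is a minimal usage sketch for extractive question answering with the Hugging Face transformers pipeline, assuming the checkpoints listed above load as standard transformers QA models; any of the listed Hub IDs can be substituted, and the question/context strings are illustrative only.

```python
# Minimal sketch: run SQuAD-style question answering with one of the
# checkpoints listed above (assumes it loads as a standard transformers
# question-answering model).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-12-downstream-pruned-unstructured-80-QAT-squadv1",
)

result = qa(
    question="What does oBERT use to decide which weights to prune?",
    context=(
        "The Optimal BERT Surgeon (oBERT) applies scalable second-order "
        "pruning to compress large language models."
    ),
)
print(result["answer"], result["score"])  # predicted span and its confidence
```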
If you find the model useful, please consider citing our work.
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}