# Quantization Dependencies

# Core quantization libraries
torchao>=0.1.0
bitsandbytes>=0.41.0

# Transformers with quantization support
transformers>=4.36.0

# Hugging Face Hub for model pushing
huggingface_hub>=0.19.0

# Optional: For better performance
accelerate>=0.24.0
safetensors>=0.4.0

# Optional: For monitoring
datasets>=2.14.0