Update README.md
README.md CHANGED
@@ -15,6 +15,7 @@ tags:
 # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
 
 Directly quantized 4bit model with `bitsandbytes`.
+Original source: https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/tree/main used to create the 4bit quantized versions.
 
 We have a Google Colab Tesla T4 notebook for Mistral 7b v2 (32K context length) here: https://colab.research.google.com/drive/1Fa8QVleamfNELceNM9n7SeAGr_hT5XIn?usp=sharing
 
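For context, the README line above describes a checkpoint that was directly quantized to 4bit with `bitsandbytes`. Below is a minimal sketch of loading such a pre-quantized model via `transformers`; the repo id `unsloth/mistral-7b-v0.2-bnb-4bit` and the NF4 quant type are assumptions for illustration, as neither is stated in this diff.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed repo id for illustration; substitute the actual 4bit checkpoint.
model_id = "unsloth/mistral-7b-v0.2-bnb-4bit"

# NF4 is a common bitsandbytes 4bit setup; the card does not say which
# quant type was used, so this config is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)

prompt = "Mistral 7B v0.2 supports a 32K context length, which means"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading the weights in 4bit is where the memory savings advertised in the heading come from: a 7B model stored this way needs roughly 4 GB for weights, which fits comfortably on the 16 GB Tesla T4 used in the linked Colab notebook.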