Upload README.md with huggingface_hub
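The commit title is the default message huggingface_hub uses when a file is pushed with its upload API. Below is a minimal sketch of such an upload, not the uploader's actual script; the repo id is taken from the model card and authentication is assumed to come from a saved token (e.g. `huggingface-cli login`).

```python
from huggingface_hub import HfApi

# Sketch only: repo_id is inferred from the model card; the token is assumed
# to be available from a prior `huggingface-cli login`.
api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",   # local file to push
    path_in_repo="README.md",      # destination path inside the repo
    repo_id="twhoool02/Llama-2-7b-chat-hf-AutoGPTQ",
    repo_type="model",
)
```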
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 - llama
 - llama2
 base_model: meta-llama/Llama-2-7b-chat-hf
-model_name: Llama-2-7b-hf-AutoGPTQ
+model_name: Llama-2-7b-chat-hf-AutoGPTQ
 library:
 - Transformers
 - GPTQ
@@ -17,7 +17,7 @@ pipeline_tag: text-generation
 qunatized_by: twhoool02
 ---
 
-# Model Card for twhoool02/Llama-2-7b-hf-AutoGPTQ
+# Model Card for twhoool02/Llama-2-7b-chat-hf-AutoGPTQ
 
 ## Model Details
 
@@ -26,7 +26,7 @@ This model is a GPTQ quantized version of the meta-llama/Llama-2-7b-chat-hf mode
 - **Developed by:** Ted Whooley
 - **Library:** Transformers, GPTQ
 - **Model type:** llama
-- **Model name:** Llama-2-7b-hf-AutoGPTQ
+- **Model name:** Llama-2-7b-chat-hf-AutoGPTQ
 - **Pipeline tag:** text-generation
 - **Qunatized by:** twhoool02
 - **Language(s) (NLP):** en
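The card lists Transformers and GPTQ as the libraries and text-generation as the pipeline tag. A minimal loading sketch under those assumptions (standard Transformers GPTQ integration with `optimum` and `auto-gptq` installed, a CUDA device available, and an illustrative prompt):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Sketch only: assumes the repo's config carries the GPTQ quantization settings
# so the weights load through the regular from_pretrained path.
model_id = "twhoool02/Llama-2-7b-chat-hf-AutoGPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Matches the card's pipeline_tag: text-generation
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Tell me about quantization.", max_new_tokens=64)[0]["generated_text"])
```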