Update README.md
---
base_model: meta-llama/Llama-3.3-70B-Instruct
---
# Llama-3.3-70B-Instruct-FP8-KV

## Introduction

This model was built with Llama by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.

## Quantization Strategy

- ***Quantized Layers***: All linear layers excluding "lm_head"
- ***Weight***: FP8 symmetric per-tensor
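For intuition, the per-tensor symmetric scheme above means a single scale factor maps each weight tensor's absolute maximum onto the FP8 (e4m3) dynamic range. The sketch below is illustrative only and is not Quark's implementation: the `FP8_E4M3_MAX` constant and helper names are assumptions, real FP8 rounding is done by Quark and the hardware, and this code only shows the scale computation and range clipping.

```python
import numpy as np

# Largest finite value representable in the FP8 e4m3 format.
FP8_E4M3_MAX = 448.0

def fp8_symmetric_per_tensor_scale(w: np.ndarray) -> float:
    """Symmetric per-tensor scale: amax of the tensor mapped to FP8_E4M3_MAX."""
    return float(np.abs(w).max()) / FP8_E4M3_MAX

def fake_quantize(w: np.ndarray) -> np.ndarray:
    """Scale, clip to the FP8 range, then rescale.

    Note: no real FP8 rounding is applied here, so the round-trip is
    (nearly) lossless; actual quantization also rounds to FP8 values.
    """
    scale = fp8_symmetric_per_tensor_scale(w)
    clipped = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return clipped * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
print(np.allclose(fake_quantize(w), w))
```

Because the scale is chosen from the tensor's own absolute maximum, no value is clipped; the accuracy cost of real FP8 quantization comes from rounding to the coarse e4m3 grid, which this sketch deliberately omits.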