# Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct GGUF Quantizations
Optimized GGUF quantization files for enhanced model performance.

Powered by Featherless AI - run any model you'd like for a simple, small fee.
## Available Quantizations
| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-IQ4_XS.gguf | 713.72 MB |
| Q2_K | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q2_K.gguf | 553.97 MB |
| Q3_K_L | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q3_K_L.gguf | 698.59 MB |
| Q3_K_M | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q3_K_M.gguf | 658.84 MB |
| Q3_K_S | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q3_K_S.gguf | 611.97 MB |
| Q4_K_M | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q4_K_M.gguf | 770.28 MB |
| Q4_K_S | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q4_K_S.gguf | 739.72 MB |
| Q5_K_M | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q5_K_M.gguf | 869.28 MB |
| Q5_K_S | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q5_K_S.gguf | 851.22 MB |
| Q6_K | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q6_K.gguf | 974.47 MB |
| Q8_0 | Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q8_0.gguf | 1259.88 MB |
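As a minimal sketch of how to use one of these files locally (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the repo id matches the model tree at the end of this card):

```python
# Minimal sketch: download one of the GGUF files above and load it with llama-cpp-python.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quantization listed in the table.
model_path = hf_hub_download(
    repo_id="featherless-ai-quants/Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-GGUF",
    filename="Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-Q4_K_M.gguf",
)

# Load the model and run a short chat completion.
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Кто ты?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```

Any other file from the table can be substituted for the `filename` argument; smaller quantizations (e.g. Q2_K) trade quality for memory, while Q8_0 is closest to the original weights.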
## Powered by Featherless AI

### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2,400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
Links: Get Started | Documentation | Models
## Model tree for featherless-ai-quants/Vikhrmodels-Vikhr-Llama-3.2-1B-Instruct-GGUF

- Base model: meta-llama/Llama-3.2-1B-Instruct
- Finetuned: Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct
