Update README.md
README.md
CHANGED
@@ -31,7 +31,6 @@ This is the official Hugging Face model card for **LLaVA-OneVision-1.5**, a nove
 3. **Ultra-Efficient Training Framework**
    Complete end-to-end training framework designed for maximum efficiency:
    - **$16K total budget** for full model training
-   - **45% HFU efficiency** on A100 GPUs ($0.6 per GPU/Hour)
    - Built on **MegatronLM** with support for **MoE**, **FP8**, and **long sequence parallelization**
    - Optimized codebase for cost-effective scaling
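For context on the figures in the hunk above, here is a minimal back-of-the-envelope sketch (my own arithmetic, not part of the model card) relating the quoted **$16K total budget** to the **$0.6 per GPU/Hour** A100 rate on the removed line; the 128-GPU cluster size is a purely hypothetical assumption for illustration:

```python
# Back-of-the-envelope arithmetic for the figures quoted in the diff above.
# Assumption: the $0.6/GPU-hour A100 rate (from the removed line) is the
# pricing behind the $16K total training budget.

TOTAL_BUDGET_USD = 16_000          # "$16K total budget" for full model training
A100_RATE_USD_PER_GPU_HOUR = 0.6   # "$0.6 per GPU/Hour" (removed line)

gpu_hours = TOTAL_BUDGET_USD / A100_RATE_USD_PER_GPU_HOUR
print(f"Implied A100 GPU-hours: {gpu_hours:,.0f}")   # ~26,667 GPU-hours

NUM_GPUS = 128  # hypothetical cluster size, not stated in the model card
wall_clock_days = gpu_hours / NUM_GPUS / 24
print(f"Wall-clock days on {NUM_GPUS} GPUs: {wall_clock_days:.1f}")  # ~8.7 days
```

Note that HFU (hardware FLOPs utilization) is independent of this cost arithmetic; it describes how efficiently the purchased GPU-hours are used, not how many of them the budget buys.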