Update README.md

README.md CHANGED

@@ -5,41 +5,49 @@ base_model:

Before (old lines 5-45):

datasets:
- lmms-lab/LLaVA-One-Vision-1.5-Mid-Training-85M
- lmms-lab/LLaVA-OneVision-1.5-Insturct-Data
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
---

- 📚 [Paper](https://huggingface.co/papers/2509.23661) | 💻 [Code](https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5) | 🏠 [Project Page](https://huggingface.co/collections/lmms-lab/llava-onevision-15-68d385fe73b50bd22de23713) | 🚀 [Demo](https://huggingface.co/spaces/lmms-lab/LLaVA-OneVision-1.5)

- 1. **Superior Performance**
-    A family of fully open-source large multimodal models demonstrating **superior performance** across multiple multimodal benchmarks, **outperforming Qwen2.5-VL** in most evaluation tasks.

- 2. **High-Quality Data at Scale**
-    Meticulously curated **mid-training and SFT data** with rigorous filtering and quality control.
-    - Concept-balanced, highly diverse, high-quality caption data
-    - Comprehensive instruction fine-tuning data covering a wide range of tasks

- 4. **Fully Open Framework** for community access and reproducibility:
-    - ✅ High-quality mid-training & SFT data
-    - ✅ Complete training framework & code
-    - ✅ Training recipes & configurations
-    - ✅ Base & instruct model checkpoints
-    - ✅ Comprehensive training logs & metrics

## Models

After (new lines 5-53):

datasets:
- lmms-lab/LLaVA-One-Vision-1.5-Mid-Training-85M
- lmms-lab/LLaVA-OneVision-1.5-Insturct-Data
+ - HuggingFaceM4/FineVision
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
---

+ <div align="center">

+ <h1>LLaVA-OneVision-1.5: Fully Open-Source State-of-the-Art VLM Model</h1>

+ <p>
+   <a href="https://huggingface.co/papers/2509.23661">
+     <img alt="Paper" src="https://img.shields.io/badge/Paper-b31b1b?style=for-the-badge&logo=arXiv&logoColor=white">
+   </a>
+   <a href="https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5">
+     <img alt="Code" src="https://img.shields.io/badge/Code-181717?style=for-the-badge&logo=github&logoColor=white">
+   </a>
+ </p>

+ </div>

+ ## Introduction

+ LLaVA-OneVision-1.5 is a fully open-source family of large multimodal models (LMMs) built to democratize multimodal training. Trained on native-resolution images, it delivers state-of-the-art performance at substantially lower cost. The project also releases high-quality pretraining and SFT data, a complete and efficient training framework with recipes and configs, and comprehensive logs to support transparent, reproducible research.

+ #### **Superior Performance**
+ - The model leads on multiple multimodal benchmarks and generally surpasses Qwen2.5-VL.
+ - Training on native-resolution images significantly improves its visual understanding.

+ #### **High-Quality Data at Scale**
+ - The pretraining corpus comprises large-scale, concept-balanced, diverse, and high-quality captions curated with strict filtering and quality control.
+ - The instruction-tuning dataset is comprehensive and covers a wide range of tasks.

+ #### **Ultra-Efficient Training Framework**
+ - The end-to-end training cost is about $16,000 on A100 GPUs at roughly $0.60 per GPU-hour (a rough GPU-hour estimate is sketched after this list).
+ - The system is built on Megatron-LM with support for MoE, FP8, and long-sequence parallelism, and the codebase is optimized for cost-effective scaling.

+ #### **Fully Open Framework**
+ - The project releases high-quality pretraining and SFT datasets along with the complete training framework, configurations, and recipes.
+ - It also provides detailed training logs and metrics to enable reproducibility and community adoption.

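As context for the budget figures quoted above, the following back-of-the-envelope calculation converts the stated cost and hourly rate into total A100 GPU-hours. This is an editorial sanity check, not a number reported by the card or the paper.

```python
# Rough implied compute from the figures quoted above (editorial estimate only).
total_cost_usd = 16_000      # reported end-to-end training budget
usd_per_gpu_hour = 0.60      # reported A100 rental rate

gpu_hours = total_cost_usd / usd_per_gpu_hour
print(f"Implied compute: ~{gpu_hours:,.0f} A100 GPU-hours")  # ~26,667 GPU-hours
```

At that rate, the stated budget corresponds to on the order of 128 GPUs running for roughly nine days.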
## Models
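
Because the card declares `library_name: transformers` and `pipeline_tag: image-text-to-text`, a checkpoint from this family should be loadable through the standard `transformers` image-text-to-text pipeline. The sketch below is illustrative only: the model id and image URL are placeholders (the actual repository ids are not shown in this excerpt), and prompt formatting may vary between checkpoints.

```python
from transformers import pipeline

# Placeholder model id: substitute one of the released LLaVA-OneVision-1.5 checkpoints.
MODEL_ID = "lmms-lab/<llava-onevision-1.5-checkpoint>"

pipe = pipeline("image-text-to-text", model=MODEL_ID)

# Chat-style input: one user turn containing an image plus a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/your_image.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=128, return_full_text=False)
print(outputs[0]["generated_text"])
```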