AndesVL is a suite of mobile-optimized Multimodal Large Language Models (MLLMs) with 0.6B to 4B parameters.
Papers
DaMo: Data Mixing Optimizer in Fine-tuning Multimodal LLMs for Mobile Phone Agents
A^2FM: An Adaptive Agent Foundation Model for Tool-Aware Hybrid Reasoning
Models (21)
OPPOer/AndesVL-2B-Thinking • Image-Text-to-Text • 41 downloads • 5 likes
OPPOer/AndesVL-1B-Thinking • Image-Text-to-Text • 38 downloads • 4 likes
OPPOer/AndesVL-0_6B-Thinking • Image-Text-to-Text • 40 downloads • 4 likes
OPPOer/AndesVL-1B-Instruct • Image-Text-to-Text • 97 downloads • 5 likes
OPPOer/AndesVL-0_6B-Instruct • Image-Text-to-Text • 153 downloads • 5 likes
OPPOer/AndesVL-4B-Thinking • Image-Text-to-Text • 105 downloads • 11 likes
OPPOer/AndesVL-4B-Instruct • Image-Text-to-Text • 94 downloads • 8 likes
OPPOer/AndesVL-2B-Instruct • Image-Text-to-Text • 60 downloads • 7 likes
OPPOer/Qwen-Image-Pruning • Text-to-Image • 10 downloads • 87 likes
OPPOer/Qwen-Image-Edit-Pruning • Image-to-Image • 33 downloads
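All checkpoints are hosted on the Hub under the OPPOer namespace. Below is a minimal sketch of loading one of the Instruct models with transformers; it assumes the repos follow the standard remote-code pattern (AutoProcessor / AutoModelForCausalLM with trust_remote_code=True), so the exact classes, prompt format, and chat template should be confirmed on each model card. The model id, image path, and prompt are illustrative.

```python
# Minimal sketch: loading an AndesVL Instruct checkpoint from the Hub.
# Assumes the repo ships remote code compatible with the standard
# transformers Auto* classes; see the model card for the exact usage.
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image

model_id = "OPPOer/AndesVL-2B-Instruct"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
)

image = Image.open("example.jpg")  # hypothetical local image
inputs = processor(
    text="Describe this image.",
    images=image,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```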