Extract+Think Model Card for markendo/llava-extract-qwen3-0.6B

This repository hosts the Extract-0.6B model, the perception module of the two-stage Extract+Think framework. It was introduced in the paper Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models.

Extract+Think addresses perception and reasoning bottlenecks in small multimodal models. Its core idea is visual extraction tuning: the model is explicitly trained to consistently extract instruction-relevant visual details across tasks, and these extractions are then passed to a separate reasoning stage.

Model details

Extract-0.6B handles the perception stage of the two-stage Extract+Think pipeline. For the reasoning stage, the authors primarily use Qwen3 models (1.7B and 4B).
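The division of labor between the two stages can be sketched as plain Python. This is a conceptual sketch only: `extract_visual_details` and `reason_over_extraction` are hypothetical stand-ins for the Extract-0.6B perception model and a Qwen3 reasoning model, and the actual pipeline is run through lmms-eval as shown in the Usage section below.

```python
# Conceptual sketch of the two-stage Extract+Think flow.
# Both functions are stubs standing in for real model calls.

def extract_visual_details(image, instruction):
    """Stage 1 (perception): the extraction model turns the image into
    instruction-relevant text. Stubbed here with a placeholder string."""
    return f"visual details relevant to: {instruction}"

def reason_over_extraction(extraction, instruction):
    """Stage 2 (reasoning): a text-only model answers the instruction
    using the extracted description, never the raw image."""
    return f"answer to '{instruction}' based on [{extraction}]"

def extract_and_think(image, instruction):
    # The image is only seen by stage 1; stage 2 reasons over text alone,
    # which is what lets a small text model carry the reasoning load.
    extraction = extract_visual_details(image, instruction)
    return reason_over_extraction(extraction, instruction)

print(extract_and_think(None, "What color is the car?"))
```

The key design point is that stage 2 never touches pixels, so the reasoning model can be swapped (e.g. Qwen3-1.7B vs. 4B) without retraining the perception module.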

Usage

The authors evaluate this model with the lmms-eval framework. Setup and evaluation instructions are detailed in the GitHub repository: clone the repository, install its dependencies, and integrate the custom evaluation files with lmms-eval.

For generating extracted visual information, the following command is provided:

cd lmms-eval
model_name=markendo/llava-extract-qwen3-0.6B
python -m lmms_eval \
    --model=llava_onevision \
    --model_args=pretrained=$model_name,conv_template=qwen_1_5,device_map=auto \
    --tasks=mmstar_prism_stage_1 \
    --batch_size=1 \
    --output_path results \
    --log_samples

Please refer to the GitHub repository for full setup instructions, including the second stage of reasoning.

Acknowledgments

This repository is built on top of LLaVA-OneVision and lmms-eval.

Citation

@article{endo2025downscalingintelligence,
  author    = {Endo, Mark and Yeung-Levy, Serena},
  title     = {Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models},
  journal   = {arXiv preprint},
  year      = {2025},
}