See Less, See Right: Bi-directional Perceptual Shaping For Multimodal Reasoning
Abstract
Large vision-language models (VLMs) often benefit from intermediate visual cues, either injected via external tools or generated as latent visual tokens during reasoning, but these mechanisms still overlook fine-grained visual evidence (e.g., polylines in charts), generalize poorly across domains, and incur high inference-time cost. In this paper, we propose Bi-directional Perceptual Shaping (BiPS), which transforms question-conditioned masked views into bidirectional where-to-look signals that shape perception during training. BiPS first applies a KL-consistency constraint between the original image and an evidence-preserving view that keeps only question-relevant regions, encouraging coarse but complete coverage of supporting pixels. It then applies a KL-separation constraint between the original and an evidence-ablated view where critical pixels are masked so the image no longer supports the original answer, discouraging text-only shortcuts (i.e., answering from text alone) and enforcing fine-grained visual reliance. Across eight benchmarks, BiPS boosts Qwen2.5-VL-7B by 8.2% on average and shows strong out-of-domain generalization to unseen datasets and image types.
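Below is a minimal sketch of how the two shaping constraints described in the abstract could be realized as auxiliary training losses. It assumes the VLM exposes per-token answer logits for each of the three views (original, evidence-preserving, evidence-ablated); the function name, tensor shapes, hinge margin, and loss weighting are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def bips_shaping_losses(logits_orig, logits_keep, logits_ablate, margin=1.0):
    """
    Auxiliary perceptual-shaping losses over per-token answer distributions.

    logits_orig   : logits conditioned on the original image            [B, T, V]
    logits_keep   : logits on the evidence-preserving (masked-in) view  [B, T, V]
    logits_ablate : logits on the evidence-ablated (masked-out) view    [B, T, V]
    """
    log_p_orig = F.log_softmax(logits_orig, dim=-1)
    log_p_keep = F.log_softmax(logits_keep, dim=-1)
    log_p_abl  = F.log_softmax(logits_ablate, dim=-1)

    # KL-consistency: the evidence-preserving view should induce the same
    # predictive distribution as the full image (coarse but complete
    # coverage of the supporting pixels).
    kl_consistency = F.kl_div(log_p_keep, log_p_orig,
                              log_target=True, reduction="batchmean")

    # KL-separation: the evidence-ablated view should diverge from the
    # original distribution, so the model cannot answer from text alone.
    # A hinge on the divergence keeps this term bounded (assumed here).
    kl_sep = F.kl_div(log_p_abl, log_p_orig,
                      log_target=True, reduction="batchmean")
    separation_loss = F.relu(margin - kl_sep)

    return kl_consistency, separation_loss

# Illustrative combination with the standard answer cross-entropy;
# the weights alpha and beta are placeholders, not reported values.
# total_loss = ce_loss + alpha * kl_consistency + beta * separation_loss
```

In practice these terms would be added to the usual answer cross-entropy with small weights; whether BiPS maximizes the separation divergence directly or bounds it with a margin is not stated in this abstract, so the hinge formulation above is purely an assumption.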
Community
A framework that leverages programmatically generated paired views to train VLMs to focus on critical visual evidence while rejecting text-only shortcuts.
Automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- DiG: Differential Grounding for Enhancing Fine-Grained Perception in Multimodal Large Language Model (2025)
- From Illusion to Intention: Visual Rationale Learning for Vision-Language Reasoning (2025)
- CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization (2025)
- Artemis: Structured Visual Reasoning for Perception Policy Learning (2025)
- Perceptual-Evidence Anchored Reinforced Learning for Multimodal Reasoning (2025)
- Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens (2025)
- Visual Reasoning Tracer: Object-Level Grounded Reasoning Benchmark (2025)